date
stringlengths 10
10
| nb_tokens
int64 60
629k
| text_size
int64 234
1.02M
| content
stringlengths 234
1.02M
|
---|---|---|---|
2018/03/15
| 338 | 1,207 |
<issue_start>username_0: Lets say this is the result of first select statement
```
topic
a
b
c
```
and this is the result of second select statement
```
user
d
e
```
How to combine this two colums into ONE column with a new column name? I am trying to use `UNION` however its not working
```
SELECT topic FROM topic_table WHERE topic LIKE '%'
UNION
SELECT user FROM user_table WHERE topic LIKE '%'
```
and my desired column would be like
```
desired_column
a
b
c
d
e
```<issue_comment>username_1: Your desired results should come from this query:
```
SELECT topic as desired_column FROM topic_table WHERE topic LIKE '%'
UNION ALL
SELECT user FROM user_table WHERE topic LIKE '%';
```
I changed the `UNION` to `UNION ALL` so the query does not remove duplicates (your question says nothing about that).
The `WHERE` clauses are rather non-sensical. It is better to just use `WHERE topic IS NOT NULL`.
Upvotes: 0 <issue_comment>username_2: name your topic columns equally:
```
SELECT topic as topic FROM a WHERE topic LIKE '%'
UNION ALL
SELECT other_topic as topic FROM b WHERE other_topic LIKE '%';
```
Playground:
<http://sqlfiddle.com/#!9/66ea3e/1>
Upvotes: 1
|
2018/03/15
| 337 | 1,249 |
<issue_start>username_0: I've started using Java, and this has made my Git usage a bit more of a pain. In many Java setups the files are nested very deep, for example:
```
src/main/java/com/example/Main.java
```
So I can't use tab-complete when Git adding something, and I have to type all these directories. Is there any sort of Git setting that can help with this? For example if Git knows that there's only one Main.java that has been added, then it would be nice if `git add Main.java` just worked, but it doesn't. Is there anything like that?<issue_comment>username_1: Your desired results should come from this query:
```
SELECT topic as desired_column FROM topic_table WHERE topic LIKE '%'
UNION ALL
SELECT user FROM user_table WHERE topic LIKE '%';
```
I changed the `UNION` to `UNION ALL` so the query does not remove duplicates (your question says nothing about that).
The `WHERE` clauses are rather non-sensical. It is better to just use `WHERE topic IS NOT NULL`.
Upvotes: 0 <issue_comment>username_2: name your topic columns equally:
```
SELECT topic as topic FROM a WHERE topic LIKE '%'
UNION ALL
SELECT other_topic as topic FROM b WHERE other_topic LIKE '%';
```
Playground:
<http://sqlfiddle.com/#!9/66ea3e/1>
Upvotes: 1
|
2018/03/15
| 1,043 | 4,152 |
<issue_start>username_0: I have a page where I update two fields and successfully get the state variables updated, to a point. The last update reflects in the render part, but can not pass beyond that. What am I doing wrong? Am I missing something? Suggestions?
When typing to the **height** field the corresponding state is updated, same goes for the **weight** field, and rendering this is ok. But when I try to make the **calculation**, I can not do so accurately since I am missing the last update on any one of the mentioned fields.
```
// index.jsx
import React from "react";
import ReactDOM from "react-dom";
class BmiReactForm extends React.Component {
constructor(props) {
super(props);
this.state = {
height: 0,
weight: 0,
factor: 'Unknown',
category: 'Undefined'
};
this.handleChange = this.handleChange.bind(this);
}
handleChange(input, value) {
this.setState({[input]: value});
const height = parseFloat(this.state.height).toFixed(2);
const weight = parseFloat(this.state.weight).toFixed(2);
this.bmiCalc(height, weight);
}
bmiCalc(height, weight) {
if ((height > 0) && (weight > 0)) {
let bmiNumber = weight/(height * height);
let bmiString = 'Undefined'
switch(true) {
case (bmiNumber < 15):
bmiString = 'Very severely underweight';
break;
case (bmiNumber >= 15 && bmiNumber < 16):
bmiString = 'Severely underweight';
break;
case (bmiNumber >= 16 && bmiNumber < 18.5):
bmiString = 'Underweight';
break;
case (bmiNumber >= 18.5 && bmiNumber < 25):
bmiString = 'Normal (healthy weight)';
break;
case (bmiNumber >= 25 && bmiNumber < 30):
bmiString = 'Overweight';
break;
case (bmiNumber >= 30 && bmiNumber < 35):
bmiString = 'Obese Class I (Moderately obese)';
break;
case (bmiNumber >= 35 && bmiNumber < 40):
bmiString = 'Obese Class II (Severely obese)';
break;
case (bmiNumber >= 40):
bmiString = 'Obese Class III (Very severely obese)';
break;
}
this.setState({factor: bmiNumber});
this.setState({category: bmiString});
} else {
this.setState({factor: 'Unknown'});
this.setState({category: 'Undefined'});
}
}
render() {
const factor = this.state.factor;
const category = this.state.category;
return (
Height this.handleChange('height', e.target.value)} />
Weight this.handleChange('weight', e.target.value)} />
Factor: {factor}
Category: {category}
);
}
}
ReactDOM.render(, document.getElementById("BmiReactForm"));
```
What is needed to **guarantee** the calculation will always take into account the latest update?
Thanks for your time and effort.<issue_comment>username_1: call `this.bmiCalc` in the callback to `setState`
e.g.
```
this.setState({[ input ]}: value }, () => {
const height = parseFloat(this.state.height).toFixed(2);
const weight = parseFloat(this.state.weight).toFixed(2);
this.bmiCalc(height, weight);
});
```
This ensures that the state transition occurs BEFORE you attempt a calculation.
If you're not using ES6 (you are, but just in case anyway)
```
this.setState({[ input ]}: value }, function() {
const height = parseFloat(this.state.height).toFixed(2);
const weight = parseFloat(this.state.weight).toFixed(2);
this.bmiCalc(height, weight);
});
```
Upvotes: 2 [selected_answer]<issue_comment>username_2: Use `this.setState( ( prevState, props ) => ({}) )` method at the place of `this.setState({})`. Because it's not ensuring that the you are getting a new updated state every time.
Upvotes: 0
|
2018/03/15
| 975 | 2,788 |
<issue_start>username_0: I have an array with the following format
```
let array = [{id: 1, desc: 'd1', children:[{id:1.1, desc:'d1.1', children:[]}]},
{id:2, desc:'d2', children:[] },
{id:3, desc:'d3', children:[] }];
```
Where each child is of the same time as the parent element. I would like it to transform it into an object with the format `{ [id]: {values} }`:
```
{
1: { id: 1, desc: 'd1', children: {1.1: {id:1.1, desc:'d1.1'}},
2: { id:2, desc:'d2' },
3: { id:3, desc:'d3' }
}
```
I tried in many ways but with no success. For instance:
```
let obj = array.map(a => mapArrayToObj(a));
mapArrayToObj = (e) => {
let obj = {[e.id]: e };
if(e.children.lenght > 0){
e.children = e.children.map(c => mapArrayToObj(c));
}
else{
return {[e.id]: e };
}
}
```
Is it even feasible in Javascript?<issue_comment>username_1: Let's try with this code...
```
let array = [{id: 1, desc: 'd1', children:[{id:1.1, desc:'d1.1', children:[]}]},
{id:2, desc:'d2', children:[] },
{id:3, desc:'d3', children:[] }]
let object = {}
array.forEach(item => {
let children = item.children
object[item.id] = item
object[item.id].children = {}
children.forEach(child => {
object[item.id].children[child.id] = child
})
})
console.log(object)
```
Result:
```
{ '1': { id: 1, desc: 'd1', children: { '1.1': [Object] } },
'2': { id: 2, desc: 'd2', children: {} },
'3': { id: 3, desc: 'd3', children: {} } }
```
Upvotes: 1 <issue_comment>username_2: You could use a recursive function which generates an object out of the given items without mutating the original data.
```js
function getObjects(array) {
var object = {};
array.forEach(function (item) {
object[item.id] = Object.assign({}, item, { children: getObjects(item.children) });
});
return object;
}
var array = [{ id: 1, desc: 'd1', children: [{ id: 1.1, desc: 'd1.1', children: [] }] }, { id: 2, desc: 'd2', children: [] }, { id: 3, desc: 'd3', children: [] }];
console.log(getObjects(array));
```
```css
.as-console-wrapper { max-height: 100% !important; top: 0; }
```
Upvotes: 3 [selected_answer]<issue_comment>username_3: You could use `reduce()` method to create recursive function and return object object as a result.
```js
let array = [{id: 1, desc: 'd1', children:[{id:1.1, desc:'d1.1', children:[]}]}, {id:2, desc:'d2', children:[] }, {id:3, desc:'d3', children:[] }]
function build(data) {
return data.reduce(function(r, {id, desc, children}) {
const e = {id, desc}
if(children && children.length) e.children = build(children);
r[id] = e
return r;
}, {})
}
const result = build(array)
console.log(result)
```
Upvotes: 1
|
2018/03/15
| 793 | 2,421 |
<issue_start>username_0: I have an ImageView:
```
```
And in Activity I do the following:
```
ImageView ivBgHand = (ImageView) findViewById(R.id.ivHand);
Bitmap bitmap = BitmapFactory.decodeFile(photoPath); // Path from storage
ivBgHand.setImageBitmap(bitmap);
```
I want to make zoom in and out using Matrix and x, y coordinates where it should be zoomed (aka pivotX, pivotY). Can anyone help me with this?
(I don't want to use ScaleAnimation, because the next step will be a handling changed Matrix)<issue_comment>username_1: Let's try with this code...
```
let array = [{id: 1, desc: 'd1', children:[{id:1.1, desc:'d1.1', children:[]}]},
{id:2, desc:'d2', children:[] },
{id:3, desc:'d3', children:[] }]
let object = {}
array.forEach(item => {
let children = item.children
object[item.id] = item
object[item.id].children = {}
children.forEach(child => {
object[item.id].children[child.id] = child
})
})
console.log(object)
```
Result:
```
{ '1': { id: 1, desc: 'd1', children: { '1.1': [Object] } },
'2': { id: 2, desc: 'd2', children: {} },
'3': { id: 3, desc: 'd3', children: {} } }
```
Upvotes: 1 <issue_comment>username_2: You could use a recursive function which generates an object out of the given items without mutating the original data.
```js
function getObjects(array) {
var object = {};
array.forEach(function (item) {
object[item.id] = Object.assign({}, item, { children: getObjects(item.children) });
});
return object;
}
var array = [{ id: 1, desc: 'd1', children: [{ id: 1.1, desc: 'd1.1', children: [] }] }, { id: 2, desc: 'd2', children: [] }, { id: 3, desc: 'd3', children: [] }];
console.log(getObjects(array));
```
```css
.as-console-wrapper { max-height: 100% !important; top: 0; }
```
Upvotes: 3 [selected_answer]<issue_comment>username_3: You could use `reduce()` method to create recursive function and return object object as a result.
```js
let array = [{id: 1, desc: 'd1', children:[{id:1.1, desc:'d1.1', children:[]}]}, {id:2, desc:'d2', children:[] }, {id:3, desc:'d3', children:[] }]
function build(data) {
return data.reduce(function(r, {id, desc, children}) {
const e = {id, desc}
if(children && children.length) e.children = build(children);
r[id] = e
return r;
}, {})
}
const result = build(array)
console.log(result)
```
Upvotes: 1
|
2018/03/15
| 450 | 1,982 |
<issue_start>username_0: is it currently possible to set up a whole cloudwatch stack including the cloudwatch agent via cloudformation ? I cant find a proper documentation and asking myself if its even possible.<issue_comment>username_1: Yes these types are available in CloudFormation
* AWS::CloudWatch::Alarm
* AWS::CloudWatch::Dashboard
Additionally, detailed monitoring can be set in other resource types (for example AWS::EC2::Instance)
Installing the Cloudwatch log agent would be done by configuring it in the AMI or installing as an action in the user data script
Upvotes: 2 <issue_comment>username_2: The following CloudFormation Resource creates a policy that will allow instances with this policy attached to their role to ship logs to CloudWatch:
```
"CloudWatchLogsPolicy": {
"Type" : "AWS::IAM::Policy",
"Properties" : {
"PolicyDocument" : {
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": [
"logs:CreateLogGroup",
"logs:CreateLogStream",
"logs:PutLogEvents",
"logs:DescribeLogStreams"
],
"Resource": arn:aws:logs:eu-west-1:123456789012:log-group:my-log-group:*
}
]
}
,
"PolicyName" : "CWLogPolicy",
"Roles": [{ "Ref": "IAMRole"}]
},
"DependsOn": ["IAMRole"]
}
```
You will need to update the Resource ARN to match your region, account id and log group name. The "Roles" and "DependsOn" assume there is an IAM role declared called "IAMRole" in the current stack.
When attaching a Role you have to use an AWS::IAM::InstanceProfile to create the link between the AWS::IAM::Role and the instance (or Autoscale group in my case).
Upvotes: 0
|
2018/03/15
| 928 | 3,028 |
<issue_start>username_0: We started getting the LPX-00209 problem when we recently upgraded our database system from Oracle llg to 12c. I'm trying to find the solution to why I'm now getting the error.
I have found what maybe the problem below. We moved to Oracle 12c and found that `l_xml.transform(XMLTYPE (l_xslt))` no longer works in later version of Oracle. Below is my procedure, trying to transform the XML using the XSL style sheet. Is there another function I can use instead of the l\_xml.transform(XMLTYPE (l\_xslt)). Once it's transformed the XML it then passes it back out and try's to then put this XMl into a clob using `p_resulting_xml.getclobval()` and pass to the procedure **write\_file\_email**.
Oracle support documentation below explaining the problem.
<https://support.oracle.com/knowledge/Oracle%20Database%20Products/1642080_1.html>
```
PROCEDURE end_workbook(p_report_clob IN OUT CLOB,
p_xml IN OUT XMLTYPE)
IS
l_xslt CLOB;
l_xml XMLTYPE;
BEGIN
Dbms_Lob.Writeappend(p_report_clob, 13, '');
l_xml := XMLTYPE(p_report_clob,NULL,0,1);
Dbms_Lob.Freetemporary (p_report_clob);
l_xslt := load_file('EXT_XSL_IN_DIR', 'ndu_sfich_report.xsl');
p_xml := l_xml.transform(XMLTYPE (l_xslt));
END end_workbook;
PROCEDURE write_file_email(p_filename IN VARCHAR2
,p_resulting_xml IN XMLTYPE
,p_first_visible_worksheet IN PLS_INTEGER DEFAULT 0)
IS
BEGIN
write_file (p_dir => pb_gen_report_dir -- VARCHAR2
,p_filename => p_filename -- VARCHAR2
,p_file => p_resulting_xml.getclobval() -- CLOB
,p_openmode => 'W' -- VARCHAR2
,p_first_visible_worksheet => p_first_visible_worksheet); --PLS_INTEGER
```
Error message below
```
15:02:40 Error: ORA-31011: XML parsing failed
ORA-19213: error occurred in XML processing at lines 1
LPX-00209: PI names starting with XML are reserved
ORA-06512: at "SYS.XMLTYPE", line 138
ORA-06512: at "PRBLK.NDU_REPORTING", line 330
ORA-06512: at "PRBLK.NDU_SFICH_REPORTING", line 1299 ORA-06512: at line 1
```<issue_comment>username_1: Make sure that your generated XML has no more than one XML declaration (`xml version="1.0" encoding="utf-8"?`) and, if it has one, that it only appear at the very top of the XML document. Your error message sounds as if a second XML declaration is being encountered and is being interpreted as a PI.
Upvotes: 2 <issue_comment>username_2: The l\_xml.transform doesn't work in later versions of oracle (i.e. 12c) unless you get a patch from Oracle.
The work around is below. You can either put this as an execute immediate statement in PL/SQL or in SQL near to where the error occurs.
```
ALTER SESSION SET sql_trace = true;
ALTER SESSION SET EVENTS='31151 trace name context forever, level 0x40000';
```
thanks Shaun
Upvotes: 1 [selected_answer]
|
2018/03/15
| 983 | 4,085 |
<issue_start>username_0: I am right now trying to send data from one activity to another and I now wonder why the int requestCode, int resultCode, gives me that the variables are never used plus why do they go red for that? it also gives me a reuired ; ? I am taking help of this tutorial <https://codelabs.developers.google.com/codelabs/android-room-with-a-view/index.html?index=..%2F..%2Findex#13> and I can't see where he adds the request code? maybe you guys can find it... thx for the help before hand
```
package com.example.jenso.paperseller;
import android.arch.lifecycle.Observer;
import android.arch.lifecycle.ViewModelProviders;
import android.content.Intent;
import android.os.Bundle;
import android.support.annotation.Nullable;
import android.support.design.widget.FloatingActionButton;
import android.support.v7.app.AppCompatActivity;
import android.support.v7.widget.LinearLayoutManager;
import android.support.v7.widget.RecyclerView;
import android.support.v7.widget.Toolbar;
import android.util.Log;
import android.view.View;
import android.widget.Toast;
import java.util.List;
import java.util.concurrent.Executor;
import java.util.concurrent.Executors;
public class MainActivity extends AppCompatActivity {
CustomerDatabase database;
FloatingActionButton fab;
private CustomerViewModel mCustomerViewModel;
final int NEW_CUSTOMER_ACTIVITY_REQUEST_CODE = 1;
private static final String TAG = "MainActivity";
@Override
protected void onCreate(Bundle savedInstanceState) {
super.onCreate(savedInstanceState);
setContentView(R.layout.activity_main);
Toolbar toolbar = (Toolbar) findViewById(R.id.toolbar);
setSupportActionBar(toolbar);
RecyclerView recyclerView = findViewById(R.id.recycler);
final PapperRecyclerAdapter adapter = new PapperRecyclerAdapter(this);
recyclerView.setAdapter(adapter);
recyclerView.setLayoutManager(new LinearLayoutManager(this));
mCustomerViewModel = ViewModelProviders.of(this).get(CustomerViewModel.class);
mCustomerViewModel.getmAllCustomers().observe(this, new Observer>() {
@Override
public void onChanged(@Nullable List customers) {
adapter.setCustomer(customers);
}
});
public void onActivityResult(int requestCode; int resultCode; Intent data) {
super.onActivityResult(requestCode, resultCode, data);
if (requestCode == NEW\_CUSTOMER\_ACTIVITY\_REQUEST\_CODE && resultCode == RESULT\_OK) {
Customer customer = new Customer(data.getStringExtra(CreateCustomer.EXTRA\_REPLY), data.getStringExtra(CreateCustomer.SECOND\_REPLY),data.getStringExtra(CreateCustomer.THIRD\_REPLY),data.getStringExtra(CreateCustomer.FOURTH\_REPLY));
mCustomerViewModel.insert(customer);
} else {
Toast.makeText(
getApplicationContext(),
R.string.empty\_not\_saved,
Toast.LENGTH\_LONG).show();
}
}
recyclerView.setAdapter(adapter);
fab = findViewById(R.id.fab);
fab.setOnClickListener(new View.OnClickListener() {
@Override
public void onClick(View v) {
Log.d(TAG, "onClick: Do this when you click");
Intent intent = new Intent(MainActivity.this, CreateCustomer.class);
startActivityForResult(intent, NEW\_CUSTOMER\_ACTIVITY\_REQUEST\_CODE);
}
});
}
}
```<issue_comment>username_1: It's simple if you want to get some result from next activity you need to use `startActivityForResult(intent,requestCode );`
The **Request code** is basically the id of sender for example you may have two buttons that are calling two different activites to get result from them in this case if you use same requestCode you will not understand which activity is sending you data back. To distinguish between both you use different requestCodes.
Now in onActivityResult you first check the requestCode and then check ResultCode.
**ResultCode** is indicator whether the acitvity has provided you ok result or not.
Upvotes: 0 <issue_comment>username_2: It seems that you have written onActivityResult method inside onCreate method, please write it outside the onCreate method.
Hope this works..!
Upvotes: 2 [selected_answer]
|
2018/03/15
| 303 | 1,258 |
<issue_start>username_0: I have an array which contains the FileNames as New1, New2.... etc
I'm trying to code a function which returns me the new file name which is not present in the array and is the next consecutive number.
Let's say I have array as
`Let array = [{"Name" : "New"},{"Name" : "New1"},{"Name" : "New3"}]`
Then I want next new file name to be **New2**
How can I do this in java script?<issue_comment>username_1: It's simple if you want to get some result from next activity you need to use `startActivityForResult(intent,requestCode );`
The **Request code** is basically the id of sender for example you may have two buttons that are calling two different activites to get result from them in this case if you use same requestCode you will not understand which activity is sending you data back. To distinguish between both you use different requestCodes.
Now in onActivityResult you first check the requestCode and then check ResultCode.
**ResultCode** is indicator whether the acitvity has provided you ok result or not.
Upvotes: 0 <issue_comment>username_2: It seems that you have written onActivityResult method inside onCreate method, please write it outside the onCreate method.
Hope this works..!
Upvotes: 2 [selected_answer]
|
2018/03/15
| 901 | 2,952 |
<issue_start>username_0: I want to define a class like this:
```
class MyClass[C,X](
val states: C,
val transform: C => X
)
```
But `X` can only be equals to `C` or a container for `C`, like `List[C]` or `Set[C]` -- it does not make sense for the problem at hand if `X` is defines as anything else.
Is there a way to impose this restriction in Scala?<issue_comment>username_1: **EDIT**
Assuming XY-problem. Now this posting has two answers:
1. The cumbersome solution that relies on subclassing
2. Typeclass-based solution that probably solves the actual `X`
---
**Subclassing-solution**
If you run
```
List(List(0), Set(0))
```
in the interpreter, you will see that `List` and `Set` unify only at `Iterable`.
Thus, the most specific restriction you can make is:
```
import scala.language.higherKinds
class MyClass[C, Container[X] <: collection.immutable.Iterable[X]] (
val states: C,
val transform: C => Container[C]
)
```
I wouldn't advise to make it *this* generic though. If in doubt, take `List` first, generalize later only if it's *actually* necessary.
---
**Typeclass-solution**
From your comment, it looks as if it's an XY-problem, and what you actually want is a type constructor `F` with an appropriate type-class.
Define your type-class first, for example with the constraint that one sholud be able to iterate through `F`:
```
trait MyTC[F[_]] {
def hasIterator[X](fx: F[X]): Iterator[X]
}
```
Then define `MyClass` for arbitrary `F` for which there is an instance of the typeclass:
```
class MyClass[X, F[_] : MyTC] {
val states: X
val transform: X => F[X]
}
```
Then simply define instances of `MyTC` for `List`, `Set`, or `Id`:
```
implicit object ListMyTC extends MyTC[List] {
def hasIterator[X](lx: List[X]): Iterator[X] = lx.iterator
}
type Id[X] = X
implicit object ListMyTC extends MyTC[Id] {
def hasIterator[X](x: X): Iterator[X] = Iterator(x)
}
```
And then you can use `MyClass` with `List`s or with `Id`:
```
val m1 = new MyClass[Int, List] { ... } // Integer states, transforms to List
val m2 = new MyClass[String, Id] { ... } // String-labeled states, transforms to other `String`
```
etc.
The idea is essentially to replace your attempt to extensionally enumerate all the container types for which you think your construction might work by an intensional definition, that accepts any `F` that can prove that it satisfies all the requirements in `MyTC` by providing an instance of `MyTC[F]`.
Upvotes: 1 <issue_comment>username_2: You can try
```
import scala.language.higherKinds
class MyClass[C, F[_]](
val states: C,
val transform: C => F[C]
)
type Id[A] = A
new MyClass[Int, Set](1, Set(_)) // Set[Int] is a container for Int
new MyClass[String, List]("a", List(_)) // List[String] is a container for String
new MyClass[Boolean, Id](true, x => x) // Id[Boolean] is Boolean itself
```
Upvotes: 2
|
2018/03/15
| 283 | 1,008 |
<issue_start>username_0: My Android project in Xamarin keeps failing to build due to below error
invalid symbol :"default" default.png res/drawable/default.png
not sure what is causing this to happen.
Any help will be greatly appreciated.
Thanks
Ravi<issue_comment>username_1: [default](https://learn.microsoft.com/en-us/dotnet/csharp/language-reference/keywords/default) is c# keyword, you need rename your picture's name.
Upvotes: 3 <issue_comment>username_2: As mentioned before default is the keywords and that why it's failing . I wanted to share the lists so you may save your time by not using other keyword . We have quite a large number of keywords in c#:
Please find the few list mentioned in link below:
<http://azuliadesigns.com/list-keywords-reserved-words/>
<https://msdn.microsoft.com/en-us/library/x53a06bb(v=vs.120).aspx>
<http://www.dotnetfunda.com/codes/show/945/list-of-csharp-reserved-keyword>
So if u use any of the reserved names u may come across this kind of error.
Upvotes: 1
|
2018/03/15
| 355 | 1,120 |
<issue_start>username_0: I wrote a program with c++ and OpenCV 3.4.0 for connected components labeling.
I used `ConnectedComponentsWithStats` function for it. Now I can write same program with OpenCV + cuda. But OpenCV does not have `ConnectedComponentsWithStats` function for cuda.
Somebody said to me that I must use `labelComponents` function for it, but when I write `cv::cuda::labelComponents`, C++ say to me :
`"cv::cuda::" has no member "labelComponents"`<issue_comment>username_1: It is indeed in [`cv::cuda:labelComponents`](https://docs.opencv.org/3.4.0/d5/dc3/group__cudalegacy.html#ga92b4e167cd92db8a9e62e7c2450e4363)
1. Did you compile with legacy support?
2. Did you include the appropriate header file? I believe it is `"opencv2/cudalegacy/cudalegacy.hpp"` See: [cudalegacy.hpp File Reference](https://docs.opencv.org/3.4.0/da/d10/cudalegacy_8hpp.html)
Upvotes: 2 <issue_comment>username_2: I solved my problem with these two steps :
1) Project->Build Dependencies->Build Customisazions...-> Tick CUDA 9.1
2) Project->Properties->Linker->Input->Additional Dependencies-> "cudart.lib"
Upvotes: -1
|
2018/03/15
| 700 | 2,799 |
<issue_start>username_0: I am applying toasterjs in my angular component. Here is my code.
i have imported the `ToastrService` using the code:
```js
import { ToastrService } from 'ngx-toastr';
```
and inject that service in the constructor
```js
constructor(private toastrService: ToastrService) {
}
this.toastrService.success('You are awesome!', 'Success!',)
```
But it is giving the following error:
>
> Can't resolve all parameters for ToastrService: (?, [object Object], [object Object], [object Object], [object Object])
>
>
>
When I go to its definition, then it is taking 4 parameters. Here are the parameters:
```js
success(message?: string, title?: string, override?: Partial): ActiveToast;
```
I have added the module also:
```js
imports: [
ToastrModule.forRoot(),
CommonModule,
HttpModule,
FormsModule,
ReactiveFormsModule,
TextboxModuleShared,
DateTimeModuleShared,
DropdownModuleShared
],
```
So how can I use the parameters because I want to only show success message?<issue_comment>username_1: You have imported the `ToastrModule` in your app's NgModule in the incorrect order. It should be imported in the order as indicated by the [documentation](https://github.com/scttcper/ngx-toastr):
```js
@NgModule({
imports: [
CommonModule,
BrowserAnimationsModule, // required animations module
ToastrModule.forRoot(), // ToastrModule added
],
bootstrap: [App],
declarations: [App],
})
class MainModule {}
```
You should also replace `CommonModule` with `BrowserModule` if this app is running in the browser as indicated [here](https://angular.io/guide/ngmodule-faq#should-i-import-browsermodule-or-commonmodule).
Note that the error doesn't occur while calling the `success` function, but while invoking the constructor of the `ToastrService`. Your error message clearly states that not all parameters for the ToastrService constructor can be resolved. Some of these parameters cannot be resolved, because no instances for those parameters have been created yet. To understand how this works, you should read the [documentation](https://angular.io/guide/dependency-injection) on dependency injection within the Angular framework.
Upvotes: 1 <issue_comment>username_2: **Point to Consider :**
I was facing same issue. So here how I get solution
* After installing 'ngx-toastr' Package and updating angular.json file with 'toastr.css
* angular should be re-run by using ng-serve command, because app is not rendered by changing angular.json file
Upvotes: 0 <issue_comment>username_3: Try to import 'Inject' decorator and then decorate parameter in constructor and as an argument you should add ToastrService like this:
`constructor(@Inject(ToastrService) private toastyService: ToastrService ) { }`
Upvotes: 0
|
2018/03/15
| 983 | 3,652 |
<issue_start>username_0: I am trying to solve this problem for hours now and can't find a proper solution:
I am trying to Play a media File (Mp3/Ogg) but always get the FileNotFoundException (and I'm sure it's there ;) )
This is what I try:
Check if SD Card is available.
Check if reading/writing permissions are granted.
Load/Play Song.
```
if(isExternalStorageReadable() && isExternalStorageWritable())
if (ContextCompat.checkSelfPermission(this, Manifest.permission.READ_EXTERNAL_STORAGE) == PackageManager.PERMISSION_GRANTED) {
Log.v("Permission","Readingpermission is granted");
}
if (ContextCompat.checkSelfPermission(this, Manifest.permission.WRITE_EXTERNAL_STORAGE) == PackageManager.PERMISSION_GRANTED) {
Log.v("Permission","Writingpermission is granted");
}
playSong(getExternalFilesDir(Environment.DIRECTORY_MUSIC).toString() + "/t1.mp3");
```
That part of the code is working so far.
But as soon as it comes to the playSong(String) :
```
try {
MediaPlayer mediaplayer = new MediaPlayer();
mediaplayer.setDataSource(path);
mediaplayer.prepare();
mediaplayer.start();
} catch (Exception e){
Log.e("Mediaplayer", e.toString());
}
```
The program crashes trying to setDataSource with this Exception:
```
E/Mediaplayer: java.io.FileNotFoundException: /storage/emulated/0/Android/data/OMSclient.omsgui/files/Music/t1.mp3
```
What am I missing? I'm clueless.
---
Alright, I've found the following problem:
If I simply execute
```
playSong("storage/17E5-1C14/Android/data/OMSclient.omsgui/files/Music/t1.ogg");
```
It works fine, but if I Use:
```
playSong("/storage/emulated/0/Android/data/OMSclient.omsgui/files/Music/t1.mp3");
```
or
```
playSong("storage/emulated/0/Android/data/OMSclient.omsgui/files/Music/t1.mp3");
```
It does not.
Why? ...<issue_comment>username_1: You have imported the `ToastrModule` in your app's NgModule in the incorrect order. It should be imported in the order as indicated by the [documentation](https://github.com/scttcper/ngx-toastr):
```js
@NgModule({
imports: [
CommonModule,
BrowserAnimationsModule, // required animations module
ToastrModule.forRoot(), // ToastrModule added
],
bootstrap: [App],
declarations: [App],
})
class MainModule {}
```
You should also replace `CommonModule` with `BrowserModule` if this app is running in the browser as indicated [here](https://angular.io/guide/ngmodule-faq#should-i-import-browsermodule-or-commonmodule).
Note that the error doesn't occur while calling the `success` function, but while invoking the constructor of the `ToastrService`. Your error message clearly states that not all parameters for the ToastrService constructor can be resolved. Some of these parameters cannot be resolved, because no instances for those parameters have been created yet. To understand how this works, you should read the [documentation](https://angular.io/guide/dependency-injection) on dependency injection within the Angular framework.
Upvotes: 1 <issue_comment>username_2: **Point to Consider :**
I was facing same issue. So here how I get solution
* After installing 'ngx-toastr' Package and updating angular.json file with 'toastr.css
* angular should be re-run by using ng-serve command, because app is not rendered by changing angular.json file
Upvotes: 0 <issue_comment>username_3: Try to import 'Inject' decorator and then decorate parameter in constructor and as an argument you should add ToastrService like this:
`constructor(@Inject(ToastrService) private toastyService: ToastrService ) { }`
Upvotes: 0
|
2018/03/15
| 324 | 1,069 |
<issue_start>username_0: I'm looking to retrieve all but one value from a list:
```
ll = ['a','b','c']
nob = [x for x in ll if x !='b']
```
Is there any simpler, more pythonic way to do this, with sets perhaps?<issue_comment>username_1: given that the element is unique in the list, you can use `list.index`
```
i = l.index('b')
l = ll[:i] +ll[i+1:]
```
another possibility is to use `list.remove`
```
ll.remove('b') #notice that ll will change underneath here
```
whatever you do, you'll always have to step through the list and compare each element, which gets slow for long lists. However, using the index, you'll get the index of the first matching element and can operate with this alone, thus avoiding to step through the remainder of the list.
Upvotes: 3 [selected_answer]<issue_comment>username_2: ```
list_ = ['a', 'b', 'c']
list_.pop(1)
```
You can also use .pop, and pass the index column, or name, that you want to pop from the list. When you print the list you will see that it stores ['a', 'c'] and 'b' has been "popped" from it.
Upvotes: 0
|
2018/03/15
| 481 | 1,867 |
<issue_start>username_0: While the Microsoft Graph API seems to be very complete feature wise, it seems like I am stuck at a fairly easy request. For a small web application I want to list apps that are registered in Azure. What a want to do with them is a little bit out of scope, but in the end I want to show the user some important applications (which we flag in some way - using tags or something like that) that the user has access to.
Now, using the /applications resource in the beta endpoint of the Graph API I can retrieve a list of applications. Now, the application does not need admin consent. When requesting the apps, it retrieves all registered apps, which is a bit odd I think. Why would it return all apps and not just the ones that are assigned to me?
But okay, lets move on. Now I have the list of apps (or the metadata of it). How can I determine if the signed-in user has access to this application (or it doesn't require assignment). Am I missing something or is this nowhere to be found?<issue_comment>username_1: given that the element is unique in the list, you can use `list.index`
```
i = l.index('b')
l = ll[:i] +ll[i+1:]
```
another possibility is to use `list.remove`
```
ll.remove('b') #notice that ll will change underneath here
```
whatever you do, you'll always have to step through the list and compare each element, which gets slow for long lists. However, using the index, you'll get the index of the first matching element and can operate with this alone, thus avoiding to step through the remainder of the list.
Upvotes: 3 [selected_answer]<issue_comment>username_2: ```
list_ = ['a', 'b', 'c']
list_.pop(1)
```
You can also use .pop, and pass the index column, or name, that you want to pop from the list. When you print the list you will see that it stores ['a', 'c'] and 'b' has been "popped" from it.
Upvotes: 0
|
2018/03/15
| 371 | 1,284 |
<issue_start>username_0: I would like to automatically adjust the width of a bokeh DataTable to the size of a screen. But I do not find the right way to do it. This line of code is supposed to work :
`data_table = DataTable(source=source, columns=columns, height = 300, sizing_mode = 'scale_width', fit_columns=True)`
But it does not. My Datatable keeps the same width.
Does anyone know how to solve this problem ?
Thank you.<issue_comment>username_1: given that the element is unique in the list, you can use `list.index`
```
i = l.index('b')
l = ll[:i] +ll[i+1:]
```
another possibility is to use `list.remove`
```
ll.remove('b') #notice that ll will change underneath here
```
whatever you do, you'll always have to step through the list and compare each element, which gets slow for long lists. However, using the index, you'll get the index of the first matching element and can operate with this alone, thus avoiding to step through the remainder of the list.
Upvotes: 3 [selected_answer]<issue_comment>username_2: ```
list_ = ['a', 'b', 'c']
list_.pop(1)
```
You can also use .pop, and pass the index column, or name, that you want to pop from the list. When you print the list you will see that it stores ['a', 'c'] and 'b' has been "popped" from it.
Upvotes: 0
|
2018/03/15
| 444 | 1,546 |
<issue_start>username_0: I am currently working with a MS SQL database on Windows 2012 Server
I need to query only 1 column from a table that I only have access to read, not make any kind of changes.
Problem is that the name of the column is "Value"
My code is this:
```
SELECT 'Value' FROM table
```
If I add
```
`ORDER BY 'Value'`
```
The issue is that the query is returning an empty list of results.
**Things I've tried already**
* I tried replacing `'` *with* `"' but this didn't work either.
* I also tried writing `SELECT *` instead of `SELECT VALUE`
* Using the table name in the SELECT or ORDER clauses again didn't help<issue_comment>username_1: You are claiming that this query:
```
SELECT 'Value'
FROM table
ORDER BY 'Value'
```
Is returning no rows. That's not quite correct. It is returning an error because SQL Server does not allow constant expressions as keys for `ORDER BY` (or `GROUP BY` for that matter).
Do not use single quotes. In this case:
```
SELECT 'Value' as val
FROM table
ORDER BY val;
```
Or, if `value` is a column in the table:
```
SELECT t.Value
FROM table t
ORDER BY t.Value;
```
`Value` is not a reserved word in SQL Server, but if it were, you could escape it:
```
SELECT t.[Value]
FROM table t
ORDER BY t.[Value];
```
Upvotes: 2 <issue_comment>username_2: it looks like your table has null values. and because of the order by all null values come first.
try to add filter like this
```
select Value FROM table
where Value is not null and Value <> ''
order by Value
```
Upvotes: 0
|
2018/03/15
| 1,152 | 3,579 |
<issue_start>username_0: Use win\_update module of ansible
Playbook:
```
- name: Update
tasks:
- name: Check updates
register: result
win_updates:
state: searched
- debug: var=result
hosts: windows
```
Hosts:
```
all:
children:
windows:
hosts:
somehost:
```
Group\_vars:
```
ansible_user: someuser
ansible_password: <PASSWORD>
ansible_connection: winrm
ansible_port: 5986
ansible_winrm_server_cert_validation: ignore
```
in debug get results:
```
ok: [somehost] => {
"result": {
"changed": false,
"failed": false,
"filtered_updates": {},
"found_update_count": 2,
"installed_update_count": 0,
"reboot_required": false,
"updates": {
"8fde14d1-2fd6-4705-b2ab-b2aaf1aa7a05": {
"id": "8fde14d1-2fd6-4705-b2ab-b2aaf1aa7a05",
"installed": false,
"kb": [
"4054518"
],
"title": "��������� ����� ��ࠢ����� ����⢠ ��⥬� ������᭮�� ��� ��⥬ Windows 7 �� ���� ����� x64 (KB4054518), 12 2017 �."
},
"bc3e1d56-c863-467e-a13d-77460eff0dcc": {
"id": "bc3e1d56-c863-467e-a13d-77460eff0dcc",
"installed": false,
"kb": [
"890830"
],
"title": "�।�⢮ 㤠����� �।������ �ணࠬ� ��� ������� x64: ���� 2018 �. (KB890830)"
}
}
}
}
```
where should I make changes of code for normal windows update title names?
win\_update powershell script or winrm or somewhere else
**update:**
Unfortunately when I use stable version of ansible (2.4.3.0), get error:
```
ansible win10.dev -i hosts -m win_updates -a 'state=searched'
win10.dev | FAILED! => {
"changed": false,
"module_stderr": "An error occurred while creating the pipeline.\r\n + CategoryInfo : NotSpecified: (:) [], ParentContainsErrorRecordException\r\n + FullyQualifiedErrorId : RuntimeException\r\n \r\nTimed out waiting for scheduled task to start\r\nAt line:334 char:9\r\n+ Throw \"Timed out waiting for scheduled task to start\"\r\n+ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\r\n + CategoryInfo : OperationStopped: (Timed out waiti...d task to start:String) [], RuntimeException\r\n + FullyQualifiedErrorId : Timed out waiting for scheduled task to start\r\n \r\n\r\n",
"module_stdout": "",
"msg": "MODULE FAILURE",
"rc": 1
}
```
In this thread(<https://github.com/ansible/ansible/issues/25298>) I did not found decision, so I have to use @devel branch of ansible repository.<issue_comment>username_1: You are claiming that this query:
```
SELECT 'Value'
FROM table
ORDER BY 'Value'
```
Is returning no rows. That's not quite correct. It is returning an error because SQL Server does not allow constant expressions as keys for `ORDER BY` (or `GROUP BY` for that matter).
Do not use single quotes. In this case:
```
SELECT 'Value' as val
FROM table
ORDER BY val;
```
Or, if `value` is a column in the table:
```
SELECT t.Value
FROM table t
ORDER BY t.Value;
```
`Value` is not a reserved word in SQL Server, but if it were, you could escape it:
```
SELECT t.[Value]
FROM table t
ORDER BY t.[Value];
```
Upvotes: 2 <issue_comment>username_2: it looks like your table has null values. and because of the order by all null values come first.
try to add filter like this
```
select Value FROM table
where Value is not null and Value <> ''
order by Value
```
Upvotes: 0
|
2018/03/15
| 380 | 1,385 |
<issue_start>username_0: Again I am here with an error, i have connected my PHP with SQL server from an IP Address and it works properly in my local PC. but when i transfer my source codes to my client PC it gives below error:
**A Database Error Occurred**
Unable to connect to your database server using the provided settings.
Filename: C:/xampp/htdocs/SAT\_POS/system/database/DB\_driver.php
Line Number: 436
----------------
but actualy i have included the extension also and set in php.ini, see below:
extension=php\_sqlsrv\_56\_ts.dll
extension=php\_sqlsrv\_54\_ts.dll
the above two lines are included in the php.ini file and the extension directory is also the default one, see below:
extension\_dir="C:\xampp\php\ext"
and in CI database code I configured like below :<issue_comment>username_1: You have to pick ONE SQL Server database extension ONLY.
You have also loaded one extension that is compiled for PHP 5.4
```
php_sqlsrv_54_ts.dll
```
AND one that is compiled for PHP 5.6
```
php_sqlsrv_56_ts.dll
```
Pick the ONE extension that matches your currently runnning PHP version
Also did you load the SQLServer client software on your clients PC.
Upvotes: 2 <issue_comment>username_2: At last with the help of the stack genius who commented on this I have done the work!
Just installed ODBC DRIVER for SQL SERVER on the windows!
Upvotes: 1 [selected_answer]
|
2018/03/15
| 355 | 1,312 |
<issue_start>username_0: ```
@RequestMapping("/api/")
public String h(String title){
System.out.println(title);
return "view";
}
```
.
```
@RequestMapping("/api/")
public String h(@RequestParam String title){
System.out.println(title);
return "view";
}
```
When accessing /api?title=something, both of them work as expected. So what is the difference between them?<issue_comment>username_1: Yes, it works only if the requestParam sent matches with your variable name.
Try changing it like this
```
@RequestMapping("/api/")
public String h(@RequestParam String myTitle){
System.out.println(myTitle);
return "view";
}
```
and see what happens. `myTitle` will be null.
On the other hand, If you use `@RequestParam` it can still work with different variable name.
Try this.
```
@RequestMapping("/api/")
public String h(@RequestParam("title") String myTitle){
System.out.println(myTitle);
return "view";
}
```
Another thing is you can set `@RequestParam(required = false)`, if you want some params as optional.
Upvotes: 2 [selected_answer]<issue_comment>username_2: You can try code like that:
`@RequestMapping(value = "/api")
public String h(@RequestParam(value = "title", required = false) String title) {
System.out.println(title);
return "view";
}`
Upvotes: 0
|
2018/03/15
| 290 | 987 |
<issue_start>username_0: I'm trying to use dynamic font size on a `UIButton`. But if I increase the font size, the text on the button gets truncated to "...".
Any idea on how to fix this?
Thanks<issue_comment>username_1: Use a constraint with height greater than or equal
Upvotes: 0 <issue_comment>username_2: You can get the [UILabel](https://developer.apple.com/documentation/uikit/uilabel) of `UIButton` by
```
let btn = UIButton() ....
if let label = btn.textLabel {
label.adjustsFontSizeToFitWidth = true // Adjust font size automatically
label.minimumScaleFactor = 0.5 //< The minimum font size scale factor
}
```
### Reference
1. [adjustsFontSizeToFitWidth property](https://developer.apple.com/documentation/uikit/uilabel/1620546-adjustsfontsizetofitwidth)
2. [minimumscalefactor property](https://developer.apple.com/documentation/uikit/uilabel/1620544-minimumscalefactor)
3. [UILabel reference](https://developer.apple.com/documentation/uikit/uilabel)
Upvotes: 2
|
2018/03/15
| 676 | 2,489 |
<issue_start>username_0: I have created a K-means training job with a csv file that I have stored in S3. After a while I receive the following error:
```
Training failed with the following error: ClientError: Rows 1-5000 in file /opt/ml/input/data/train/features have more fields than than expected size 3.
```
What could be the issue with my file?
Here are the parameters I am passing to sagemaker.create\_training\_job
```
TrainingJobName=job_name,
HyperParameters={
'k': '2',
'feature_dim': '2'
},
AlgorithmSpecification={
'TrainingImage': image,
'TrainingInputMode': 'File'
},
RoleArn='arn:aws:iam:::role/MyRole',
OutputDataConfig={
"S3OutputPath": output\_location
},
ResourceConfig={
'InstanceType': 'ml.m4.xlarge',
'InstanceCount': 1,
'VolumeSizeInGB': 20,
},
InputDataConfig=[
{
'ChannelName': 'train',
'ContentType': 'text/csv',
"CompressionType": "None",
"RecordWrapperType": "None",
'DataSource': {
'S3DataSource': {
'S3DataType': 'S3Prefix',
'S3Uri': data\_location,
'S3DataDistributionType': 'FullyReplicated'
}
}
}
],
StoppingCondition={
'MaxRuntimeInSeconds': 600
}
```<issue_comment>username_1: I've seen this issue appear when doing unsupervised learning, such as the above example using clustering. If you have a csv input, you can also address this issue by setting `label_size=0` in the ContentType parameter of the Sagemaker API call, within the InputDataConfig branch.
Here's an example of what the relevant section of the call might look like:
```
"InputDataConfig": [
{
"ChannelName": "train",
"DataSource": {
"S3DataSource": {
"S3DataType": "S3Prefix",
"S3Uri": "some/path/in/s3",
"S3DataDistributionType": "ShardedByS3Key"
}
},
"CompressionType": "None",
"RecordWrapperType": "None",
"ContentType": "text/csv;label_size=0"
}
]
```
Upvotes: 2 <issue_comment>username_2: Make sure your .csv doesn't have column headers, and that the label is the first column.
Also make sure your values for the hyper-parameters are accurate ie feature\_dim means number of features in your set. If you give it the wrong value, it'll break.
Heres a list of sagemaker knn hyper-parameters and their meanings: <https://docs.aws.amazon.com/sagemaker/latest/dg/kNN_hyperparameters.html>
Upvotes: 0
|
2018/03/15
| 340 | 1,227 |
<issue_start>username_0: I use this text `text\r2`. And I want to print this in debug and get result:
```
text\r2
```
but I get this:
```
text
2
```<issue_comment>username_1: Try to escape the backslash with another backslash: `text\\r2`.
The `\r` will otherwise be interpreted as a line break.
Upvotes: 2 <issue_comment>username_2: `\r` in a String literal is a special character and represents a carriage return
See [Special Characters in String Literals](https://developer.apple.com/library/content/documentation/Swift/Conceptual/Swift_Programming_Language/StringsAndCharacters.html)
```
String literals can include the following special characters:
* The escaped special characters \0 (null character), \\ (backslash), \t (horizontal tab), \n (line feed), \r (carriage return), \" (double quotation mark) and \' (single quotation mark)
* An arbitrary Unicode scalar, written as \u{n}, where n is a 1–8 digit hexadecimal number with a value equal to a valid Unicode code point (Unicode is discussed in Unicode below)
```
If you want to use in a String Literal a backslash you have to escape it using `\\`.
So you'll have to write
```
print("text\\r2")
```
to get `text\r2`
Upvotes: 2 [selected_answer]
|
2018/03/15
| 666 | 2,083 |
<issue_start>username_0: I am playing around with binance API, im very new to javascript, the first section in my code
```
binance.prices((error, ticker) => {
console.log("prices()", ticker);
console.log("Price of BTC: ", ticker.BTCUSDT);
});
```
above code outputs:
```
ETHBTC: '0.07421500',
LTCBTC: '0.01994000',
BNBBTC: '0.00110540',
NEOBTC: '0.00853400',
QTUMETH: '0.02604400',
```
the code below runs a check on an selected key (GTOBTC), I cant seem to be able to create a loop which takes the name from the keys above.
```
binance.depth("GTOBTC", (error, depth, symbol) => {
a = 0;
b = 0;
for (value in depth.bids){
a += Number(value);
};
for (value in depth.asks){
b += Number(value);
};
var c = a - b;
var d = (c / a) * 100;
if (d >= 2.0){
console.log(symbol + " Percent ok");
console.log(d);
} else {
console.log(symbol + " percentage not sufficient");
}
})
```
output for code above:
```
GTOBTC percentage not sufficient
```
Any help would be great thanks.<issue_comment>username_1: Try to escape the backslash with another backslash: `text\\r2`.
The `\r` will otherwise be interpreted as a line break.
Upvotes: 2 <issue_comment>username_2: `\r` in a String literal is a special character and represents a carriage return
See [Special Characters in String Literals](https://developer.apple.com/library/content/documentation/Swift/Conceptual/Swift_Programming_Language/StringsAndCharacters.html)
```
String literals can include the following special characters:
* The escaped special characters \0 (null character), \\ (backslash), \t (horizontal tab), \n (line feed), \r (carriage return), \" (double quotation mark) and \' (single quotation mark)
* An arbitrary Unicode scalar, written as \u{n}, where n is a 1–8 digit hexadecimal number with a value equal to a valid Unicode code point (Unicode is discussed in Unicode below)
```
If you want to use in a String Literal a backslash you have to escape it using `\\`.
So you'll have to write
```
print("text\\r2")
```
to get `text\r2`
Upvotes: 2 [selected_answer]
|
2018/03/15
| 4,682 | 11,524 |
<issue_start>username_0: i am stuck at a problem. I have two 2-D numpy arrays, filled with x and y coordinates. Those arrays might look like:
```
array1([[(1.22, 5.64)],
[(2.31, 7.63)],
[(4.94, 4.15)]],
array2([[(1.23, 5.63)],
[(6.31, 10.63)],
[(2.32, 7.65)]],
```
Now I have to find "duplicate nodes". However, i also have to consider nodes as equal within a given tolerance of the coordinates, therefore, i can't use solutions like [this](https://stackoverflow.com/questions/8317022/get-intersecting-rows-across-two-2d-numpy-arrays) . Since my arrays are quite big (~200.000 lines each) two simple `for` loops are not an option as well. My final output should look like this:
```
output([[(1.23, 5.63)],
[(2.32, 7.65)]],
```
I would appreciate some hints.
Cheers,<issue_comment>username_1: In order to compare to nodes with a giving tolerance I recommend to use `numpy.isclose()`, where you can set a relative and absolute tolerance.
```
numpy.isclose(1.24, 1.25, atol=1e-1)
# [True]
numpy.isclose([1.24, 2.31], [1.25, 2.32], atol=1e-1)
# [True, True]
```
Instead of using a two `for` loops, you can make use of `itertools.product()` package, to go through all pairs. The following code does what you want:
```
array1 = np.array([[1.22, 5.64],
[2.31, 7.63],
[4.94, 4.15]])
array2 = np.array([[1.23, 5.63],
[6.31, 10.63],
[2.32, 7.64]])
output = np.empty((0,2))
for i0, i1 in itertools.product(np.arange(array1.shape[0]),
np.arange(array2.shape[0])):
if np.all(np.isclose(array1[i0], array2[i1], atol=1e-1)):
output = np.concatenate((output, [array2[i1]]), axis=0)
# output = [[ 1.23 5.63]
# [ 2.32 7.64]]
```
Upvotes: 2 <issue_comment>username_2: Defining a `isclose` function similar to `numpy.isclose`, [but a bit faster](https://stackoverflow.com/a/25962913/4042267) (mostly due to not checking any input and not supporting both relative and absolute tolerance):
```
import numpy as np
array1 = np.array([[(1.22, 5.64)],
[(2.31, 7.63)],
[(4.94, 4.15)]])
array2 = np.array([[(1.23, 5.63)],
[(6.31, 10.63)],
[(2.32, 7.65)]])
def isclose(x, y, atol):
return np.abs(x - y) < atol
```
Now comes the hard part. We need to calculate if any two values are close within the inner most dimension. For this I reshape the arrays in such a way that the first array has its values along the second dimension, replicated across the first and the second array has its values along the first dimension, replicated along the second (note the `1, 3` and `3, 1`):
```
In [92]: isclose(array1.reshape(1,3,2), array2.reshape(3,1,2), 0.03)
Out[92]:
array([[[ True, True],
[False, False],
[False, False]],
[[False, False],
[False, False],
[False, False]],
[[False, False],
[ True, True],
[False, False]]], dtype=bool)
```
Now we want all entries where the value is close to any other value (along the same dimension):
```
In [93]: isclose(array1.reshape(1,3,2), array2.reshape(3,1,2), 0.03).any(axis=0)
Out[93]:
array([[ True, True],
[ True, True],
[False, False]], dtype=bool)
```
Then we want only those where both values of the tuple are close:
```
In [111]: isclose(array1.reshape(1,3,2), array2.reshape(3,1,2), 0.03).any(axis=0).all(axis=-1)
Out[111]: array([ True, True, False], dtype=bool)
```
And finally, we can use this to index `array1`:
```
In [112]: array1[isclose(array1.reshape(1,3,2), array2.reshape(3,1,2), 0.03).any(axis=0).all(axis=-1)]
Out[112]:
array([[[ 1.22, 5.64]],
[[ 2.31, 7.63]]])
```
If you want to, you can swap the `any` and `all` calls. One might be faster than the other in your case.
The `3` in the `reshape` calls needs to be substituted for the actual length of your data.
This algorithm will have the same bad runtime of the other answer using `itertools.product`, but at least the actual looping is done implicitly by `numpy` and is implemented in C. This is visible in the timings:
```
In [122]: %timeit array1[isclose(array1.reshape(1,len(array1),2), array2.reshape(len(array2),1,2), 0.03).any(axis=0).all(axis=-1)]
11.6 µs ± 493 ns per loop (mean ± std. dev. of 7 runs, 100000 loops each)
In [126]: %timeit pares(array1_pares, array2_pares)
267 µs ± 8.72 µs per loop (mean ± std. dev. of 7 runs, 1000 loops each)
```
Where the `pares` function is the code defined by [@username_1](https://stackoverflow.com/users/5234494/ferran-par%C3%A9s) in [another answer](https://stackoverflow.com/a/49304571/4042267) and the arrays as already reshaped there.
And for larger arrays it becomes more obvious:
```
array1 = np.random.normal(0, 0.1, size=(1000, 1, 2))
array2 = np.random.normal(0, 0.1, size=(1000, 1, 2))
array1_pares = array1.reshape(1000, 2)
array2_pares = arra2.reshape(1000, 2)
In [149]: %timeit array1[isclose(array1.reshape(1,len(array1),2), array2.reshape(len(array2),1,2), 0.03).any(axis=0).all(axis=-1)]
135 µs ± 5.34 µs per loop (mean ± std. dev. of 7 runs, 10000 loops each)
In [157]: %timeit pares(array1_pares, array2_pares)
1min 36s ± 6.85 s per loop (mean ± std. dev. of 7 runs, 1 loop each)
```
In the end this is limited by the available system memory. My machine (16GB RAM) can still handle arrays of length 20000, but that pushes it almost to 100%. It also takes about 12s:
```
In [14]: array1 = np.random.normal(0, 0.1, size=(20000, 1, 2))
In [15]: array2 = np.random.normal(0, 0.1, size=(20000, 1, 2))
In [16]: %timeit array1[isclose(array1.reshape(1,len(array1),2), array2.reshape(len(array2),1,2), 0.03).any(axis=0).all(axis=-1)]
12.3 s ± 514 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
```
Upvotes: 2 <issue_comment>username_3: As commented, scaling and rounding your numbers might allow you to use `intersect1d` or the equivalent.
And if you have just 2 columns, it might work to turn it into a 1d array of complex dtype.
But you might also want to keep in mind what `intersect1d` does:
```
if not assume_unique:
# Might be faster than unique( intersect1d( ar1, ar2 ) )?
ar1 = unique(ar1)
ar2 = unique(ar2)
aux = np.concatenate((ar1, ar2))
aux.sort()
return aux[:-1][aux[1:] == aux[:-1]]
```
`unique` has been enhanced to handle rows (`axis` parameters), but intersect has not. In any case it uses `argsort` to put similar elements next to each other, and then skips the duplicates.
Notice that `intersect` concatenenates the unique arrays, sorts, and again finds the duplicates.
I know you didn't want a loop version, but to promote conceptualization of the problem here's one anyways:
```
In [581]: a = np.array([(1.22, 5.64),
...: (2.31, 7.63),
...: (4.94, 4.15)])
...:
...: b = np.array([(1.23, 5.63),
...: (6.31, 10.63),
...: (2.32, 7.65)])
...:
```
I removed a layer of nesting in your arrays.
```
In [582]: c = []
In [583]: for a1 in a:
...: for b1 in b:
...: if np.allclose(a1,b1, atol=0.5): c.append((a1,b1))
```
or as list comprehension
```
In [586]: [(a1,b1) for a1 in a for b1 in b if np.allclose(a1,b1,atol=0.5)]
Out[586]:
[(array([1.22, 5.64]), array([1.23, 5.63])),
(array([2.31, 7.63]), array([2.32, 7.65]))]
```
complex approximation
---------------------
```
In [604]: aa = (a*10).astype(int)
In [605]: aa
Out[605]:
array([[12, 56],
[23, 76],
[49, 41]])
In [606]: ac=aa[:,0]+1j*aa[:,1]
In [607]: bb = (b*10).astype(int)
In [608]: bc=bb[:,0]+1j*bb[:,1]
In [609]: np.intersect1d(ac,bc)
Out[609]: array([12.+56.j, 23.+76.j])
```
intersect inspired
------------------
Concatenate the arrays, sort them, take difference, and find the small differences:
```
In [616]: ab = np.concatenate((a,b),axis=0)
In [618]: np.lexsort(ab.T)
Out[618]: array([2, 3, 0, 1, 5, 4], dtype=int32)
In [619]: ab1 = ab[_,:]
In [620]: ab1
Out[620]:
array([[ 4.94, 4.15],
[ 1.23, 5.63],
[ 1.22, 5.64],
[ 2.31, 7.63],
[ 2.32, 7.65],
[ 6.31, 10.63]])
In [621]: ab1[1:]-ab1[:-1]
Out[621]:
array([[-3.71, 1.48],
[-0.01, 0.01],
[ 1.09, 1.99],
[ 0.01, 0.02],
[ 3.99, 2.98]])
In [623]: ((ab1[1:]-ab1[:-1])<.1).all(axis=1) # refine with abs
Out[623]: array([False, True, False, True, False])
In [626]: np.where(Out[623])
Out[626]: (array([1, 3], dtype=int32),)
In [627]: ab[_]
Out[627]:
array([[2.31, 7.63],
[1.23, 5.63]])
```
Upvotes: 0 <issue_comment>username_4: May be you could try this using pure NP and self defined function:
```
import numpy as np
#Your Example
xDA=np.array([[1.22, 5.64],[2.31, 7.63],[4.94, 4.15],[6.1,6.2]])
yDA=np.array([[1.23, 5.63],[6.31, 10.63],[2.32, 7.65],[3.1,9.2]])
###Try this large sample###
#xDA=np.round(np.random.uniform(1,2, size=(5000, 2)),2)
#yDA=np.round(np.random.uniform(1,2, size=(5000, 2)),2)
print(xDA)
print(yDA)
#Match x to y
def np_matrix(myx,myy,calp=0.2):
Xxx = np.transpose(np.repeat(myx[:, np.newaxis], myy.size, axis=1))
Yyy = np.repeat(myy[:, np.newaxis], myx.size, axis=1)
# define a caliper
matches = {}
dist = np.abs(Xxx - Yyy)
for m in range(0, myx.size):
if (np.min(dist[:, m]) <= calp) or not calp:
matches[m] = np.argmin(dist[:, m])
return matches
alwd_dist=0.1
xc1=xDA[:,1]
yc1=yDA[:,1]
m1=np_matrix(xc1,yc1,alwd_dist)
xc0=xDA[:,0]
yc0=yDA[:,0]
m0=np_matrix(xc0,yc0,alwd_dist)
shared_items = set(m1.items()) & set(m0.items())
if (int(len(shared_items))==0):
print("No Matched Items based on given allowed distance:",alwd_dist)
else:
print("Matched:")
for ke in shared_items:
print(xDA[ke[0]],yDA[ke[1]])
```
Upvotes: 0 <issue_comment>username_5: There are many possible ways to define that tolerance. Since, we are talking about XY coordinates, most probably we are talking about euclidean distances to set that tolerance value. So, we can use [`Cython-powered kd-tree` for quick nearest-neighbor lookup](https://docs.scipy.org/doc/scipy/reference/generated/scipy.spatial.cKDTree.query.html), which is very efficient both memory-wise and with performance. The implementation would look something like this -
```
from scipy.spatial import cKDTree
# Assuming a default tolerance value of 1 here
def intersect_close(a, b, tol=1):
# Get closest distances for each pt in b
dist = cKDTree(a).query(b, k=1)[0] # k=1 selects closest one neighbor
# Check the distances against the given tolerance value and
# thus filter out rows off b for the final output
return b[dist <= tol]
```
Sample step-by-step run -
```
# Input 2D arrays
In [68]: a
Out[68]:
array([[1.22, 5.64],
[2.31, 7.63],
[4.94, 4.15]])
In [69]: b
Out[69]:
array([[ 1.23, 5.63],
[ 6.31, 10.63],
[ 2.32, 7.65]])
# Get closest distances for each pt in b
In [70]: dist = cKDTree(a).query(b, k=1)[0]
In [71]: dist
Out[71]: array([0.01414214, 5. , 0.02236068])
# Mask of distances within the given tolerance
In [72]: tol = 1
In [73]: dist <= tol
Out[73]: array([ True, False, True])
# Finally filter out valid ones off b
In [74]: b[dist <= tol]
Out[74]:
array([[1.23, 5.63],
[2.32, 7.65]])
```
Timings on 200,000 pts -
```
In [20]: N = 200000
...: np.random.seed(0)
...: a = np.random.rand(N,2)
...: b = np.random.rand(N,2)
In [21]: %timeit intersect_close(a, b)
1 loop, best of 3: 1.37 s per loop
```
Upvotes: 2
|
2018/03/15
| 564 | 2,082 |
<issue_start>username_0: I made a header view file and in that header I'm trying to echo a picture from the folder images. This images folder is outside the application folder.
Also the name of the picture is logo.jpg
This is the code to echo the picture in the header.php file:
```

```
On another view page called home I'm including the header like this:
```
php include_once ('templates/header.php'); ?
```
When I load the home.php view page the picture does not load.
It does echo the alt of the picture: `alt="Jongeren kansrijker logo`
What am I doing wrong here and how can I echo the logo.jpg picture from the images folder?<issue_comment>username_1: use an absolute path with base\_url();
```
![]()
```
you can set your base\_url in the config.php and it should target the folder where your index.php is in (And I guess the image-folder is also there).
Upvotes: 1 <issue_comment>username_2: This usually means your `src` attribute is wrong. Remember that the value should be the path from the root of your website as it is managed as a separate request by the browser.
I imagine in your case you're actually looking for the following:
```

```
TLDR: The image is fetched by the browser so you shouldn't use the path from `header.php`.
Upvotes: 0 <issue_comment>username_3: First of all you should load url helper in the autoload.php which is in config folder as following.
```
$autoload['helper'] = array('url');
```
Second, you should use **$this->load->view ('templates/header');** instead of including files using **include\_once();**
Third, You should config your baseurl in config.php which is also in config folder as below(considering you are working in localhost)
```
$config['base_url'] = 'http://localhost/projectName/';
```
The above code takes you to the root of your project.
And the last point to view the image use the following code
```
 ?>logo.jpg)
```
Upvotes: 3 [selected_answer]
|
2018/03/15
| 482 | 1,520 |
<issue_start>username_0: Windows 10, Apache httpd 2.4, PHP 7.1.4, Laravel 5.5
Gmail's less secure is Allowed.
---
My .env file:
```
MAIL_DRIVER=smtp
MAIL_HOST=smtp.gmail.com
MAIL_PORT=465
MAIL_USERNAME=<EMAIL>
MAIL_PASSWORD=<PASSWORD>
MAIL_ENCRYPTION=ssl
```
Error Message:
>
> Connection could not be established with host smtp.gmail.com
>
>
>
---
```
MAIL_PORT=587
MAIL_ENCRYPTION=tls
```
Error Message:
>
> stream\_socket\_enable\_crypto(): SSL operation failed with code 1.
> OpenSSL Error messages:
> error:14090086:SSL routines:ssl3\_get\_server\_certificate:certificate verify failed
>
>
>
How can I resolve the "certificate verify failed" error?<issue_comment>username_1: If your operating system doesn't automatically manage your trusted certificate store:
1. Download [the cURL `cacert.pem` certificate bundle](https://curl.se/ca/cacert.pem)
2. Put the cacert.pem bundle somewhere you like; if you have self-signed certificates that need to be accepted, open the bundle in a text editor and add them to the end of the file.
3. Edit `php.ini` to reference this file location:
```none
curl.cainfo = D:/Servers/php/sslfiles/cacert.pem
openssl.cafile = D:/Servers/php/sslfiles/cacert.pem
```
4. Restart PHP-FPM or your web server, depending on how you are running PHP.
Upvotes: 4 [selected_answer]<issue_comment>username_2: When you receive ssl certificate verification error, You can set these two variables in .env file:
```
MAIL_PORT=587
MAIL_ENCRYPTION=tcp
```
Upvotes: -1
|
2018/03/15
| 602 | 2,048 |
<issue_start>username_0: When trying to get a popover displayed in one line using this template:
```
Here we go:
Show me popover with html
```
And by adding a *CSS class* like:
```
myClass {
white space: nowrap;
}
```
Then the text goes beyond the popup itself.
I've tried to add **container="body"** or **[container]="body"** as in the ngx-bootstrap documentation but it is not working
Is there any proper way to have the popover having all the content text inside it ?
This case happen when the content text is long.
I've looked at this answer:
[Ngx-Bootstrap popover not setting correct height or width](https://stackoverflow.com/questions/47840797/ngx-bootstrap-popover-not-setting-correct-height-or-width)
But it is not working.
**Update :**
Now, I'm able to see the popover in the debugger, by hovering the tooltip, the following code is added with the pseudo class `::before`
```
...
```
I don't know from where the value `310px` comes from, but it looks like ngx-bootstrap didn't respond to the provided CSS
Trying an inline CSS directly in the div of the template as:
```
style="display: inline-block; min-width: 100%; white-space: nowrap;"
```
Do not work too.
I found that even when you put an inline CSS, the resulting style is as above in the template as `style="display: block; top: -6px; left: 310px;"` which is very strange.<issue_comment>username_1: add `max-width: 100%;` to `myClass`
```
.myClass {
white-space: nowrap;
max-width: 100%;
}
```
Upvotes: 1 <issue_comment>username_2: Finaly I have found the right solution by using this CSS:
```
.popover {
white-space: nowrap;
max-width: none;
}
```
Because we can see at the generated html code for this popover (see the question), that there are 5 classes used, then I used one of them to apply the relevant CSS.
And I've to set `encapsulation` to [`ViewEncapsulation.None`](https://angular.io/api/core/ViewEncapsulation) in the popover custom component (which uses an input text, to be displayed)
Upvotes: 4 [selected_answer]
|
2018/03/15
| 613 | 2,309 |
<issue_start>username_0: **The environment**
I have a big component tree on my angular application with multiples routes outlet to display specific component on each level. the deepest level is a modal that manages certain info.
**The Problem**
I can block the interaction through the mouse from my child component to the parent component event if you can see it (the parent component )
but when I use the keyboard I'm able to navigate to the parent component and select options in all my parent component
**the question**
How can I prevent this behavior?<issue_comment>username_1: Angular CDK provides a directive called cdkTrapFocus which prevents focus moving outside a dom node and it's children. They use this in MatDialog, and it works great.
If you don't want to switch to using MatDialog or you need this in some other layout than a dialog, you might want to look into using cdkTrapFocus on it's own: <https://github.com/angular/material2/blob/3aceb7361cc34ad987f7b1ca39339d3203248341/src/cdk/a11y/focus-trap/focus-trap.ts#L340>
It should be as simple as importing and declaring the directive, then
Upvotes: 5 [selected_answer]<issue_comment>username_2: Well you could implement some crude event binding like this:
```js
document.onkeydown = checkKey;
function checkKey(e) {
e = e || window.event;
// tab, up, down, left, right
if (e.keyCode == '9' || e.keyCode == '38' || e.keyCode == '40' || e.keyCode == '37' || e.keyCode == '39') {
console.log("prevent");
e.stopImmediatePropagation();
e.preventDefault();
}
}
```
This will completely prevent use of the arrow keys on the page, as well as the tab key (tabbing between targets.)
---
**A note on accessibility**
One of the comments from @Ricardo states that this is a very poor approach for accessibility. I think it is important to remember that many people with accessibility issues will use a program like [Jaws](http://doccenter.freedomscientific.com/doccenter/archives/training/jawskeystrokes.htm) to navigate a web page. These programs capture the keystrokes outside of the web app and then propogate these actions through to the browser. Blocking **keydown** events will not inhibit this - JAWS specifically transmutes the users keydown events into **focus** events.
Upvotes: 2
|
2018/03/15
| 245 | 770 |
<issue_start>username_0: I am trying to get hostname from UDP endpoint. However I do not know whether boost.asio supports IP->hostname conversion. Anyone can answer my question?<issue_comment>username_1: [getnameinfo](https://linux.die.net/man/3/getnameinfo) is what you want.
```
getnameinfo((sockaddr*)&addr, sizeof(addr), hostname, sizeof(hostname), NULL, NULL, 0);
```
Upvotes: 1 <issue_comment>username_2: ```
#include
#include
#include
int main()
{
try
{
asio::io\_service io\_service;
asio::ip::udp::resolver rsv(io\_service);
for(const auto &ele : rsv.resolve(asio::ip::udp::endpoint(asio::ip::address\_v4({192,168,1,163}),0)))
{
std::cout<
```
I have found out how to get host name by ip. Just directly resolve endpoint by using ip address
Upvotes: -1
|
2018/03/15
| 639 | 1,812 |
<issue_start>username_0: With An example, I'm basically trying to go from :
```
[
{
'a':a1,
'b':b2
},
{
'a':a1,
'b':b5
},
{
'a':a3,
'b':b4
}
]
```
To :
```
[
{
'group':'a1',
'content':[
{
'a':a1,
'b':b2
},
{
'a':a1,
'b':b5
}
],
},
{
'group':'a3',
'content':[
{
'a':a3,
'b':b4
}
]
}
]
```
So in word reformat the array and group elements on an attribute, here `a`<issue_comment>username_1: There is a simple way by using lodash GroupBy
<https://lodash.com/docs#groupBy>
```
_.groupBy([
{
'a':"a1",
'b':"b2"
},
{
'a':"a1",
'b':"b5"
},
{
'a':"a3",
'b':"b4"
}
], "a")
```
The first arguments the array you want to group, an the second is the iterator you want to group, the result will be grouped in a array, it will not have the content preoperty.
```
{
"a1":[
{
"a":"a1",
"b":"b2"
},
{
"a":"a1",
"b":"b5"
}
],
"a3":[
{
"a":"a3",
"b":"b4"
}
]
}
```
See if this helps get you going, if not, let me know
Upvotes: 1 <issue_comment>username_2: If you only want that grouped array then you can achieve using reducer.
```
`let group = data.reduce((r, a) => {
r[a.a] = [...r[a.a] || [], a];
return r;
}, {});`
```
```js
var data = [
{
'a':'a1',
'b':'b2'
},
{
'a':'a1',
'b':'b5'
},
{
'a':'a3',
'b':'b4'
}
]
let group = data.reduce((r, a) => {
r[a.a] = [...r[a.a] || [], a];
return r;
}, {});
console.log("group", group);
```
Upvotes: 0
|
2018/03/15
| 387 | 1,305 |
<issue_start>username_0: If I wish to run my function app at 11.15 PM to next day 1.15 AM(That mean MONDAY 11.15 PM start and TUESDAY 1.15 AM will end) and it will trigger every minute during this time.
Can anyone help me to write this CRON expression?<issue_comment>username_1: You'll have to do this in three lines I think,
```
15-59 23 * * *
* 0 * * *
0-15 1 * * *
```
This will run it from 23:15-23:59 then 00:00-00:59 then 1:00-1:15
Upvotes: 0 <issue_comment>username_2: The timer triggers are designed for a single repeating interval. The only way to do this completely within a Function is to run the trigger once per minute, then abort if the current time isn't in the desired target time period.
Alternately, put your logic into an HTTP trigger configured to act as a [webhook](https://learn.microsoft.com/en-us/azure/azure-functions/functions-bindings-http-webhook), then use an Azure [scheduler](https://learn.microsoft.com/en-us/azure/scheduler/scheduler-advanced-complexity) to configure start and stop times and intervals.
You won't be able to use the scheduler free plan since it can only run once per hour, but the standard plan can run once per minute. Scheduler pricing [here](https://learn.microsoft.com/en-us/azure/scheduler/scheduler-plans-billing).
Upvotes: 2 [selected_answer]
|
2018/03/15
| 2,771 | 8,123 |
<issue_start>username_0: I am currently learning python. I do not want to use Biopython, or really any imported modules, other than maybe regex so I can understand what the code is doing.
From a genetic sequence alignment, I would like to find the location of the start and end positions of gaps/indels "-" that are next to each other within my sequences, the number of gap regions, and calculate the length of gap regions. For example:
```
>Seq1
ATC----GCTGTA--A-----T
```
I would like an output that may look something like this:
```
Number of gaps = 3
Index Position of Gap region 1 = 3 to 6
Length of Gap region 1 = 4
Index Position of Gap region 2 = 13 to 14
Length of Gap region 2 = 2
Index Position of Gap region 3 = 16 to 20
Length of Gap region 3 = 5
```
I have tried to figure this out on larger sequence alignments but I have not been able to even remotely figure out how to do this.<issue_comment>username_1: This is my take on this problem:
```
import itertools
nucleotide='ATC----GCTGTA--A-----T'
# group the repeated positions
gaps = [(k, sum(1 for _ in vs)) for k, vs in itertools.groupby(nucleotide)]
# text formating
summary_head = "Number of gaps = {0}"
summary_gap = """
Index Position of Gap region {0} = {2} to {3}
Length of Gap region {0} = {1}
"""
# Print output
print summary_head.format(len([g for g in gaps if g[0]=="-"]))
gcount = 1 # this will count the gap number
position = 0 # this will make sure we know the position in the sequence
for i, g in enumerate(gaps):
if g[0] == "-":
gini = position # start position current gap
gend = position + g[1] - 1 # end position current gap
print summary_gap.format(gcount, g[1], gini, gend)
gcount+=1
position += g[1]
```
This generates your expected output:
```
# Number of gaps = 3
# Index Position of Gap region 1 = 3 to 6
# Length of Gap region 1 = 4
# Index Position of Gap region 2 = 13 to 14
# Length of Gap region 2 = 2
# Index Position of Gap region 3 = 16 to 20
# Length of Gap region 3 = 5
```
**EDIT: ALTERNATIVE WITH PANDAS**
```
import itertools
import pandas as pd
nucleotide='ATC----GCTGTA--A-----T'
# group the repeated positions
gaps = pd.DataFrame([(k, sum(1 for _ in vs)) for k, vs in itertools.groupby(nucleotide)])
gaps.columns = ["type", "length"]
gaps["ini"] = gaps["length"].cumsum() - gaps["length"]
gaps["end"] = gaps["ini"] + gaps["length"] - 1
gaps = gaps[gaps["type"] == "-"]
gaps.index = range(1, gaps.shape[0] + 1)
summary_head = "Number of gaps = {0}"
summary_gap = """
Index Position of Gap region {0} = {1[ini]} to {1[end]}
Length of Gap region {0} = {1[length]}
"""
print summary_head.format(gaps.shape[0])
for index, row in gaps.iterrows():
print summary_gap.format(index, row)
```
This alternative has the benefit that if you are analyzing multiple sequences you can add the sequence identifier as an extra column and have all the data from all your sequences in a single data structure; something like this:
```
import itertools
import pandas as pd
nucleotides=['>Seq1\nATC----GCTGTA--A-----T',
'>Seq2\nATCTCC---TG--TCGGATG-T']
all_gaps = []
for nucleoseq in nucleotides:
seqid, nucleotide = nucleoseq[1:].split("\n")
gaps = pd.DataFrame([(k, sum(1 for _ in vs)) for k, vs in itertools.groupby(nucleotide)])
gaps.columns = ["type", "length"]
gaps["ini"] = gaps["length"].cumsum() - gaps["length"]
gaps["end"] = gaps["ini"] + gaps["length"] - 1
gaps = gaps[gaps["type"] == "-"]
gaps.index = range(1, gaps.shape[0] + 1)
gaps["seqid"] = seqid
all_gaps.append(gaps)
all_gaps = pd.concat(all_gaps)
print(all_gaps)
```
will generate a data container with:
```
type length ini end seqid
1 - 4 3 6 Seq1
2 - 2 13 14 Seq1
3 - 5 16 20 Seq1
1 - 3 6 8 Seq2
2 - 2 11 12 Seq2
3 - 1 20 20 Seq2
```
that you can format afterwards like:
```
for k in all_gaps["seqid"].unique():
seqg = all_gaps[all_gaps["seqid"] == k]
print ">{}".format(k)
print summary_head.format(seqg.shape[0])
for index, row in seqg.iterrows():
print summary_gap.format(index, row)
```
which can look like:
```
>Seq1
Number of gaps = 3
Index Position of Gap region 1 = 3 to 6
Length of Gap region 1 = 4
Index Position of Gap region 2 = 13 to 14
Length of Gap region 2 = 2
Index Position of Gap region 3 = 16 to 20
Length of Gap region 3 = 5
>Seq2
Number of gaps = 3
Index Position of Gap region 1 = 6 to 8
Length of Gap region 1 = 3
Index Position of Gap region 2 = 11 to 12
Length of Gap region 2 = 2
Index Position of Gap region 3 = 20 to 20
Length of Gap region 3 = 1
```
Upvotes: 0 <issue_comment>username_2: What you want is to use regular expression to find a gap (one or more dashes, which translate to '-+', the plus sign means *one or more*):
```
import re
seq = 'ATC----GCTGTA--A-----T'
matches = list(re.finditer('-+', seq))
print 'Number of gaps =', len(matches)
print
for region_number, match in enumerate(matches, 1):
print 'Index Position of Gap region {} = {} to {}'.format(
region_number,
match.start(),
match.end() - 1)
print 'Length of Gap region {} = {}'.format(
region_number,
match.end() - match.start())
print
```
Notes
=====
* `matches` is a list of match objects
* In order to get the region number, I used the function `enumerate`. You can look it up to see how it works.
* The match object has many methods, but we are interested in `.start()` which returns the start index and `.end()` which return the end index. Note that the *end index* here is one more that what you want, thus I subtracted 1 from it.
Upvotes: 3 [selected_answer]<issue_comment>username_3: A bit of a longer-winded way about this than with regex, but you could find the index of the hyphens and group them by using first-differences:
```
>>> def get_seq_gaps(seq):
... gaps = np.array([i for i, el in enumerate(seq) if el == '-'])
... diff = np.cumsum(np.append([False], np.diff(gaps) != 1))
... un = np.unique(diff)
... yield len(un)
... for i in un:
... subseq = gaps[diff == i]
... yield i + 1, len(subseq), subseq.min(), subseq.max()
>>> def report_gaps(seq):
... gaps = get_seq_gaps(seq)
... print('Number of gaps = %s\n' % next(gaps), sep='')
... for (i, l, mn, mx) in gaps:
... print('Index Position of Gap region %s = %s to %s' % (i, mn, mx))
... print('Length of Gap Region %s = %s\n' % (i, l), sep='')
>>> seq = 'ATC----GCTGTA--A-----T'
>>> report_gaps(seq)
Number of gaps = 3
Index Position of Gap region 1 = 3 to 6
Length of Gap Region 1 = 4
Index Position of Gap region 2 = 13 to 14
Length of Gap Region 2 = 2
Index Position of Gap region 3 = 16 to 20
Length of Gap Region 3 = 5
```
First, this forms an array of the indices at which you have hyphens:
```
>>> gaps
array([ 3, 4, 5, 6, 13, 14, 16, 17, 18, 19, 20])
```
Places where first differences are not 1 indicate breaks. Throw on another False to maintain length.
```
>>> diff
array([0, 0, 0, 0, 1, 1, 2, 2, 2, 2, 2])
```
Now take the unique elements of these groups, constrain `gaps` to the corresponding indices, and find its min/max.
Upvotes: 0 <issue_comment>username_4: Here is my suggestion of code, quite straight-forward, short and easy to understand, without any other imported package other than `re`:
```
import re
def findGaps(aSeq):
# Get and print the list of gaps present into the sequence
gaps = re.findall('[-]+', aSeq)
print('Number of gaps = {0} \n'.format(len(gaps)))
# Get and print start index, end index and length for each gap
for i,gap in enumerate(gaps,1):
startIndex = aSeq.index(gap)
endIndex = startIndex + len(gap) - 1
print('Index Position of Gap region {0} = {1} to {2}'.format(i, startIndex, endIndex))
print('Length of Gap region {0} = {1} \n'.format(i, len(gap)))
aSeq = aSeq.replace(gap,'*' * len(gap), 1)
findGaps("ATC----GCTGTA--A-----T")
```
Upvotes: 1
|
2018/03/15
| 2,798 | 8,232 |
<issue_start>username_0: Could you please help produce the output table from the below input table(screenshots are provided.Basically I would need to get the pcn from the cn based on the pid in each row.I used a case when statement but the data is huge and it's not a sustainable way,hence the self join would be fine.But I am not getting the expected output from the below self join query
Here is the self join query I tried.
```
select b.id, b.cn, a.pid, a.cn as pcn
from (
(select pid,cn from categories) a
left join (select id,cn from categories) b on a.pid=b.id
)
```
Here is the case statement I used for deriving data for some of the data
```
select id,cn,pid,
case
when pid is NULL then cn
when pid=1 then (select cn from categories where id=1)
when pid=13 then (select cn from categories where id=13)
END as pcn
from categories
```

<issue_comment>username_1: This is my take on this problem:
```
import itertools
nucleotide='ATC----GCTGTA--A-----T'
# group the repeated positions
gaps = [(k, sum(1 for _ in vs)) for k, vs in itertools.groupby(nucleotide)]
# text formating
summary_head = "Number of gaps = {0}"
summary_gap = """
Index Position of Gap region {0} = {2} to {3}
Length of Gap region {0} = {1}
"""
# Print output
print summary_head.format(len([g for g in gaps if g[0]=="-"]))
gcount = 1 # this will count the gap number
position = 0 # this will make sure we know the position in the sequence
for i, g in enumerate(gaps):
if g[0] == "-":
gini = position # start position current gap
gend = position + g[1] - 1 # end position current gap
print summary_gap.format(gcount, g[1], gini, gend)
gcount+=1
position += g[1]
```
This generates your expected output:
```
# Number of gaps = 3
# Index Position of Gap region 1 = 3 to 6
# Length of Gap region 1 = 4
# Index Position of Gap region 2 = 13 to 14
# Length of Gap region 2 = 2
# Index Position of Gap region 3 = 16 to 20
# Length of Gap region 3 = 5
```
**EDIT: ALTERNATIVE WITH PANDAS**
```
import itertools
import pandas as pd
nucleotide='ATC----GCTGTA--A-----T'
# group the repeated positions
gaps = pd.DataFrame([(k, sum(1 for _ in vs)) for k, vs in itertools.groupby(nucleotide)])
gaps.columns = ["type", "length"]
gaps["ini"] = gaps["length"].cumsum() - gaps["length"]
gaps["end"] = gaps["ini"] + gaps["length"] - 1
gaps = gaps[gaps["type"] == "-"]
gaps.index = range(1, gaps.shape[0] + 1)
summary_head = "Number of gaps = {0}"
summary_gap = """
Index Position of Gap region {0} = {1[ini]} to {1[end]}
Length of Gap region {0} = {1[length]}
"""
print summary_head.format(gaps.shape[0])
for index, row in gaps.iterrows():
print summary_gap.format(index, row)
```
This alternative has the benefit that if you are analyzing multiple sequences you can add the sequence identifier as an extra column and have all the data from all your sequences in a single data structure; something like this:
```
import itertools
import pandas as pd
nucleotides=['>Seq1\nATC----GCTGTA--A-----T',
'>Seq2\nATCTCC---TG--TCGGATG-T']
all_gaps = []
for nucleoseq in nucleotides:
seqid, nucleotide = nucleoseq[1:].split("\n")
gaps = pd.DataFrame([(k, sum(1 for _ in vs)) for k, vs in itertools.groupby(nucleotide)])
gaps.columns = ["type", "length"]
gaps["ini"] = gaps["length"].cumsum() - gaps["length"]
gaps["end"] = gaps["ini"] + gaps["length"] - 1
gaps = gaps[gaps["type"] == "-"]
gaps.index = range(1, gaps.shape[0] + 1)
gaps["seqid"] = seqid
all_gaps.append(gaps)
all_gaps = pd.concat(all_gaps)
print(all_gaps)
```
will generate a data container with:
```
type length ini end seqid
1 - 4 3 6 Seq1
2 - 2 13 14 Seq1
3 - 5 16 20 Seq1
1 - 3 6 8 Seq2
2 - 2 11 12 Seq2
3 - 1 20 20 Seq2
```
that you can format afterwards like:
```
for k in all_gaps["seqid"].unique():
seqg = all_gaps[all_gaps["seqid"] == k]
print ">{}".format(k)
print summary_head.format(seqg.shape[0])
for index, row in seqg.iterrows():
print summary_gap.format(index, row)
```
which can look like:
```
>Seq1
Number of gaps = 3
Index Position of Gap region 1 = 3 to 6
Length of Gap region 1 = 4
Index Position of Gap region 2 = 13 to 14
Length of Gap region 2 = 2
Index Position of Gap region 3 = 16 to 20
Length of Gap region 3 = 5
>Seq2
Number of gaps = 3
Index Position of Gap region 1 = 6 to 8
Length of Gap region 1 = 3
Index Position of Gap region 2 = 11 to 12
Length of Gap region 2 = 2
Index Position of Gap region 3 = 20 to 20
Length of Gap region 3 = 1
```
Upvotes: 0 <issue_comment>username_2: What you want is to use regular expression to find a gap (one or more dashes, which translate to '-+', the plus sign means *one or more*):
```
import re
seq = 'ATC----GCTGTA--A-----T'
matches = list(re.finditer('-+', seq))
print 'Number of gaps =', len(matches)
print
for region_number, match in enumerate(matches, 1):
print 'Index Position of Gap region {} = {} to {}'.format(
region_number,
match.start(),
match.end() - 1)
print 'Length of Gap region {} = {}'.format(
region_number,
match.end() - match.start())
print
```
Notes
=====
* `matches` is a list of match objects
* In order to get the region number, I used the function `enumerate`. You can look it up to see how it works.
* The match object has many methods, but we are interested in `.start()` which returns the start index and `.end()` which return the end index. Note that the *end index* here is one more that what you want, thus I subtracted 1 from it.
Upvotes: 3 [selected_answer]<issue_comment>username_3: A bit of a longer-winded way about this than with regex, but you could find the index of the hyphens and group them by using first-differences:
```
>>> def get_seq_gaps(seq):
... gaps = np.array([i for i, el in enumerate(seq) if el == '-'])
... diff = np.cumsum(np.append([False], np.diff(gaps) != 1))
... un = np.unique(diff)
... yield len(un)
... for i in un:
... subseq = gaps[diff == i]
... yield i + 1, len(subseq), subseq.min(), subseq.max()
>>> def report_gaps(seq):
... gaps = get_seq_gaps(seq)
... print('Number of gaps = %s\n' % next(gaps), sep='')
... for (i, l, mn, mx) in gaps:
... print('Index Position of Gap region %s = %s to %s' % (i, mn, mx))
... print('Length of Gap Region %s = %s\n' % (i, l), sep='')
>>> seq = 'ATC----GCTGTA--A-----T'
>>> report_gaps(seq)
Number of gaps = 3
Index Position of Gap region 1 = 3 to 6
Length of Gap Region 1 = 4
Index Position of Gap region 2 = 13 to 14
Length of Gap Region 2 = 2
Index Position of Gap region 3 = 16 to 20
Length of Gap Region 3 = 5
```
First, this forms an array of the indices at which you have hyphens:
```
>>> gaps
array([ 3, 4, 5, 6, 13, 14, 16, 17, 18, 19, 20])
```
Places where first differences are not 1 indicate breaks. Throw on another False to maintain length.
```
>>> diff
array([0, 0, 0, 0, 1, 1, 2, 2, 2, 2, 2])
```
Now take the unique elements of these groups, constrain `gaps` to the corresponding indices, and find its min/max.
Upvotes: 0 <issue_comment>username_4: Here is my suggestion of code, quite straight-forward, short and easy to understand, without any other imported package other than `re`:
```
import re
def findGaps(aSeq):
# Get and print the list of gaps present into the sequence
gaps = re.findall('[-]+', aSeq)
print('Number of gaps = {0} \n'.format(len(gaps)))
# Get and print start index, end index and length for each gap
for i,gap in enumerate(gaps,1):
startIndex = aSeq.index(gap)
endIndex = startIndex + len(gap) - 1
print('Index Position of Gap region {0} = {1} to {2}'.format(i, startIndex, endIndex))
print('Length of Gap region {0} = {1} \n'.format(i, len(gap)))
aSeq = aSeq.replace(gap,'*' * len(gap), 1)
findGaps("ATC----GCTGTA--A-----T")
```
Upvotes: 1
|
2018/03/15
| 2,827 | 8,436 |
<issue_start>username_0: I'm using **PHP 7.2.2**
I'm not able to understand following paragraph taken from the [PHP Manual](https://secure.php.net/manual/en/language.types.string.php#language.types.string.substr)
>
> **Warning** Writing to an out of range offset pads the string with spaces.
> Non-integer types are converted to integer. Illegal offset type emits
> **E\_NOTICE**. Only the first character of an assigned string is used. As
> of PHP 7.1.0, assigning an empty string throws a fatal error.
> Formerly, it assigned a NULL byte.
>
>
>
I've following doubts/questions in my mind regarding the above paragraph :
1. What does exactly mean by 'out of range offset' here?
2. Whose non-integer types are converted to integer. Is it an offset or a character from a string under consideration whose type conversion is going to happen?
3. What does exactly mean by 'Illegal offset type'?
4. When does 'only the first character of an assigned string is used'?
5. What does mean bye the last sentence 'Formerly, it assigned a NULL byte.'? Specifically what does mean by NULL byte?
Can someone please answer all of my doubts/questions in an easy to understand language with suitable working code example?<issue_comment>username_1: This is my take on this problem:
```
import itertools
nucleotide='ATC----GCTGTA--A-----T'
# group the repeated positions
gaps = [(k, sum(1 for _ in vs)) for k, vs in itertools.groupby(nucleotide)]
# text formating
summary_head = "Number of gaps = {0}"
summary_gap = """
Index Position of Gap region {0} = {2} to {3}
Length of Gap region {0} = {1}
"""
# Print output
print summary_head.format(len([g for g in gaps if g[0]=="-"]))
gcount = 1 # this will count the gap number
position = 0 # this will make sure we know the position in the sequence
for i, g in enumerate(gaps):
if g[0] == "-":
gini = position # start position current gap
gend = position + g[1] - 1 # end position current gap
print summary_gap.format(gcount, g[1], gini, gend)
gcount+=1
position += g[1]
```
This generates your expected output:
```
# Number of gaps = 3
# Index Position of Gap region 1 = 3 to 6
# Length of Gap region 1 = 4
# Index Position of Gap region 2 = 13 to 14
# Length of Gap region 2 = 2
# Index Position of Gap region 3 = 16 to 20
# Length of Gap region 3 = 5
```
**EDIT: ALTERNATIVE WITH PANDAS**
```
import itertools
import pandas as pd
nucleotide='ATC----GCTGTA--A-----T'
# group the repeated positions
gaps = pd.DataFrame([(k, sum(1 for _ in vs)) for k, vs in itertools.groupby(nucleotide)])
gaps.columns = ["type", "length"]
gaps["ini"] = gaps["length"].cumsum() - gaps["length"]
gaps["end"] = gaps["ini"] + gaps["length"] - 1
gaps = gaps[gaps["type"] == "-"]
gaps.index = range(1, gaps.shape[0] + 1)
summary_head = "Number of gaps = {0}"
summary_gap = """
Index Position of Gap region {0} = {1[ini]} to {1[end]}
Length of Gap region {0} = {1[length]}
"""
print summary_head.format(gaps.shape[0])
for index, row in gaps.iterrows():
print summary_gap.format(index, row)
```
This alternative has the benefit that if you are analyzing multiple sequences you can add the sequence identifier as an extra column and have all the data from all your sequences in a single data structure; something like this:
```
import itertools
import pandas as pd
nucleotides=['>Seq1\nATC----GCTGTA--A-----T',
'>Seq2\nATCTCC---TG--TCGGATG-T']
all_gaps = []
for nucleoseq in nucleotides:
seqid, nucleotide = nucleoseq[1:].split("\n")
gaps = pd.DataFrame([(k, sum(1 for _ in vs)) for k, vs in itertools.groupby(nucleotide)])
gaps.columns = ["type", "length"]
gaps["ini"] = gaps["length"].cumsum() - gaps["length"]
gaps["end"] = gaps["ini"] + gaps["length"] - 1
gaps = gaps[gaps["type"] == "-"]
gaps.index = range(1, gaps.shape[0] + 1)
gaps["seqid"] = seqid
all_gaps.append(gaps)
all_gaps = pd.concat(all_gaps)
print(all_gaps)
```
will generate a data container with:
```
type length ini end seqid
1 - 4 3 6 Seq1
2 - 2 13 14 Seq1
3 - 5 16 20 Seq1
1 - 3 6 8 Seq2
2 - 2 11 12 Seq2
3 - 1 20 20 Seq2
```
that you can format afterwards like:
```
for k in all_gaps["seqid"].unique():
seqg = all_gaps[all_gaps["seqid"] == k]
print ">{}".format(k)
print summary_head.format(seqg.shape[0])
for index, row in seqg.iterrows():
print summary_gap.format(index, row)
```
which can look like:
```
>Seq1
Number of gaps = 3
Index Position of Gap region 1 = 3 to 6
Length of Gap region 1 = 4
Index Position of Gap region 2 = 13 to 14
Length of Gap region 2 = 2
Index Position of Gap region 3 = 16 to 20
Length of Gap region 3 = 5
>Seq2
Number of gaps = 3
Index Position of Gap region 1 = 6 to 8
Length of Gap region 1 = 3
Index Position of Gap region 2 = 11 to 12
Length of Gap region 2 = 2
Index Position of Gap region 3 = 20 to 20
Length of Gap region 3 = 1
```
Upvotes: 0 <issue_comment>username_2: What you want is to use regular expression to find a gap (one or more dashes, which translate to '-+', the plus sign means *one or more*):
```
import re
seq = 'ATC----GCTGTA--A-----T'
matches = list(re.finditer('-+', seq))
print 'Number of gaps =', len(matches)
print
for region_number, match in enumerate(matches, 1):
print 'Index Position of Gap region {} = {} to {}'.format(
region_number,
match.start(),
match.end() - 1)
print 'Length of Gap region {} = {}'.format(
region_number,
match.end() - match.start())
print
```
Notes
=====
* `matches` is a list of match objects
* In order to get the region number, I used the function `enumerate`. You can look it up to see how it works.
* The match object has many methods, but we are interested in `.start()` which returns the start index and `.end()` which return the end index. Note that the *end index* here is one more that what you want, thus I subtracted 1 from it.
Upvotes: 3 [selected_answer]<issue_comment>username_3: A bit of a longer-winded way about this than with regex, but you could find the index of the hyphens and group them by using first-differences:
```
>>> def get_seq_gaps(seq):
... gaps = np.array([i for i, el in enumerate(seq) if el == '-'])
... diff = np.cumsum(np.append([False], np.diff(gaps) != 1))
... un = np.unique(diff)
... yield len(un)
... for i in un:
... subseq = gaps[diff == i]
... yield i + 1, len(subseq), subseq.min(), subseq.max()
>>> def report_gaps(seq):
... gaps = get_seq_gaps(seq)
... print('Number of gaps = %s\n' % next(gaps), sep='')
... for (i, l, mn, mx) in gaps:
... print('Index Position of Gap region %s = %s to %s' % (i, mn, mx))
... print('Length of Gap Region %s = %s\n' % (i, l), sep='')
>>> seq = 'ATC----GCTGTA--A-----T'
>>> report_gaps(seq)
Number of gaps = 3
Index Position of Gap region 1 = 3 to 6
Length of Gap Region 1 = 4
Index Position of Gap region 2 = 13 to 14
Length of Gap Region 2 = 2
Index Position of Gap region 3 = 16 to 20
Length of Gap Region 3 = 5
```
First, this forms an array of the indices at which you have hyphens:
```
>>> gaps
array([ 3, 4, 5, 6, 13, 14, 16, 17, 18, 19, 20])
```
Places where first differences are not 1 indicate breaks. Throw on another False to maintain length.
```
>>> diff
array([0, 0, 0, 0, 1, 1, 2, 2, 2, 2, 2])
```
Now take the unique elements of these groups, constrain `gaps` to the corresponding indices, and find its min/max.
Upvotes: 0 <issue_comment>username_4: Here is my suggestion of code, quite straight-forward, short and easy to understand, without any other imported package other than `re`:
```
import re
def findGaps(aSeq):
# Get and print the list of gaps present into the sequence
gaps = re.findall('[-]+', aSeq)
print('Number of gaps = {0} \n'.format(len(gaps)))
# Get and print start index, end index and length for each gap
for i,gap in enumerate(gaps,1):
startIndex = aSeq.index(gap)
endIndex = startIndex + len(gap) - 1
print('Index Position of Gap region {0} = {1} to {2}'.format(i, startIndex, endIndex))
print('Length of Gap region {0} = {1} \n'.format(i, len(gap)))
aSeq = aSeq.replace(gap,'*' * len(gap), 1)
findGaps("ATC----GCTGTA--A-----T")
```
Upvotes: 1
|
2018/03/15
| 673 | 2,689 |
<issue_start>username_0: Here, by validating, I mean:
- valid: return records
- not valid: no records returned by the query
Is there a built-in function inside neo4j so that I can quickly valid the query without running it. for example, something like schema checking.
**What I want to do**
quickly valid lots of queries, so that I can get the queries with results. The problem with running all of the queries is that some query can take lots of time to run, which would block the running of some following queries.
**Temporary solution**
One way I find out is by using `LIMIT 1` at the end of the query, it's could be much fast than the one without it when there is lots of records, but still the query is run inside the neo4j database
Thanks,<issue_comment>username_1: One option is to [prepend your queries with `EXPLAIN`](https://neo4j.com/docs/developer-manual/current/cypher/query-tuning/how-do-i-profile-a-query/) and execute that query. `EXPLAIN` is used to generate the execution plan for a query, but will also result in metadata by comparing your query to the database statistics. For example, if your query contains a Node Label that is not found in the database a warning will be returned in the metadata. The execution plan will also include estimate rows to be returned at each operation.
You can see all of this metadata in the Neo4j Browser when prepending a query with `EXPLAIN`. Using one of the Neo4j client drivers you can access this information in the [`ResultSummary` object](https://neo4j.com/docs/api/javascript-driver/current/class/src/v1/result-summary.js~ResultSummary.html) (JavaScript driver doc is linked for example).
Upvotes: 0 <issue_comment>username_2: The only way to see if your query will return anything is to actually run it. But you do not have to run it on your actual ("main") DB.
You can run your query against a smaller test DB whose data model matches the one in your main DB. And you can also tailor the test data so that you know ahead of time what your query should return.
To make this easier, the [Neo4j Desktop](https://neo4j.com/download/) conveniently allows you to create multiple "projects", each with its own DB.
[EDITED]
To make this process a bit more automated, you should take a look at [this example of using APOC procedures to export/import a DB subset](https://neo4j-contrib.github.io/neo4j-apoc-procedures/#_roundtrip_example). In your case, you would export from your main DB and import into an empty DB. That example's Cypher code randomly picks a limited number of nodes and relationships to copy, but you may want to use more sophisticated Cypher code to ensure you get the data you want.
Upvotes: 1
|
2018/03/15
| 532 | 1,781 |
<issue_start>username_0: I am using PDO in PHP. whenever I pass $id = "2abcd" (number with string suffixed) the query returns Successfully. (with data of $id=2)
in Database :id INT(11), img VARCHAR(100), uname VARCHAR(100)
```
public static function getImagePath($id) {
$sql_fetch_opp = "select img from my_user where id=:id";
$db = DAO::connect();
try {
$stmt = $db->prepare($sql_fetch_opp);
$stmt->bindParam(':id', $id);
$stmt->execute();
$imgPath = $stmt->fetch(PDO::FETCH_COLUMN);
return $imgPath;
} catch (PDOException $e) {
Err::log($e);
throw new MyException(Err::PDO_EXCPTION);
}
}
```<issue_comment>username_1: You just need to check if the provided value is the same as the value after you cast it to an integer. If it is not, then throw an exception.
```
public static function getImagePath($id) {
$sql_fetch_opp = "select img from my_user where id=:id";
$db = DAO::connect();
try {
if((string)$id !== (string)(int)$id)
{
throw new PDOException("Invalid id provided");
}
$stmt = $db->prepare($sql_fetch_opp);
$stmt->bindParam(':id', $id);
$stmt->execute();
$imgPath = $stmt->fetch(PDO::FETCH_COLUMN);
return $imgPath;
} catch (PDOException $e) {
Err::log($e);
throw new MyException(Err::PDO_EXCPTION);
}
}
```
Upvotes: 3 [selected_answer]<issue_comment>username_2: [jeroen]:(<https://stackoverflow.com/users/42139/jeroen>) has given the correct solution,
I need to check
```
if(!is_numeric($id))
throw new MyException(Err::INVALID_ID);
```
Upvotes: 0 <issue_comment>username_2: This worked
```
$stmt->bindParam(':id', $id, PDO::PARAM_INT);
```
Upvotes: 0
|
2018/03/15
| 428 | 1,599 |
<issue_start>username_0: I have a table with multiple columns. I need to the get min and max value from the entire table, but also display what category that min and max value are in. The column names I need are Asset\_Type and Asset\_Value. There are multiple (5+) asset types but I only need to show the asset type of the min value and max value.
```
SELECT Asset_Type, MAX(Asset_Value), MIN(Asset_Value)
FROM Asset
GROUP BY Asset_Type
```
This is what I have, but this diplays the min and max for each asset type, not just the min and max for the table.<issue_comment>username_1: Considering that max value may have different Asset\_type than the min value, you need to make it separate query (not taking into account here that there might be multiple Asset\_types with same min/max-value.
```
(select 'max', Asset_Type, max(Asset_Value) as 'Asset_Value'
from Asset
group by Asset_Type
order by 3 desc
limit 1)
union all
(select 'min', Asset_Type, min(Asset_Value)
from Asset
group by Asset_Type
order by 3 asc
limit 1)
```
Upvotes: 2 [selected_answer]<issue_comment>username_2: There can be many asset types with the minimum value and many with the maximum value. So simply select all asset\_types where the value either matches the minimum or the maximum value (which you look up in subqueries):
```
select distinct asset_value, asset_type
where asset_value = (select min(asset_value) from asset)
or asset_value = (select max(asset_value) from asset)
order by asset_value, asset_type;
```
There are other ways to write this query, but the idea remains the same.
Upvotes: 0
|
2018/03/15
| 755 | 3,195 |
<issue_start>username_0: I'm writing some code that uses reflection. So I'm trying to cache the expensive processing of the reflection into a `ConcurrentDictionary`. However, I want to apply a restriction limit on the concurrent dictionary to prevent storing old and unused cached values.
I did some research on my end to see how to limit the size of the `ConcurrentDictionary`. I found some interesting answers, however I don't know if the answers suits my requirements and will perform well.
I found in [Dapper](https://github.com/StackExchange/Dapper/blob/master/Dapper/SqlMapper.cs#L62) source code that they did some custom code to handle the limit of the `ConcurrentDictionary`. They have a collection limit constant that they use with [Interlocked](https://referencesource.microsoft.com/#mscorlib/system/threading/interlocked.cs,8792520ddc6dadcb) to be able to the handle the concurrency of the dictionary.
On the other hand, I found an [answer](https://stackoverflow.com/a/27404422/1504370) on SO, that uses a normal Dictionary and then applies on it a `ReaderWriterLockSlim` to handle the concurrency. I don't know if it's the same implementation in .Net source code.
Should I go with dapper implementation or the SO answer implementation?<issue_comment>username_1: The exact way to go about "Caching Information" varries a lot on Environment. Areas like WebDevelopment need totally different approaches, thanks to the massively paralell nature of the environemt and high level of seperation.
But the core thing to do caching of anything is WeakReference. Strong References prevent the collection by the GC. WeakReferences do not, but allow you to get strong references. It is the programmers way of saying "Do not keep it in memory just for the sake of this list. But if you have not collected it yet, give me a strong reference please.":
<https://msdn.microsoft.com/en-us/library/system.weakreference.aspx>
By it's nature, the GC can only collect (or tag weak references for that mater) when all other Threads are paused. So WeakReferences should not expose you to additional race conditions - it is either still there and you now have a strong reference, or it is not.
Upvotes: 0 <issue_comment>username_2: For performance, you should not use locking at all. Also, beware of hidden performance bottlenecks in the `ConcurrentDictionary` class, e.g. garbage is created when enumerating the collection (see issue [here](https://github.com/dotnet/runtime/issues/25448)).
Use [ThreadStatic](https://learn.microsoft.com/en-us/dotnet/api/system.threadstaticattribute?view=netcore-3.1)
--------------------------------------------------------------------------------------------------------------
The only sane solution for a reflection cache, is to have a separate dictionary for each thread. Yes, this costs a bit of RAM, but the performance will be superior.
```cs
public static class TypeExtensions
{
[ThreadStatic] private static Dictionary propertyInfoLookup;
private static Dictionary MemberInfoLookup =>
propertyInfoLookup ??= new Dictionary();
// Sample API
public static PropertyInfo GetPropertyInfo(this Type type) => MemberInfoLookup[type];
}
```
Upvotes: 1
|
2018/03/15
| 1,375 | 5,066 |
<issue_start>username_0: If the user clicks **X** on my main form, i want the form to hide, rather than close. This sounds like a job for the [**OnClose** form event](http://docs.embarcadero.com/products/rad_studio/delphiAndcpp2009/HelpUpdate2/EN/html/delphivclwin32/Forms_TForm_OnClose.html):
>
> Use OnClose to perform special processing when the form closes. The OnClose event specifies which event handler to call when a form is about to close. The handler specified by OnClose might, for example, test to make sure all fields in a data-entry form have valid contents before allowing the form to close.
>
>
> A form is closed by the Close method or when the user chooses Close from the form's system menu.
>
>
> The TCloseEvent type points to a method that handles the closing of a form. The value of the Action parameter determines if the form actually closes. These are the possible values of Action:
>
>
> * **caNone**: The form is not allowed to close, so nothing happens.
> * **caHide**: The form is not closed, but just hidden. Your application can still access a hidden form.
> * **caFree**: The form is closed and all allocated memory for the form is freed.
> * **caMinimize**: The form is minimized, rather than closed. This is the default action for MDI child forms.
>
>
>
Which i test in an empty application with one form:
```
procedure TForm1.FormClose(Sender: TObject; var Action: TCloseAction);
begin
Action := caHide;
end;
```
So now when i click **X**, (rather than hiding) the form closes and the application terminates:
[](https://i.stack.imgur.com/TgNfM.png)
...which sounds like a job for the **OnClose** event...
Bonus Reading
-------------
**Vcl.Forms.pas**
```
procedure TCustomForm.Close;
var
CloseAction: TCloseAction;
begin
if fsModal in FFormState then
ModalResult := mrCancel
else if CloseQuery then
begin
if FormStyle = fsMDIChild then
if biMinimize in BorderIcons then
CloseAction := caMinimize
else
CloseAction := caNone
else
CloseAction := caHide;
DoClose(CloseAction);
if CloseAction <> caNone then
begin
if Application.MainForm = Self then //Borland doesn't hate developers; it just hates me
Application.Terminate
else if CloseAction = caHide then
Hide
else if CloseAction = caMinimize then
WindowState := wsMinimized
else
Release;
end;
end;
end;
```
Bonus Reading
-------------
* [How to make hovering over Minimize, Maximize, and Close buttons behave?](https://stackoverflow.com/questions/31630280/how-to-make-hovering-over-minimize-maximize-and-close-buttons-behave)
* [Hide form instead of closing when close button clicked](https://stackoverflow.com/questions/2021681/hide-form-instead-of-closing-when-close-button-clicked)
* [How to show a modal dialog from a modeless form?](https://stackoverflow.com/questions/31705874/how-to-show-a-modal-dialog-from-a-modeless-form) *(Did Windows, WinForms, WPF, MessageBox, TaskDialog, ProgressDialog, SHFileOperation, IFileOperation all get it wrong? Nobody ever uses modeless windows?)*<issue_comment>username_1: Try the `OnCloseQuery` event. Hide the form and set `CanClose` to False. You should be good.
```
procedure TForm1.FormCloseQuery(Sender: TObject; var CanClose: Boolean);
begin
Hide;
CanClose := False;
end;
```
Upvotes: 3 <issue_comment>username_2: When the user closes a window, it receives a `WM_CLOSE` message, which triggers `TForm` to call its `Close()` method on itself. Calling `Close()` on the project's `MainForm` *always* terminates the app, as this is hard-coded behavior in `TCustomForm.Close()`:
```
procedure TCustomForm.Close;
var
CloseAction: TCloseAction;
begin
if fsModal in FFormState then
ModalResult := mrCancel
else
if CloseQuery then
begin
if FormStyle = fsMDIChild then
if biMinimize in BorderIcons then
CloseAction := caMinimize else
CloseAction := caNone
else
CloseAction := caHide;
DoClose(CloseAction);
if CloseAction <> caNone then
if Application.MainForm = Self then Application.Terminate // <-- HERE
else if CloseAction = caHide then Hide
else if CloseAction = caMinimize then WindowState := wsMinimized
else Release;
end;
end;
```
Only secondary `TForm` objects respect the output of the `OnClose` handler.
To do what you are asking for, you can either:
* handle `WM_CLOSE` directly and skip `Close()`.
```
private
procedure WMClose(var Message: TMessage); message WM_CLOSE;
procedure TForm1.WMClose(var Message: TMessage);
begin
Hide;
// DO NOT call inherited ...
end;
```
* have your MainForm's `OnClose` handler call `Hide()` directly and return `caNone`:
```
procedure TForm1.FormClose(Sender: TObject; var Action: TCloseAction);
begin
Hide;
Action := caNone;
end;
```
Upvotes: 5 [selected_answer]
|
2018/03/15
| 682 | 1,868 |
<issue_start>username_0: I have a data frame with strings under a variable with the `|` character. What I want is to remove anything downstream of the `|` character.
For example, considering the string
```
heat-shock protein hsp70, putative | location=Ld28_v01s1:1091329-1093293(-) | length=654 | sequence_SO=chromosome | SO=protein_coding
```
I wish to have only:
```
heat-shock protein hsp70, putative
```
Do I need any escape character for the `|` character?
If I do:
```
a <- c("foo_5", "bar_7")
gsub("*_.", "", a)
```
I get:
```
[1] "foo" "bar"
```
i.e. I am removing anything downstream of the `_` character.
However, If I repeat the same task with a `|` instead of the `_`:
```
b <- c("foo|5", "bar|7")
gsub("*|.", "", a)
```
I get:
```
[1] "" ""
```<issue_comment>username_1: You have to scape `|` by adding `\\|`. Try this
```
> gsub("\\|.*$", "", string)
[1] "heat-shock protein hsp70, putative "
```
where `string` is
```
string <- "heat-shock protein hsp70, putative | location=Ld28_v01s1:1091329-1093293(-) | length=654 | sequence_SO=chromosome | SO=protein_coding"
```
This alternative remove the space at the end of line in the output
```
gsub("\\s+\\|.*$", "", string)
[1] "heat-shock protein hsp70, putative"
```
Upvotes: 3 [selected_answer]<issue_comment>username_2: Maybe a better job for `strsplit` than for a `gsub`
And yes, it looks like the pipe does need to be escaped.
```
string <- "heat-shock protein hsp70, putative | location=Ld28_v01s1:1091329-1093293(-) | length=654 | sequence_SO=chromosome | SO=protein_coding"
strsplit(string, ' \\| ')[[1]][1]
```
That outputs
```
"heat-shock protein hsp70, putative"
```
Note that I'm assuming you only want the text from before the first pipe, and that you want to drop the space that separates the pipe from the piece of the string you care about.
Upvotes: 0
|
2018/03/15
| 466 | 1,331 |
<issue_start>username_0: Here is my test set:
```
master_ref ref value
56279276 56279325 FRAME ASSEMBLY1
56279276 384062724 FRAME ASSEMBLY2
56279276 443450775 FRAME ASSEMBLY3
```
I want to retrieve the value field based on the highest ref given a master\_ref.
Here is what I tried that just return everything:
```
select first_value(value) over (partition by ida3masterreference order by ida3a4 desc) value, ida3masterreference, ida3a4 from sts_epm_title1;
```
I expected to only get one result:
```
master_ref ref value
56279276 443450775 FRAME ASSEMBLY3
```
But it still returns all 3 results. What am I doing wrong? Thanks!<issue_comment>username_1: You could use `ROW_NUMBER/RANK`:
```
WITH cte AS (
select row_number() over (partition by ida3masterreference
order by ida3a4 desc) AS rn, t.*
from sts_epm_title1 t
)
SELECT *
FROM cte
WHERE rn = 1;
```
Upvotes: 0 <issue_comment>username_2: `first_value()` is an analytic function, so it does not reduce the number of rows. You apparently want an aggregation function so use the `keep` syntax:
```
select max(value) keep (dense_rank first order by ida3a4 desc) as value,
ida3masterreference, max(ida3a4) as ida3a4
from sts_epm_title1
group by ida3masterreference
```
Upvotes: 2
|
2018/03/15
| 495 | 1,630 |
<issue_start>username_0: Here i am trying to take top 2 highest salary in my table. i am trying mysql but it is throwing error.
My Table
```
CREATE TABLE `employees` (
`empId` int(11) NOT NULL AUTO_INCREMENT,
`firstName` varchar(100) COLLATE utf16_bin NOT NULL,
`lastName` varchar(100) COLLATE utf16_bin NOT NULL,
`gender` enum('Male','Female') COLLATE utf16_bin NOT NULL,
`salary` double NOT NULL,
PRIMARY KEY (`empId`)
) ENGINE=InnoDB AUTO_INCREMENT=6 DEFAULT CHARSET=utf16 COLLATE=utf16_bin
```
SQL
```
SELECT DISTINCT TOP 2 `salary` FROM `employees` ORDER BY `salary` DESC
```
ERROR
>
> 1064 - You have an error in your SQL syntax; check the manual that corresponds to your MySQL server version for the right syntax to use near '2 `salary` FROM `employees` ORDER BY `salary` DESC
>
>
><issue_comment>username_1: Not all database systems support the SELECT TOP clause.
In your case you can use
```
SELECT * FROM `employees` ORDER BY `salary` DESC LIMIT 2
```
Upvotes: 1 <issue_comment>username_2: >
> My main question is i have to find N th salary,using your query how we
> can achive
>
>
>
One and i think the best option is to use MySQL's user variables.
**Query**
```
SELECT
*
FROM (
SELECT
*
, (@rank := @rank + 1 ) AS rank
FROM
employees
CROSS JOIN ( SELECT @rank := 0 ) AS init_user_var
ORDER BY
employees.salary DESC
)
AS employees_salary_ranked
WHERE
employees_salary_ranked.rank = [number]
```
Upvotes: 0 <issue_comment>username_3: to find nth highest salary-
```
SELECT salary FROM Employee ORDER BY salary DESC LIMIT N-1, 1
```
Upvotes: 0
|
2018/03/15
| 534 | 1,647 |
<issue_start>username_0: I want to perform a column summation and once a certain threshold is met, I want to erase the summed so far value and not allow it to add any new values for the next 3 steps, but I am not sure how to do this. Can someone help me. Here is what I have so far.
```
for (i = 0; i < col; i++){
sumC = 0;
for (j = 0; j < row; j++)
{
sumC += matrix[j][i];
if(sumC>1.5){
sumC=0;
}
}
}
```<issue_comment>username_1: You could just advance the inner loop counter, e.g.
```
for (i = 0; i < col; i++) {
sumC = 0;
for (j = 0; j < row; j++)
{
sumC += matrix[j][i];
if (sumC > 1.5) {
sumC = 0;
j += 3; // <<<
}
}
}
```
NB: this assumes that you don't want to carry over the "next 3 steps" into the following column in the case where the threshold is reached near the end of a column.
Upvotes: 3 <issue_comment>username_2: A possible solution is to add a counter `c` that resets after 3 steps:
```
int c = 0;
for (i = 0; i < col; i++) {
sumC = 0;
for (j = 0; j < row; j++)
{
if (c == 0) {
sumC += matrix[j][i];
} else {
c = (c + 1) % 4;
}
if (sumC > 1.5) {
sumC = 0;
c++;
}
}
}
```
Upvotes: 2 [selected_answer]<issue_comment>username_3: I don't know whether I understood corectelly:
put a variabke before the loop, let's say double Sumc2=Threshold.
Put a condituon IF, when reached the value to reset the Sumc and, in the sane sequence, encrease the step you want (i or j) with 3.
Upvotes: -1
|
2018/03/15
| 1,179 | 4,039 |
<issue_start>username_0: Am still new to Reactjs, thought have tried all I can, but still can't get through this error **"`TypeError: Cannot read property 'id' of undefined`"**
please someone help below is my code.
```
if(this.props.controller === 'location'){
eventData = Object.keys(this.props.heda).map( evKey => {
return Object.keys(evKey).map( post => {
return [...Array(this.props.heda[evKey][post])].map( lol => {
return this.viewSelectedEvent(lol['id']) }/>;
}) ;
});
});
}
```
Error full stacktrace:
[](https://i.stack.imgur.com/HJdjH.png)
here is my data from flask that am trying to loop.Am trying to convert each object into an array, then loop through the array. I also tried to console.log(lol) and i get the data as in the image below
[](https://i.stack.imgur.com/84DzZ.png)
```
events = [
{
'id': 1,
'title': u'HHGHHMjshjskksjks',
'description': u'Cras justo odio dapibus ac facilisis in egestas eget qua ',
'location':'jkknxjnj',
'category':'party',
'rsvp': False,
'event_owner':1
},
{
'id': 2,
'title': u'khjhjshjsjhdndjdh',
'description': u'jhhnbsbnsbj',
'location':'jhjhsjhjhsjhjdhsd',
'category':'party',
'rsvp': False,
'event_owner':2
},
{
'id': 3,
'title': u'jhjshjsdhjshdjshjsd',
'description': u'Cras justo odio, dapibus ac facilisis in, egestas eget quam. Donec elit non mi porta gravida at eget metus.',
'location':'kjkshjhjhjbsnbsd',
'category':'party',
'rsvp': False,
'event_owner':2
},
{
'id': 4,
'title': u'jjhjshjhsjhjshjjhjhd',
'description': u'Cras justo odio, dapibus ac facilisis in, egestas eget quam. Donec elit non mi porta gravida at eget metus.',
'location':'kjisisiisdsds',
'category':'party',
'rsvp': False,
'event_owner':2
},
{
'id': 5,
'title': u'uiujsdshuuihuyksjhjs',
'description': u'Cras justo odio, dapibus ac facilisis in, egestas eget quam. Donec elit non mi porta gravida at eget metus.',
'location':'sjnsisuis',
'category':'party',
'rsvp': False,
'event_owner':2
},
{
'id': 6,
'title': u'iusijksuiksuhj suyuys jhu ',
'description': u'Cras justo odio, dapibus ac facilisis in, egestas eget quam. Donec elit non mi porta gravida at eget metus.',
'location':'isuisiiws',
'category':'party',
'rsvp': False,
'event_owner':2
},
{
'id': 7,
'title': u'<NAME>',
'description': u'Cras justo odio, dapibus ac facilisis in, egestas eget quam. Donec elit non mi porta gravida at eget metus.',
'location':'area h',
'category':'party',
'rsvp': False,
'event_owner':2
},
]
```<issue_comment>username_1: With `[...Array(this.props.heda[evKey][post])]` you create an array which has as the first element the array `this.props.heda[evKey][post])`.
Maybe you wanted to say `[...this.props.heda[evKey][post]]` to create a clone of the array?
Upvotes: 1 <issue_comment>username_2: Because he is not able to find id in `this.props.heda[evKey][post]` object. So firstly you should console your `this.props.heda[evKey][post]` object after that I think you will get your actual problem.
Upvotes: 0 <issue_comment>username_3: Please try below snippet:
```
this.props.heda.map((evVal, evKey) => {
return evVal.map((postVal, postKey) => {
...
}
}
```
Now this.props.heda[evKey][postKey] will give you the expected object.
Upvotes: 0
|
2018/03/15
| 928 | 3,146 |
<issue_start>username_0: I'm trying to create a new branch which contains a different version of my project. Unfortunately the newer version's files, while they have different contents, don't get noticed by git as changed and can't be committed.
The folder\files are nearly identical and were placed into the directory at the same time from a backup. The contents of some files are different and I need these changes reflected in a new branch.
By way of example, take this simple mockup I've tried using 2 text files.
File structure:
```
project/
├── older/
│ ├── File 1.txt
│ ├── File 2.txt
├── newer/
│ ├── File 1.txt
│ └── File 2.txt
```
Structure is similar for my actual project, just with a lot more files and subfolders.
```
Mr JF@Computer MINGW64 ~/Desktop/testproject (master)
$ git checkout master
Switched to branch 'master'
Your branch is up to date with 'origin/master'.
Mr JF@Computer MINGW64 ~/Desktop/testproject (master)
$ stat -c "%y %s %n" *
2018-03-15 15:43:35.764654900 +0000 15 File 1.txt
2018-03-15 15:43:35.765656300 +0000 17 file 2.txt
Mr JF@Computer MINGW64 ~/Desktop/testproject (master)
$ git checkout -b newerbranch
Switched to branch 'newerbranch'
```
I copy the newer version of File 1 & 2.txt into the repository here, then:
```
Mr JF@Computer MINGW64 ~/Desktop/testproject (newerbranch)
$ git status
On branch newerbranch
nothing to commit, working tree clean
```
What's going on here?<issue_comment>username_1: The comment section isnt big enough for me to paste my own test so ill use this. If i were you i'd double check that the files you copied into the branch really are different. When i repeat your test it works fine:
```
test $ls
file1.txt file2.txt
test $git status
On branch newerbranch
nothing to commit, working tree clean
test $mkdir new
test $cd new
new $echo "new file1" > file1.txt
new $echo "new file2" > file2.txt
new $cd ..
test $rm file1.txt file2.txt
test $mv new/* .
test $rm -rf new/
test $git status
On branch newerbranch
Changes not staged for commit:
(use "git add ..." to update what will be committed)
(use "git checkout -- ..." to discard changes in working directory)
modified: file1.txt
modified: file2.txt
no changes added to commit (use "git add" and/or "git commit -a")
```
Upvotes: 0 <issue_comment>username_2: I would try these things:
1. Make sure that the files are not ignored by running `git check-ignore -v path/to/file`
2. Run `git update-index --really-refresh` to refresh the index and ignore any files that may have been muted using `git update-index --assume-unchanged`
If indeed this is not a user error, I would assume the issue lies with the underlying filesystem and/or how Git interfaces with it.
Upvotes: 5 [selected_answer]<issue_comment>username_3: So I developed this issue as well. It turns out that since you are certain the only file changes is the folder name. Just rename the working directory folder to something else. Git will pick the new change. Commit the new changes. Then you can rename it to the right name you want. Git will pick it up the changes again and then git has the updated folder name.
Upvotes: 1
|
2018/03/15
| 726 | 2,654 |
<issue_start>username_0: I am trying out python, basically newbie. What I wanted to do is to save or store all generated list into one list and access that list later. The list is generated from a loop.
Basically my code is:
```
def doSomething():
list = []
....some parsing procedures....
......
...here needed texts are extracted...
...so I am looping to extract all texts I need...
for a in something:
list.append(a)
```
After the first execution, list is now populated...
then the program proceeds into the next page which is basically the same structure and then again invoke the doSomething function.
I hope this is now clear..
Assuming the first, second and third loop etc. generated this:
```
1st loop: [1,2,3]
2nd loop: [4,5,6]
3rd loop: [7,8,9]
```
I wanted to save these lists into one list and access it later so that:
```
alllist = [1,2,3,4,5,6,7,8,9]
```
How can I achieve this?<issue_comment>username_1: The comment section isnt big enough for me to paste my own test so ill use this. If i were you i'd double check that the files you copied into the branch really are different. When i repeat your test it works fine:
```
test $ls
file1.txt file2.txt
test $git status
On branch newerbranch
nothing to commit, working tree clean
test $mkdir new
test $cd new
new $echo "new file1" > file1.txt
new $echo "new file2" > file2.txt
new $cd ..
test $rm file1.txt file2.txt
test $mv new/* .
test $rm -rf new/
test $git status
On branch newerbranch
Changes not staged for commit:
(use "git add ..." to update what will be committed)
(use "git checkout -- ..." to discard changes in working directory)
modified: file1.txt
modified: file2.txt
no changes added to commit (use "git add" and/or "git commit -a")
```
Upvotes: 0 <issue_comment>username_2: I would try these things:
1. Make sure that the files are not ignored by running `git check-ignore -v path/to/file`
2. Run `git update-index --really-refresh` to refresh the index and ignore any files that may have been muted using `git update-index --assume-unchanged`
If indeed this is not a user error, I would assume the issue lies with the underlying filesystem and/or how Git interfaces with it.
Upvotes: 5 [selected_answer]<issue_comment>username_3: So I developed this issue as well. It turns out that since you are certain the only file changes is the folder name. Just rename the working directory folder to something else. Git will pick the new change. Commit the new changes. Then you can rename it to the right name you want. Git will pick it up the changes again and then git has the updated folder name.
Upvotes: 1
|
2018/03/15
| 1,065 | 3,941 |
<issue_start>username_0: like title says, I can not install font awesome 5 with npm for scss purposes.
Trying to install version 5
---------------------------
Accoring to <https://fontawesome.com/how-to-use/use-with-node-js>
`npm i --save @fortawesome/fontawesome`
Looking through the installation in node\_modules I see no scss file to hook up to. I tried including the styles.css file in my app.scss file but that did not work.
My setup for version 4:
-----------------------
package.json
`"font-awesome": "^4.7.0",`
app.scss
`@import "node_modules/font-awesome/scss/font-awesome.scss";`
Usage
Easy as pie. How can I achieve this with version 5?? Am I using the wrong package?
---
UPDATE
======
Apparently, just using @fortawesome/fontawesome is not eanough. The packages have been split up so you also have to select
`npm install --save @fortawesome/fontawesome-free-regular`
Still I have no success importing it<issue_comment>username_1: My understanding of FontAwesome 5 is that they use javascript to add inline SVGs to the DOM.
Take a look at this article: [SVG with JS](https://fontawesome.com/how-to-use/svg-with-js)
Rather than compiling an scss file, you just need to include this javascipt one: `fontawesome-all.js`, any icons you add to your HTML should then be converted to SVGs automatically.
Upvotes: 0 <issue_comment>username_2: ```
npm install --save-dev @fortawesome/fontawesome
npm install --save-dev @fortawesome/free-regular-svg-icons
npm install --save-dev @fortawesome/free-solid-svg-icons
npm install --save-dev @fortawesome/free-brands-svg-icons
```
In your `app.js` or equivalent Javascript file,
```
import fontawesome from '@fortawesome/fontawesome'
import regular from '@fortawesome/free-regular-svg-icons'
import solid from '@fortawesome/free-solid-svg-icons'
import brands from '@fortawesome/free-brands-svg-icons'
fontawesome.library.add(regular)
fontawesome.library.add(solid)
fontawesome.library.add(brands)
```
For usage, there are slight changes to the way the class names are being used. Please refer to the icons on Fontawesome site for the "full" class names.
Example
```
```
---
Although the idea of adding all the 3 variants of fonts into the project seems to be a convenient thing, do beware that this will slow performance when building/compiling your project. Thus, it is highly recommended that you add the fonts you need instead of everything.
Upvotes: 5 [selected_answer]<issue_comment>username_3: In case if you'd like to extract all the Font Awesome specific Javascript files to the `vendor.js` using Laravel Mix, you'll need to only extract the packages to the `extract()` method as an array. You won't even need to use the `fontawesome.library.add()` method to add support for the fonts. This will help you to manage the vendor specific file cache in future.
```
mix.js('resources/assets/js/app.js', 'public/js')
.extract([
'@fortawesome/fontawesome',
'@fortawesome/fontawesome-free-brands',
'@fortawesome/fontawesome-free-regular',
'@fortawesome/fontawesome-free-solid',
'@fortawesome/fontawesome-free-webfonts'
]);
```
Upvotes: 2 <issue_comment>username_4: Somehow including fonts in javascript doesn't feel right to me, it's a font and it should belong in css, so here is how to install it using scss:
1. `npm install --save @fortawesome/fontawesome-free`
2. add `@import '~@fortawesome/fontawesome-free/css/all.min.css'` in your app.scss file
3. now include app.css to your head and tadaa done
Upvotes: 4 <issue_comment>username_5: Add the package;
```
npm install --save @fortawesome/fontawesome-free
```
and add the scss files in your app.scss;
```
$fa-font-path: "~@fortawesome/fontawesome-free/webfonts";
@import "~@fortawesome/fontawesome-free/scss/fontawesome.scss";
@import "~@fortawesome/fontawesome-free/scss/regular.scss";
```
then refer the icon in your html files;
```
```
.
Upvotes: 2
|
2018/03/15
| 915 | 3,484 |
<issue_start>username_0: All,
I have an Excel Spreadsheet that has a row of Images that were add to the sheet by Insert\Picture From File popup window. Instead of imbedding the image I choose the option to link to File. I’m now moving the sheet to an Access DB but I can’t figure out how to extract the path information for each linked image from the image row?
Does anyone know how I would accomplish this? Any help would be greatly appreciated, thanks in advance - CES<issue_comment>username_1: My understanding of FontAwesome 5 is that they use javascript to add inline SVGs to the DOM.
Take a look at this article: [SVG with JS](https://fontawesome.com/how-to-use/svg-with-js)
Rather than compiling an scss file, you just need to include this javascipt one: `fontawesome-all.js`, any icons you add to your HTML should then be converted to SVGs automatically.
Upvotes: 0 <issue_comment>username_2: ```
npm install --save-dev @fortawesome/fontawesome
npm install --save-dev @fortawesome/free-regular-svg-icons
npm install --save-dev @fortawesome/free-solid-svg-icons
npm install --save-dev @fortawesome/free-brands-svg-icons
```
In your `app.js` or equivalent Javascript file,
```
import fontawesome from '@fortawesome/fontawesome'
import regular from '@fortawesome/free-regular-svg-icons'
import solid from '@fortawesome/free-solid-svg-icons'
import brands from '@fortawesome/free-brands-svg-icons'
fontawesome.library.add(regular)
fontawesome.library.add(solid)
fontawesome.library.add(brands)
```
For usage, there are slight changes to the way the class names are being used. Please refer to the icons on Fontawesome site for the "full" class names.
Example
```
```
---
Although the idea of adding all the 3 variants of fonts into the project seems to be a convenient thing, do beware that this will slow performance when building/compiling your project. Thus, it is highly recommended that you add the fonts you need instead of everything.
Upvotes: 5 [selected_answer]<issue_comment>username_3: In case if you'd like to extract all the Font Awesome specific Javascript files to the `vendor.js` using Laravel Mix, you'll need to only extract the packages to the `extract()` method as an array. You won't even need to use the `fontawesome.library.add()` method to add support for the fonts. This will help you to manage the vendor specific file cache in future.
```
mix.js('resources/assets/js/app.js', 'public/js')
.extract([
'@fortawesome/fontawesome',
'@fortawesome/fontawesome-free-brands',
'@fortawesome/fontawesome-free-regular',
'@fortawesome/fontawesome-free-solid',
'@fortawesome/fontawesome-free-webfonts'
]);
```
Upvotes: 2 <issue_comment>username_4: Somehow including fonts in javascript doesn't feel right to me, it's a font and it should belong in css, so here is how to install it using scss:
1. `npm install --save @fortawesome/fontawesome-free`
2. add `@import '~@fortawesome/fontawesome-free/css/all.min.css'` in your app.scss file
3. now include app.css to your head and tadaa done
Upvotes: 4 <issue_comment>username_5: Add the package;
```
npm install --save @fortawesome/fontawesome-free
```
and add the scss files in your app.scss;
```
$fa-font-path: "~@fortawesome/fontawesome-free/webfonts";
@import "~@fortawesome/fontawesome-free/scss/fontawesome.scss";
@import "~@fortawesome/fontawesome-free/scss/regular.scss";
```
then refer the icon in your html files;
```
```
.
Upvotes: 2
|
2018/03/15
| 794 | 2,833 |
<issue_start>username_0: I have a very simple mongo scheme I'm accessing with mongoose
I can map the username and firstname to each notification's from field by using populate, the issue is I can't seem to get any sorting to work on the date field
With this code I get an error of
>
> MongooseError: Cannot populate with sort on path notifications.from
> because it is a subproperty of a document array
>
>
>
Is it possible to do this a different way, or newer way (deep populate, virtuals)? I'm on Mongoose 5.
I'd rather not use vanilla javascript to sort the object afterwards or create a separate schema
```
var UserSchema = new Schema({
username: String,
firstname: String,
notifications: [
{
from: { type: Schema.Types.ObjectId, ref: 'User'},
date: Date,
desc: String
}
]
});
app.get('/notifications', function(req, res) {
User.findOne({ _id: req._id }, 'notifications')
.populate({
path: 'notifications.from',
populate: {
path: 'from',
model: 'User',
options: { sort: { 'notifications.date': -1 } }
}
})
.exec(function(err, user) {
if (err) console.log(err)
})
});
```
That possible duplicate is almost 2 years old about Mongo. I'm asking if there are newer or different ways of doing this in Mongoose as it has changed a bit since 2016 with newer features.<issue_comment>username_1: From Mongoose V5.0.12 FAQ : <http://mongoosejs.com/docs/faq.html#populate_sort_order>
>
> **Q.** I'm populating a nested property under an array like the below
> code:
>
>
> new Schema({
> arr: [{
> child: { ref: 'OtherModel', type: Schema.Types.ObjectId }
> }] });
>
>
> .populate({ path: 'arr.child', options: { sort: 'name' } }) won't sort by arr.child.name?
>
>
> **A.** See [this GitHub issue](https://github.com/Automattic/mongoose/issues/2202). It's a known issue but one that's
> exceptionally difficult to fix.
>
>
>
So unfortunately, for now, it's not possible,
One way to achieve this is to simply use `javascript`'s native `sort` to sort the notifications after fetching.
```
.exec(function(err, user) {
if (err) console.log(err)
user.notifications.sort(function(a, b){
return new Date(b.date) - new Date(a.date);
});
})
```
Upvotes: 2 <issue_comment>username_2: ```
It can be achievable using nesting populate like this -
eg - schema - {donationHistory: {campaignRequestId: [ref ids]}}
await user.populate({
path: 'donationHistory.campaignRequestId',
populate: [{
path: 'campaignRequestId',
model: 'CampaignRequest',
options: { sort: { 'createdAt': -1 } },
}],
...deepUserPopulation,
}).execPopulate();
```
Upvotes: 0
|
2018/03/15
| 748 | 2,347 |
<issue_start>username_0: I have a function that generates a random 3-character alpha-numeric string. I need to modify it in such a way that the new string consisted of 2 alpha and 2 numeric characters. The combination of numbers and letters can be random.
```
function generate_random($length = 3) {
$characters = '123456789ABCDEFGHJKLMNPRSTUVWXYZ';
$rand_str = '';
for ($p = 0; $p < $length; $p++) {
$rand_str .= $characters[mt_rand(0, strlen($characters)-1)];
}
return $rand_str;
}
```
I need to modify it in such a way that the new string consisted of 2 alpha and 2 numeric characters. The combination of numbers and letters can be random. How do I do that?<issue_comment>username_1: You could separe numbers and letter. Then, append N values of each into an array, shuffle it, and the implode to get your string:
```
function generate_random($nNumbers = 2, $nAlpha = 2) {
// prepare data to use
$num = '123456789';
$numlen = strlen($num) - 1;
$alpha = 'ABCDEFGHJKLMNPRSTUVWXYZ';
$alphalen = strlen($alpha) - 1;
$out = []; // New array
// generate N numbers
for ($i = 0; $i < $nNumbers ; $i++) {
$out[] = $num[mt_rand(0, $numlen)];
}
// generate N letters
for ($i = 0; $i < $nAlpha ; $i++) {
$out[] = $alpha[mt_rand(0, $alphalen)];
}
shuffle($out); // Shuffle the array
return implode($out); // Convert to string
}
echo generate_random() ;
// echo generate_random(2, 4) ; // example
```
Upvotes: 2 [selected_answer]<issue_comment>username_2: I would personally do it this way:
```
function generate_random($countAlpha = 2, $countNumeric = 2, $randomize = true) {
$alpha = 'ABCDEFGHJKLMNPRSTUVWXYZ';
$numeric = '123456789';
$rand_str = '';
for ($p = 0; $p < $countAlpha; $p++) {
$rand_str .= $alpha[mt_rand(0, strlen($alpha)-1)];
}
for ($p = 0; $p < $countNumeric; $p++) {
$rand_str .= $numeric[mt_rand(0, strlen($numeric)-1)];
}
if($randomize) {
$rand_str = str_split($rand_str);
shuffle($rand_str);
return implode($rand_str);
}
return $rand_str;
}
```
Inside I have 2 for loops, each one based on parameters `$countAlpha` and `$countNumeric`. I also have a 3rd parameter, `$randomize` that will allow you to randomize the output if you wish.
Upvotes: 2
|
2018/03/15
| 589 | 2,187 |
<issue_start>username_0: I have a primeng table (the new module) with editable cells displaying simple data. Outside the table I have a button to add new empty objects/rows to the table and I would like to jump to edit mode in the new cell programmatically when the button is clicked.
I am able to add the empty row to the table but I have no idea how to switch the new row cell element into edit mode.
I am getting the EditableColumn like this: `@ViewChild(EditableColumn) editableColumn: EditableColumn;`
```
...
...
|
```
On the `editableColumn` I am able to call the openCell() method but it always jumps to the first generated cell and not to the new generated row cell.
Can anyone help with this?
Are there any better approaches to achieve my goal?<issue_comment>username_1: This can be easily done with:
Primeng table:
```
```
Typescript:
```
addRow(dt: Table) { dt.editingCell = document.getElementById('cellId');}
```
Upvotes: -1 <issue_comment>username_2: The trick to this one is to call onClick(event) on the editable column you would like to open.
```
editableColumn.onClick(e);
```
You can get a reference to the column a number of ways, one is using ViewChildren
```
@ViewChildren(EditableColumn) private editableColumns: QueryList;
```
Upvotes: 1 <issue_comment>username_3: Having the same problem. I've figured out a solution for my case of columns being generated dynamically. I had to open one specific cell of each row if sibling table fields cells were clicked. Eg. on the click of a city field, I should open Address cell.
I'm aware the solution might look ugly, but it's the only possible solution I could find.
TS:
```
@ViewChild(EditableColumn) editableColumn: EditableColumn
@ViewChildren(EditableColumn) private editableColumns: QueryList
openGoogleAddress(event): void {
if (event?.path) {
const currentRowAddressCell = this.editableColumns.filter(
item => item?.el?.nativeElement === event?.path[3]?.children[1]
)
this.editableColumn = currentRowAddressCell[0]
setTimeout(() => {
this.editableColumn.openCell()
}, 0)
}
}
```
HTML:
```
{{
(rowData[col.value][col.sublabel] | titlecase) || emptyDash
}}
```
Upvotes: 2
|
2018/03/15
| 1,411 | 4,786 |
<issue_start>username_0: When a Sqlite column is declared as a `timestamp`, I understand that this works:
```
import sqlite3, datetime
dbconn = sqlite3.connect(':memory:', detect_types=sqlite3.PARSE_DECLTYPES)
c = dbconn.cursor()
c.execute('create table mytable(title text, t timestamp)')
c.execute('insert into mytable (title, t) values (?, ?)', ("hello", datetime.datetime(2018,3,10,12,12,00)))
c.execute('insert into mytable (title, t) values (?, ?)', ("hello2", datetime.datetime(2018,3,1,0,0,00)))
d1 = datetime.datetime(2018,3,10)
d2 = datetime.datetime(2018,3,11)
c.execute("select * from mytable where t >= ? and t < ?", (d1, d2))
for a in c.fetchall():
print a
# Result: (u'hello', datetime.datetime(2018, 3, 10, 12, 12))
```
But why does it still work when column `t` is defined as `TEXT`? It seems, according to [this doc](https://www.sqlite.org/datatype3.html#date_and_time_datatype), that the datetime will then be stored as a string. Then **why does such a `>=`, `<` comparison still work?** Is it coincidence or good practice?
```
import sqlite3, datetime
dbconn = sqlite3.connect(':memory:', detect_types=sqlite3.PARSE_DECLTYPES)
c = dbconn.cursor()
c.execute('create table mytable(title text, t text)') # <--- here t has TEXT type
c.execute('insert into mytable (title, t) values (?, ?)', ("hello", datetime.datetime(2018,3,10,12,12,00)))
c.execute('insert into mytable (title, t) values (?, ?)', ("hello2", datetime.datetime(2018,3,1,0,0,00)))
d1 = datetime.datetime(2018,3,10)
d2 = datetime.datetime(2018,3,11)
c.execute("select * from mytable where t >= ? and t < ?", (d1, d2))
for a in c.fetchall():
print a
# Result: (u'hello', u'2018-03-10 12:12:00')
```
The strange thing is: **if a datetime is just stored as a string, why does such a time interval comparison work?** Is it just luck that the strings are compared (how with `<` and `>`?) correctly here?<issue_comment>username_1: >
> why does it still work when column t is defined as TEXT? It seems, according to > this doc, that the datetime will then be stored as a string. Then why does such > a >=, < comparison still work? Is it coincidence or good practice?
>
>
>
Regardless of whether column t is defined as TEXT or TIMESTAMP, the datetime is stored as a string.
>
> Is it just luck that the strings are compared (how with < and >?) correctly here?
>
>
>
Yes. In Python 2, strings are compared by the ASCII values of their characters, from left to right. So if you have two strings, "001" and "010", python compares the first characters: "0" and "0", finds them to be equal, and then compares the next characters: "0" and "1". Because "1" has a greater ASCII value than "0", the string "010" is greater than "001".
If you are worried about different datetime formats, you can use the sqlite's datetime functions (<https://www.sqlite.org/lang_datefunc.html>) to convert between them.
Upvotes: 0 <issue_comment>username_2: The [ISO 8601](https://en.wikipedia.org/wiki/ISO_8601) date/time format `YYYY-MM-DD HH:MM::SS` is specifically designed to permit lexicographic sorting that corresponds to chronological sort order. So, comparisons between strings in this format work because that's the way it is designed to work.
According to the [SQLite docs](https://sqlite.org/datatype3.html#assigning_collating_sequences_from_sql), whenever 2 strings are compared, the applicable collation sequence is used to compare the 2 strings. The collation sequence used depends on the following rules:
>
> 7.1. Assigning Collating Sequences from SQL
>
>
> Every column of every table has an associated collating function. If no collating function is explicitly defined, then the collating function defaults to BINARY. The COLLATE clause of the column definition is used to define alternative collating functions for a column.
>
>
> The rules for determining which collating function to use for a binary comparison operator (=, <, >, <=, >=, !=, IS, and IS NOT) are as follows:
>
>
> 1. If either operand has an explicit collating function assignment using the postfix COLLATE operator, then the explicit collating function is used for comparison, with precedence to the collating function of the left operand.
> 2. If either operand is a column, then the collating function of that column is used with precedence to the left operand. For the purposes of the previous sentence, a column name preceded by one or more unary "+" operators is still considered a column name.
> 3. Otherwise, the BINARY collating function is used for comparison.
>
>
>
Note that storing date/time values as strings in any other format that does not compare the same lexicographically as chronologically results in date/time comparisons that are usually wrong.
Upvotes: 2 [selected_answer]
|
2018/03/15
| 2,198 | 7,803 |
<issue_start>username_0: I am using Hacker Rank challenges to teach myself BASH, and I'm in need of some advice.
I'm specifically trying to solve this challenge: [Apple and Oranges by nabila\_ahmed](https://www.hackerrank.com/challenges/apple-and-orange/problem)
I need to read in multiple lines of `ints` separated by spaces, on multiple lines. I decided to use `awk` to do this because it seems a lot more efficient in memory storage than using `read`. (I tried a couple of solutions using `read` and they timed out, because the test cases are really big.)
Example input:
```
7 11
5 15
3 2
-2 2 1
5 -6
```
This is my first attempt in bash and it timed out:
```
row=0
while read line || [[ -n $line ]]; do
if [ "$row" -eq 0 ]
then
column=0
for n in $line; do
if [ "$column" -eq 0 ]
then
housePos1=$n
elif [ "$column" -eq 1 ]
then
housePos2=$n
fi
((column++))
done
# Calculate house min and max
if [ "$housePos1" -gt "$housePos2" ]
then
minHousePos=$housePos2
maxHousePos=$housePos1
else
minHousePos=$housePos1
maxHousePos=$housePos2
fi
elif [ "$row" -eq 1 ]
then
column=0
for n in $line; do
if [ "$column" -eq 0 ]
then
appleTreePos=$n
elif [ "$column" -eq 1 ]
then
orangeTreePos=$n
fi
((column++))
done
elif [ "$row" -eq 3 ]
then
applesInHouse=0
for n in $line; do
# Calculate the apple's position
let applePos=$((appleTreePos + n))
# If the apple's position is within the houses position, count it
if [ "$applePos" -ge "$minHousePos" ] && [ "$applePos" -le "$maxHousePos" ]
then
((applesInHouse++))
fi
done
elif [ "$row" -eq 4 ]
then
orangesInHouse=0
for n in $line; do
# Calculate the apple's position
let orangePos=$((orangeTreePos + n))
# If the apple's position is within the houses position, count it
if [ "$orangePos" -ge "$minHousePos" ] && [ "$orangePos" -le "$maxHousePos" ]
then
((orangesInHouse++))
fi
done
fi
((row++))
done
echo "$applesInHouse"
echo "$orangesInHouse"
```
Here is my second attempt in bash, even more of the solutions timed out:
```
x=0;y=0;read -r s t;read -r a b;read -r m n;
for i in `seq 1 $m`; do
if [ "$i" -lt "$m" ]
then
read -d\ z
else
read -r z
fi
if [ "$((a+z))" -ge "$s" ] && \
[ "$((a+z))" -le "$t" ]
then
((x++))
fi
done
for i in `seq 1 $n`; do
if [ "$i" -lt "$n" ]
then
read -d\ z
else
read -r z
fi
if [ "$((b+z))" -ge "$s" ] && \
[ "$((b+z))" -le "$t" ]
then
((y++))
fi
done
echo $x; echo $y
```
Here's where I am at in debugging my solution using `awk`...
```
awk -v RS='[-]?[0-9]+' \
'{
if(word==$1) {
counter++
if(counter==1){
s=RT
}else if(counter==2){
t=RT
}else if(counter==3){
a=RT
}else if(counter==4){
b=RT
}else if(counter==5){
m=RT
}else if(counter==6){
n=RT
}else{
counter2++
if(counter2<=m){
print "Apples:"
print a+RT
print a+RT>=s
print a+RT<=t
applecount++
}
if(counter2>m && counter2<=m+n){
print "Oranges:"
print b+RT
print b+RT>=s
print b+RT<=t
orangecount++
}
}
}else{
counter=1
word=$1
}
}
END {
print "Total Counts:"
print applecount
print orangecount
}
'
```
Here is the output from that script when using the sample input
```
Apples:
3
0
0
Apples:
7
1
0 <-- This is the problem! (7 is less than or equal to 11)
Apples:
6
0
0
Oranges:
20
0
0
Oranges:
9
1
0 <-- This is also a problem! (9 is less than or equal to 11)
Total Counts:
3
2
```
As you can see, I'm getting some of the wrong comparisons...
ANSWER
======
(mostly courtesy of @glenn-jackman)
```
apples_oranges() {
local s t a b m n d
local -a apples oranges
local na=0 nb=0
{
read s t
read a b
read m n
read -a apples
read -a oranges
} < "$1"
for d in "${apples[@]}"; do
(( s <= a+d && a+d <= t )) && ((na++))
done
echo $na
for d in "${oranges[@]}"; do
(( s <= b+d && b+d <= t )) && ((nb++))
done
echo $nb
}
apples_oranges /dev/stdin
```<issue_comment>username_1: >
> why does it still work when column t is defined as TEXT? It seems, according to > this doc, that the datetime will then be stored as a string. Then why does such > a >=, < comparison still work? Is it coincidence or good practice?
>
>
>
Regardless of whether column t is defined as TEXT or TIMESTAMP, the datetime is stored as a string.
>
> Is it just luck that the strings are compared (how with < and >?) correctly here?
>
>
>
Yes. In Python 2, strings are compared by the ASCII values of their characters, from left to right. So if you have two strings, "001" and "010", python compares the first characters: "0" and "0", finds them to be equal, and then compares the next characters: "0" and "1". Because "1" has a greater ASCII value than "0", the string "010" is greater than "001".
If you are worried about different datetime formats, you can use the sqlite's datetime functions (<https://www.sqlite.org/lang_datefunc.html>) to convert between them.
Upvotes: 0 <issue_comment>username_2: The [ISO 8601](https://en.wikipedia.org/wiki/ISO_8601) date/time format `YYYY-MM-DD HH:MM::SS` is specifically designed to permit lexicographic sorting that corresponds to chronological sort order. So, comparisons between strings in this format work because that's the way it is designed to work.
According to the [SQLite docs](https://sqlite.org/datatype3.html#assigning_collating_sequences_from_sql), whenever 2 strings are compared, the applicable collation sequence is used to compare the 2 strings. The collation sequence used depends on the following rules:
>
> 7.1. Assigning Collating Sequences from SQL
>
>
> Every column of every table has an associated collating function. If no collating function is explicitly defined, then the collating function defaults to BINARY. The COLLATE clause of the column definition is used to define alternative collating functions for a column.
>
>
> The rules for determining which collating function to use for a binary comparison operator (=, <, >, <=, >=, !=, IS, and IS NOT) are as follows:
>
>
> 1. If either operand has an explicit collating function assignment using the postfix COLLATE operator, then the explicit collating function is used for comparison, with precedence to the collating function of the left operand.
> 2. If either operand is a column, then the collating function of that column is used with precedence to the left operand. For the purposes of the previous sentence, a column name preceded by one or more unary "+" operators is still considered a column name.
> 3. Otherwise, the BINARY collating function is used for comparison.
>
>
>
Note that storing date/time values as strings in any other format that does not compare the same lexicographically as chronologically results in date/time comparisons that are usually wrong.
Upvotes: 2 [selected_answer]
|
2018/03/15
| 872 | 3,422 |
<issue_start>username_0: I'm new to this site (and new to R) so I hope this is the right way to approach my problem.
I searched at this site but couldn't find the answer I'm looking for.
My problem is the following:
I have imported a table from a database into R (it says it's a data frame) and I want to substract the values from a particular columnn (row by row). Thereafter, I'd like to assign these differences to a new column called 'Difference' in the same data frame.
Could anyone please tell me how to do this?
Many thanks,
Arjan<issue_comment>username_1: >
> why does it still work when column t is defined as TEXT? It seems, according to > this doc, that the datetime will then be stored as a string. Then why does such > a >=, < comparison still work? Is it coincidence or good practice?
>
>
>
Regardless of whether column t is defined as TEXT or TIMESTAMP, the datetime is stored as a string.
>
> Is it just luck that the strings are compared (how with < and >?) correctly here?
>
>
>
Yes. In Python 2, strings are compared by the ASCII values of their characters, from left to right. So if you have two strings, "001" and "010", python compares the first characters: "0" and "0", finds them to be equal, and then compares the next characters: "0" and "1". Because "1" has a greater ASCII value than "0", the string "010" is greater than "001".
If you are worried about different datetime formats, you can use the sqlite's datetime functions (<https://www.sqlite.org/lang_datefunc.html>) to convert between them.
Upvotes: 0 <issue_comment>username_2: The [ISO 8601](https://en.wikipedia.org/wiki/ISO_8601) date/time format `YYYY-MM-DD HH:MM::SS` is specifically designed to permit lexicographic sorting that corresponds to chronological sort order. So, comparisons between strings in this format work because that's the way it is designed to work.
According to the [SQLite docs](https://sqlite.org/datatype3.html#assigning_collating_sequences_from_sql), whenever 2 strings are compared, the applicable collation sequence is used to compare the 2 strings. The collation sequence used depends on the following rules:
>
> 7.1. Assigning Collating Sequences from SQL
>
>
> Every column of every table has an associated collating function. If no collating function is explicitly defined, then the collating function defaults to BINARY. The COLLATE clause of the column definition is used to define alternative collating functions for a column.
>
>
> The rules for determining which collating function to use for a binary comparison operator (=, <, >, <=, >=, !=, IS, and IS NOT) are as follows:
>
>
> 1. If either operand has an explicit collating function assignment using the postfix COLLATE operator, then the explicit collating function is used for comparison, with precedence to the collating function of the left operand.
> 2. If either operand is a column, then the collating function of that column is used with precedence to the left operand. For the purposes of the previous sentence, a column name preceded by one or more unary "+" operators is still considered a column name.
> 3. Otherwise, the BINARY collating function is used for comparison.
>
>
>
Note that storing date/time values as strings in any other format that does not compare the same lexicographically as chronologically results in date/time comparisons that are usually wrong.
Upvotes: 2 [selected_answer]
|
2018/03/15
| 319 | 1,118 |
<issue_start>username_0: The question seems easy, but according to my research, the maven repository <http://repo.maven.apache.org/maven2/org/primefaces/primefaces/> has only major releases 5.2,5.3,...,6,2
what I want exactly is to use the version **5.2.9** in order to correct the problem of the reCAPTCHA by implementing its v2, the use of the version 5.3 will have much impact on the developed application.<issue_comment>username_1: X.Y.Z releases are only available for paying customers as an [elite release](https://www.primefaces.org/support/). You can then download the jar in a portal and put it in your own maven repo. They are not available in any public repository.
What you could try to do is to compare the sources of the 5.2 and 5.3 release regarding the captcha and just backport those. Still, really upgrading (to 6.2) is a better choice
Upvotes: 3 [selected_answer]<issue_comment>username_2: If you have luck, maybe 5.3.RC1, which implements reCAPTCHA v2, is close to 5.2.9. You can get 5.3.RC1 via primefaces repo: <https://repository.primefaces.org/org/primefaces/primefaces/5.3.RC1/>
Upvotes: -1
|
2018/03/15
| 1,488 | 3,683 |
<issue_start>username_0: I have an sql query that counts consecutive days but i need it to count weekends to. For example, if someone has a friday and a monday off i need this to count as 2 consecutive days if that makes sense.
tables:
```
CREATE TABLE Absence(
Date Date,
Code varchar(10),
Name varchar(10),
Type varchar(10)
);
INSERT INTO Absence (Date, Code, Name, Type)
VALUES ('01-10-18', 'S', 'Sam', 'Sick'),
('01-11-18','S', 'Sam', 'Sick'),
('01-12-18','S', 'Sam', 'Sick'),
('01-21-18','S', 'Sam', 'Sick'),
('01-26-18','S', 'Sam', 'Sick'),
('01-27-18','S', 'Sam', 'Sick'),
('02-12-18','S', 'Sam', 'Holiday'),
('02-13-18','S', 'Sam', 'Holiday'),
('02-18-18','S', 'Sam', 'Holiday'),
('02-25-18','S', 'Sam', 'Holiday'),
('02-10-18','S', 'Sam', 'Holiday'),
('02-13-18','F', 'Fred', 'Sick'),
('02-14-18','F', 'Fred', 'Sick'),
('03-09-18','F', 'Fred', 'Sick'),
('03-12-18','F', 'Fred', 'Sick'),
('02-28-18','F', 'Fred', 'Sick');
```
I have this code:
```
select name, min(date), max(date), count(*) as numdays, type
from (select a.*,
row_number() over (partition by name, type order by date) as
seqnum_ct
from absence a
) a
group by name, type, dateadd(day, -seqnum_ct, date);
```
And it produces this result:
```
| name | | | numdays | type |
|------|------------|------------|---------|---------|
| Fred | 2018-02-13 | 2018-02-14 | 2 | Sick |
| Fred | 2018-02-28 | 2018-02-28 | 1 | Sick |
| Fred | 2018-03-09 | 2018-03-09 | 1 | Sick |
| Fred | 2018-03-12 | 2018-03-12 | 1 | Sick |
| Sam | 2018-02-10 | 2018-02-10 | 1 | Holiday |
| Sam | 2018-02-12 | 2018-02-13 | 2 | Holiday |
| Sam | 2018-02-18 | 2018-02-18 | 1 | Holiday |
| Sam | 2018-02-25 | 2018-02-25 | 1 | Holiday |
| Sam | 2018-01-10 | 2018-01-12 | 3 | Sick |
| Sam | 2018-01-21 | 2018-01-21 | 1 | Sick |
| Sam | 2018-01-26 | 2018-01-27 | 2 | Sick |
```
If you look at these lines
```
('03-09-18','F', 'Fred', 'Sick'),
('03-12-18','F', 'Fred', 'Sick'),
```
This should equal 1 consecutive period even though it is a Friday and a Monday if this make sense. How can i edit this code so that it includes weekends to?
Thanks
SQL fiddle - <http://sqlfiddle.com/#!18/1de27/1><issue_comment>username_1: You can use a running sum to create groups to handle weekends. All you need to check is the current row's weekday is 2 (for Monday) and the previous row's is 6 (for Friday), for a given name,type in date order.
```
select name, min(date), max(date), count(*) as numdays, type
from (select a.*,sum(col) over(partition by Name,type order by [Date]) as grp
from (select a.*,
case when datediff(day,lag([Date]) over(partition by Name,type order by [Date]),[Date])=1 or
(datepart(weekday,[Date])=2 and datepart(weekday,lag([Date]) over(partition by Name,type order by [Date]))=6)
then 0 else 1 end as col
from absence a
) a
) a
group by name, type, grp
```
Upvotes: 0 <issue_comment>username_2: Try this:
```
select name, min(date), max(date), count(*) as numdays, type
from (
select date, code, name, type, seqnum_ct + sum(weekend) over (partition by name, type order by date) seqnum_ct
from (select a.*,
row_number() over (partition by name, type order by date) as seqnum_ct,
case when datepart(weekday, [date]) = 2 and
datepart(weekday, lag([date]) over (partition by name, type order by date)) = 6 then 2 else 0 end [weekend]
from #absence a
) a
) a
group by name, type, dateadd(day, -seqnum_ct, date);
```
Upvotes: 2 [selected_answer]
|
2018/03/15
| 5,404 | 10,044 |
<issue_start>username_0: I am trying to order a variable in R which is a list of file names that contains three substrings that I want to order on. The files names are formatted as such:
```
MAF001.incMHC.zPGS.S1
MAF002.incMHC.zPGS.S1
MAF003.incMHC.zPGS.S1
MAF001.incMHC.zPGS.S2
MAF002.incMHC.zPGS.S2
MAF003.incMHC.zPGS.S2
MAF001.noMHC_incRS148.zPGS.S1
MAF002.noMHC_incRS148.zPGS.S1
MAF003.noMHC_incRS148.zPGS.S1
MAF001.noMHC_incRS148.zPGS.S2
MAF002.noMHC_incRS148.zPGS.S2
MAF003.noMHC_incRS148.zPGS.S2
MAF001.noMHC.zPGS.S1
MAF002.noMHC.zPGS.S1
MAF003.noMHC.zPGS.S1
MAF001.noMHC.zPGS.S2
MAF002.noMHC.zPGS.S2
MAF003.noMHC.zPGS.S2
```
I want to order this list firstly on MAF substring, then MHC substring, then S substring, such that the order is:
```
MAF001.incMHC.zPGS.S1
MAF001.noMHC_incRS148.zPGS.S1
MAF001.noMHC.zPGS.S1
MAF001.incMHC.zPGS.S2
MAF001.noMHC_incRS148.zPGS.S2
MAF001.noMHC.zPGS.S2
MAF002.incMHC.zPGS.S1
MAF002.noMHC_incRS148.zPGS.S1
MAF002.noMHC.zPGS.S1
MAF002.incMHC.zPGS.S2
MAF002.noMHC_incRS148.zPGS.S2
MAF002.noMHC.zPGS.S2
MAF003.incMHC.zPGS.S1
MAF003.noMHC_incRS148.zPGS.S1
MAF003.noMHC.zPGS.S1
MAF003.incMHC.zPGS.S2
MAF003.noMHC_incRS148.zPGS.S2
MAF003.noMHC.zPGS.S2
```
I have had a play around with gsub after seeing the answer to this question regarding a single substring:
[R Sort strings according to substring](https://stackoverflow.com/questions/30542690/r-sort-strings-according-to-substring)
But I am not sure how to extend this idea to multiple substrings (of mixed character and numerical classes) within a string.<issue_comment>username_1: This result match your desired output, but it only sorts according to `MAF` and `S`. I didn't understand how to use `MHC` string for sorting, please elaborate a bit on that part if this answer doesn't meet your needs.
```
library(stringr)
maf <- str_extract(filenames, "MAF\\d+\\.")
mhc <- str_extract(filenames, "\\..*MHC.*\\.")
s <- str_extract(filenames, "S\\d+$")
library(magrittr)
library(dplyr)
data.frame(filenames, maf, mhc, s) %>%
arrange(maf, s) %>%
select(filenames)
```
the output is:
```
filenames
1 MAF001.incMHC.zPGS.S1
2 MAF001.incMHC.zPGS.S2
3 MAF001.noMHC.zPGS.S1
4 MAF001.noMHC.zPGS.S2
5 MAF001.noMHC_incRS148.zPGS.S1
6 MAF001.noMHC_incRS148.zPGS.S2
7 MAF002.incMHC.zPGS.S1
8 MAF002.incMHC.zPGS.S2
9 MAF002.noMHC.zPGS.S1
10 MAF002.noMHC.zPGS.S2
11 MAF002.noMHC_incRS148.zPGS.S1
12 MAF002.noMHC_incRS148.zPGS.S2
13 MAF003.incMHC.zPGS.S1
14 MAF003.incMHC.zPGS.S2
15 MAF003.noMHC.zPGS.S1
16 MAF003.noMHC.zPGS.S2
17 MAF003.noMHC_incRS148.zPGS.S1
18 MAF003.noMHC_incRS148.zPGS.S2
```
where `filenames` is
```
filenames <- read.table(text="MAF001.incMHC.zPGS.S1
MAF002.incMHC.zPGS.S1
MAF003.incMHC.zPGS.S1
MAF001.incMHC.zPGS.S2
MAF002.incMHC.zPGS.S2
MAF003.incMHC.zPGS.S2
MAF001.noMHC_incRS148.zPGS.S1
MAF002.noMHC_incRS148.zPGS.S1
MAF003.noMHC_incRS148.zPGS.S1
MAF001.noMHC_incRS148.zPGS.S2
MAF002.noMHC_incRS148.zPGS.S2
MAF003.noMHC_incRS148.zPGS.S2
MAF001.noMHC.zPGS.S1
MAF002.noMHC.zPGS.S1
MAF003.noMHC.zPGS.S1
MAF001.noMHC.zPGS.S2
MAF002.noMHC.zPGS.S2
MAF003.noMHC.zPGS.S2", header=FALSE, stringsAsFactors=FALSE)
```
Upvotes: 1 <issue_comment>username_2: Here's a one-liner in base R:
```
bar <- foo[order(sapply(strsplit(foo, "\\."), function(x) paste(x[1], x[4])))]
head(data.frame(result = bar), 10)
result
1 MAF001.incMHC.zPGS.S1
2 MAF001.noMHC_incRS148.zPGS.S1
3 MAF001.noMHC.zPGS.S1
4 MAF001.incMHC.zPGS.S2
5 MAF001.noMHC_incRS148.zPGS.S2
6 MAF001.noMHC.zPGS.S2
7 MAF002.incMHC.zPGS.S1
8 MAF002.noMHC_incRS148.zPGS.S1
9 MAF002.noMHC.zPGS.S1
10 MAF002.incMHC.zPGS.S2
```
Explanation:
* Split string by `.` using `strsplit`: `strsplit(foo, "\\.")`
* Extract and combine elements 1 and 4: `paste(x[1], x[4])`
* Get order of all combinations using `order`
* Get corresponding value from `foo[]`
---
Data (`foo`):
```
c("MAF001.incMHC.zPGS.S1", "MAF002.incMHC.zPGS.S1", "MAF003.incMHC.zPGS.S1",
"MAF001.incMHC.zPGS.S2", "MAF002.incMHC.zPGS.S2", "MAF003.incMHC.zPGS.S2",
"MAF001.noMHC_incRS148.zPGS.S1", "MAF002.noMHC_incRS148.zPGS.S1",
"MAF003.noMHC_incRS148.zPGS.S1", "MAF001.noMHC_incRS148.zPGS.S2",
"MAF002.noMHC_incRS148.zPGS.S2", "MAF003.noMHC_incRS148.zPGS.S2",
"MAF001.noMHC.zPGS.S1", "MAF002.noMHC.zPGS.S1", "MAF003.noMHC.zPGS.S1",
"MAF001.noMHC.zPGS.S2", "MAF002.noMHC.zPGS.S2", "MAF003.noMHC.zPGS.S2"
)
```
Upvotes: 3 [selected_answer]<issue_comment>username_3: Using `tidyr` and `dplyr`:
```
library(tidyr)
library(dplyr)
df <- data.frame(filenames = c(...))
pattern = "^([^.]+)\\.([^.]+)"
df %>%
extract(filenames,
into = c("maf", "mhc"),
regex = pattern, remove = FALSE) %>%
arrange(maf, mhc) %>%
select(filenames)
```
Which yields
```
filenames
1 MAF001.incMHC.zPGS.S1
2 MAF001.incMHC.zPGS.S2
3 MAF001.noMHC.zPGS.S1
4 MAF001.noMHC.zPGS.S2
5 MAF001.noMHC_incRS148.zPGS.S1
6 MAF001.noMHC_incRS148.zPGS.S2
7 MAF002.incMHC.zPGS.S1
8 MAF002.incMHC.zPGS.S2
9 MAF002.noMHC.zPGS.S1
10 MAF002.noMHC.zPGS.S2
11 MAF002.noMHC_incRS148.zPGS.S1
12 MAF002.noMHC_incRS148.zPGS.S2
13 MAF003.incMHC.zPGS.S1
14 MAF003.incMHC.zPGS.S2
15 MAF003.noMHC.zPGS.S1
16 MAF003.noMHC.zPGS.S2
17 MAF003.noMHC_incRS148.zPGS.S1
18 MAF003.noMHC_incRS148.zPGS.S2
```
Upvotes: 2 <issue_comment>username_4: Many good solutions have already been added here. I'm adding another one which is based on use of `vector` only.
**Note:** `OP` intended to sort on `MAF`, `MHC` and `S` substrings. I have stick with that rule to sort all three. Hence result of my answer may not match with other answers.
The approach:
1. Use `regmatches` to find substrings per description in OP
2. Use `paste` to prepare strings based on which `sort` can be performed
3. Set names of vector using `setNames`
4. Sort `vector` on name.
```
v[order(names(setNames(v,
paste(regmatches(v, regexpr("^MAF\\d+", v, perl = TRUE)),
regmatches(v, regexpr("\\w*MHC\\w*", v, perl = TRUE)),
regmatches(v, regexpr("\\w+\\d+$", v, perl = TRUE))
))))]
#Result
[1] "MAF001.incMHC.zPGS.S1"
[2] "MAF001.incMHC.zPGS.S2"
[3] "MAF001.noMHC.zPGS.S1"
[4] "MAF001.noMHC.zPGS.S2"
[5] "MAF001.noMHC_incRS148.zPGS.S1"
[6] "MAF001.noMHC_incRS148.zPGS.S2"
[7] "MAF002.incMHC.zPGS.S1"
[8] "MAF002.incMHC.zPGS.S2"
[9] "MAF002.noMHC.zPGS.S1"
[10] "MAF002.noMHC.zPGS.S2"
[11] "MAF002.noMHC_incRS148.zPGS.S1"
[12] "MAF002.noMHC_incRS148.zPGS.S2"
[13] "MAF003.incMHC.zPGS.S1"
[14] "MAF003.incMHC.zPGS.S2"
[15] "MAF003.noMHC.zPGS.S1"
[16] "MAF003.noMHC.zPGS.S2"
[17] "MAF003.noMHC_incRS148.zPGS.S1"
[18] "MAF003.noMHC_incRS148.zPGS.S2"
```
**data**
```
v <- c("MAF001.incMHC.zPGS.S1", "MAF001.noMHC_incRS148.zPGS.S1", "MAF001.noMHC.zPGS.S1",
"MAF001.incMHC.zPGS.S2", "MAF001.noMHC_incRS148.zPGS.S2", "MAF001.noMHC.zPGS.S2",
"MAF002.incMHC.zPGS.S1", "MAF002.noMHC_incRS148.zPGS.S1", "MAF002.noMHC.zPGS.S1",
"MAF002.incMHC.zPGS.S2", "MAF002.noMHC_incRS148.zPGS.S2", "MAF002.noMHC.zPGS.S2",
"MAF003.incMHC.zPGS.S1", "MAF003.noMHC_incRS148.zPGS.S1", "MAF003.noMHC.zPGS.S1",
"MAF003.incMHC.zPGS.S2", "MAF003.noMHC_incRS148.zPGS.S2", "MAF003.noMHC.zPGS.S2"
)
```
Upvotes: 0 <issue_comment>username_5: I have a function designed especially for such a task:
**function**
```
reg_sort <- function(x,...,verbose=F) {
ellipsis <- sapply(as.list(substitute(list(...)))[-1], deparse, simplify="array")
reg_list <- paste0(ellipsis, collapse=',')
reg_list %<>% strsplit(",") %>% unlist %>% gsub("\\\\","\\",.,fixed=T)
pattern <- reg_list %>% map_chr(~sub("^-\\\"","",.) %>% sub("\\\"$","",.) %>% sub("^\\\"","",.) %>% trimws)
descInd <- reg_list %>% map_lgl(~grepl("^-\\\"",.)%>%as.logical)
reg_extr <- pattern %>% map(~str_extract(x,.)) %>% c(.,list(x)) %>% as.data.table
reg_extr[] %<>% lapply(., function(x) type.convert(as.character(x), as.is = TRUE))
map(rev(seq_along(pattern)),~{reg_extr<<-reg_extr[order(reg_extr[[.]],decreasing = descInd[.])]})
if(verbose) { tmp<-lapply(reg_extr[,.SD,.SDcols=seq_along(pattern)],unique);names(tmp)<-pattern;tmp %>% print }
return(reg_extr[[ncol(reg_extr)]])
}
```
**data:**
```
vec <- c("MAF001.incMHC.zPGS.S1", "MAF002.incMHC.zPGS.S1", "MAF003.incMHC.zPGS.S1",
"MAF001.incMHC.zPGS.S2", "MAF002.incMHC.zPGS.S2", "MAF003.incMHC.zPGS.S2",
"MAF001.noMHC_incRS148.zPGS.S1", "MAF002.noMHC_incRS148.zPGS.S1",
"MAF003.noMHC_incRS148.zPGS.S1", "MAF001.noMHC_incRS148.zPGS.S2",
"MAF002.noMHC_incRS148.zPGS.S2", "MAF003.noMHC_incRS148.zPGS.S2",
"MAF001.noMHC.zPGS.S1", "MAF002.noMHC.zPGS.S1", "MAF003.noMHC.zPGS.S1",
"MAF001.noMHC.zPGS.S2", "MAF002.noMHC.zPGS.S2", "MAF003.noMHC.zPGS.S2"
)
```
**call:**
```
reg_sort(x=vec, "^.*?(?=\\.)","(?<=\\.).*(?<=\\.S)","S\\d+$")
```
**result:** (a character vector)
```
1 MAF001.incMHC.zPGS.S1
2 MAF001.incMHC.zPGS.S2
3 MAF001.noMHC.zPGS.S1
4 MAF001.noMHC.zPGS.S2
5 MAF001.noMHC_incRS148.zPGS.S1
6 MAF001.noMHC_incRS148.zPGS.S2
7 MAF002.incMHC.zPGS.S1
8 MAF002.incMHC.zPGS.S2
9 MAF002.noMHC.zPGS.S1
10 MAF002.noMHC.zPGS.S2
11 MAF002.noMHC_incRS148.zPGS.S1
12 MAF002.noMHC_incRS148.zPGS.S2
13 MAF003.incMHC.zPGS.S1
14 MAF003.incMHC.zPGS.S2
15 MAF003.noMHC.zPGS.S1
16 MAF003.noMHC.zPGS.S2
17 MAF003.noMHC_incRS148.zPGS.S1
18 MAF003.noMHC_incRS148.zPGS.S2
```
**other features are:**
* Sort descending: (add `-` infront) `reg_sort(x=vec, -"^.*?(?=\\.)","(?<=\\.).*(?<=\\.S)",-"S\\d+$")`
* Verbose mode: `reg_sort(x=vec, "^.*?(?=\\.)","(?<=\\.).*(?<=\\.S)","S\\d+$",verbose=T)` (see/check what the regEx pattern has extracted in order to sort)
Upvotes: 0
|
2018/03/15
| 553 | 2,145 |
<issue_start>username_0: I want to select a window from the list of returned windows title in robotFramework, the code is the below:
```
Partager sur Facebook
${Window1Title}= Get Window Titles
Run Keyword If '${Window1Title}[]' == 'Facebook' ConnexionAndPartageFacebook
Run Keyword If '${Window1Title}' == 'Publier sur Facebook' PartageFacebook
```
It give me the error:
```
UnicodeEncodeError: 'ascii' codec can't encode character u'\xe9' in position 28: ordinal not in range(128)
```
How can i select Window 2 from the returned window titles ?<issue_comment>username_1: This can be done by referring to the item in the list. More information can be found in the [chapter on Variables](http://robotframework.org/robotframework/latest/RobotFrameworkUserGuide.html#variables) of excellent Robot Framework Userguide:
```
Run Keyword If '${Window1Title[1]}' == 'Facebook' ConnexionAndPartageFacebook
```
Upvotes: 0 <issue_comment>username_2: Below script will select the "Home" window
```
@{Window_List} List Windows
${Win_Index}= Get Index From List ${Window_List} Home
Run Keyword And Continue On Failure Wait Until Keyword Succeeds ${Timeout_20s} ${Timeout_2s}
... Select Window ${Window_List[${Win_Index}]}
```
This can be used as common script to select window by passing window name as parameters
Upvotes: 1 <issue_comment>username_3: This is an old post, but thought will provide a tip here. It depends on the version of Selenium library being used, the get window titles works perfectly fine in selenium2library but has issue with seleniumlibrary, if the window titles has a special character in them. Look at this issue for more details : <https://github.com/robotframework/SeleniumLibrary/issues/1252>. This issue has been fixed and works fine in seleniumlibrary.
Back to the question, this is a simple way of choosing the window that is needed:
```
@{Windowtitles} Get Window Titles
${windowtoopen}= Get From List ${Windowtitles} -1
Select Window Title=${windowtoopen}
```
This would allow you to choose the window.
Upvotes: 2
|
2018/03/15
| 209 | 864 |
<issue_start>username_0: Cannot build android project from Github the error is
" Gradle sync failed: Cause: error=0, spawn failed
Consult IDE log for more details (Help | Show Log) (434ms)
any ideas ?<issue_comment>username_1: try to restart android studio, it worked for me. Gradle back to normal.
Upvotes: 3 <issue_comment>username_2: This error may cause due to changing of packages and code, I had a same problem, try Debugging your code first instead of directly jumps to running, its worked for me.
Upvotes: 0 <issue_comment>username_3: To restart Android Studio sometimes does **not** **enough**. You should close it and restart your computer. After that you can open Android Studio without error.
Upvotes: 1 <issue_comment>username_4: Reinstall your IDE. This happened to me because I accidentally deleted the IDE, so I had to reinstall it.
Upvotes: -1
|
2018/03/15
| 650 | 2,291 |
<issue_start>username_0: I am trying to use cache in cakePHP3 to store query results.
I declared a cache adapter named "bl"
config/app.php :
```
/**
* Configure the cache adapters.
*/
'Cache' => [
'default' => [
'className' => 'File',
'path' => CACHE,
'url' => env('CACHE_DEFAULT_URL', null),
],
'bl' => [
'className' => 'File',
'path' => CACHE . 'bl/',
'url' => env('CACHE_DEFAULT_URL', null),
'duration' => '+1 week',
],
```
src/Controller/UsersController.php :
```
use Cake\Cache\Cache;
...
public function test()
{
$this->autoRender = false;
$this->loadModel('Users');
$Users = $this->Users->find('all');
$Users->cache('test', 'bl');
debug(Cache::read('test', 'bl'));
}
```
The debug return "false".
tmp/cache/bl/ directory were well created, but no cache files were generated.
Am I missing something ?<issue_comment>username_1: You are not calling the proper method, you need to use Cache::write() not Users->cache(). I updated your code below:
```
use Cake\Cache\Cache;
...
public function test()
{
$this->autoRender = false;
$this->loadModel('Users');
$Users = $this->Users->find('all');
Cache::write('cache_key_name', $Users, 'bl');
debug(Cache::read('cache_key_name', 'bl'));
}
```
See <https://book.cakephp.org/3.0/en/core-libraries/caching.html#writing-to-a-cache>
Upvotes: 0 <issue_comment>username_2: Your query is never being executed, hence it's never going to be cached. Run the query by invoking `all()`, or `toArray()`, or by iterating over it, etc...
See also
* **[Cookbook > Database Access & ORM > Query Builder > How Are Queries Lazily Evaluated](https://book.cakephp.org/3.0/en/orm/query-builder.html#how-are-queries-lazily-evaluated)**
Upvotes: 1 <issue_comment>username_3: I was able to find the solution with your 2 answers, the final code is :
```
public function test()
{
$this->autoRender = false;
$users = $this->Users->find('all')->toArray();
Cache::write('test_cache', $users, 'bl');
debug(Cache::read('test_cache', 'bl'));
}
```
Upvotes: 0
|
2018/03/15
| 1,647 | 6,027 |
<issue_start>username_0: I have a multi-step form that has a "next" and "back" button on each step of the form. I'm using jQuery to enable the "next" button once the criteria for each section is met. For example: at least one checkbox is checked or a radio button is selected.
I'm having an issue where after completing a number of sections, I go back to a previous section and uncheck all checkboxes and the "next" button remains enabled.
There's a Codepen here of a rough version of what I'm working with - note all sections are visible to show how the button remains enabled once you begin checking/unchecking: <https://codepen.io/abbasarezoo/pen/jZgQOV>
My current code:
```
1: Select multiple answers
--------------------------
Checkbox 1
Checkbox 2
Checkbox 3
Next
2: Select multiple answers
--------------------------
Checkbox 1
Checkbox 2
Checkbox 3
Next
3: Select one answer
--------------------
Radio 1
Radio 2
Radio 3
Next
Previous
4: Select one answer per row
----------------------------
### Row 1
Radio 1
Radio 2
Radio 3
### Row 2
Radio 1
Radio 2
Radio 3
Next
Previous
```
JS:
```
var $panelsInput = $('.panels input'),
$rowsInput = $('.rows input');
$panelsInput.click(function () {
if ($('.panels input:checked').length >= 1) {
$(this).closest('.panels').find('.next-q').prop('disabled', false);
}
else {
$(this).closest('.panels').find('.next-q').prop('disabled', true);
}
});
$rowsInput.click(function () {
var radioLength = $('.radio-row').length;
if ($('.rows input:checked').length == radioLength) {
$('.rows .next-q').prop('disabled', false);
}
else {
$('.rows .next-q').prop('disabled', true);
}
});
```
Is there any way to make this work?<issue_comment>username_1: Please see below comment in `$panelsInput.click(function (){});`, you need to get the `checked` count for **current panel** (the user clicks), instead of all.
So the comparasion in your codes:
`$('.panels input:checked').length >= 1`
need to change to:
`$(this).parent().find('input:checked').length >= 1`
```js
var $panelsInput = $('.panels input'),
$rowsInput = $('.rows input');
$panelsInput.click(function () {
//get current input, find out its parent, then get the count of checked
if ($(this).parent().find('input:checked').length >= 1) {
$(this).closest('.panels').find('.next-q').prop('disabled', false);
}
else {
$(this).closest('.panels').find('.next-q').prop('disabled', true);
}
});
$rowsInput.click(function () {
var radioLength = $('.radio-row').length;
if ($('.rows input:checked').length == radioLength) {
$('.rows .next-q').prop('disabled', false);
}
else {
$('.rows .next-q').prop('disabled', true);
}
});
```
```html
1: Select multiple answers
--------------------------
Checkbox 1
Checkbox 2
Checkbox 3
Next
2: Select multiple answers
--------------------------
Checkbox 1
Checkbox 2
Checkbox 3
Next
3: Select one answer
--------------------
Radio 1
Radio 2
Radio 3
Next
Previous
4: Select one answer per row
----------------------------
### Row 1
Radio 1
Radio 2
Radio 3
### Row 2
Radio 1
Radio 2
Radio 3
Next
Previous
```
Upvotes: 1 <issue_comment>username_2: when you select the input to see if is checked, you're selecting all inputs
`if ($('.panels input:checked').length >= 1) {`
you need to select just the inputs from the panel the user clicked
`if ($(this).closest('.panels').find('input:checked').length >= 1) {`
<https://codepen.io/spacedog4/pen/YaWqdo?editors=1010>
Upvotes: 2 [selected_answer]<issue_comment>username_3: I thought it is interesting to handle interactions using [event delegation](https://learn.jquery.com/events/event-delegation/) at the `form` level which is more flexible:
* Only one handler is loaded in memory. (By the way only a single scope is in charge of the logic behind the prev/next behavior).
* You can add dynamically panels to the form (with the same markup structure) and buttons will work right away without requiring another listener registering step.
```js
var $panelsInput = $('.panels input')
, $rowsInput = $('.rows input')
;
$('form').on('click', function (e) {
let $t = $(this)
, $target = $(e.target)
, $fieldset = $target.closest('fieldset')
, $rows = $fieldset.find('.radio-row')
;
// When a button is clicked
if ($target.is('button')) {
// Next button
if ($target.is('.next-q')) {
$fieldset.addClass('hide').next().addClass('show');
// Prev button
} else {
// Untick boxes
$fieldset.find('input').prop('checked', false).end()
// Disable appropriate button
.find('.next-q').prop('disabled', true).end()
.prev().removeClass('hide').nextAll().removeClass('show');
}
// When a checkbox is clicked
} else if ($target.is('input')) {
let $containers = ($rows.length ? $rows : $fieldset)
, containersHavingAtickedBox = $containers.filter(function() {
return !!$(this).find('input:checked').length
})
, shouldEnable = $containers.length === containersHavingAtickedBox.length
;
$fieldset.find('.next-q').prop('disabled', !shouldEnable);
}
});
```
```css
fieldset ~ fieldset, .hide{display:none}
fieldset.show:not(.hide){display: block}
```
```html
1: Select multiple answers
--------------------------
Checkbox 1
Checkbox 2
Checkbox 3
Next
2: Select multiple answers
--------------------------
Checkbox 1
Checkbox 2
Checkbox 3
Next
3: Select one answer
--------------------
Radio 1
Radio 2
Radio 3
Next
Previous
4: Select one answer per row
----------------------------
### Row 1
Radio 1
Radio 2
Radio 3
### Row 2
Radio 1
Radio 2
Radio 3
Next
Previous
```
Upvotes: 0
|
2018/03/15
| 518 | 2,000 |
<issue_start>username_0: I have a menu button with a click event in Vue. When the button is clicked, it's supposed to activate the menu itself. This is the parent element which when clicked activates the menu (the toggleMenu function makes `menuIsActive` true). This part works fine:
```
```
And this is the `app-navmenu` component that gets rendered:
```
Title
=====
info
meta info
```
The problem I am running into is that I don't want the menu to disappear when I click on the actual `navbar-dropdown` div element, hence why I have a `@click.stop`. However, I do want the menu to disappear when I click on a `router-link` element, but since I have `@click.stop` in the `navbar-dropdown` element, the menu persists. If I don't have a `@click.stop` event on the `navbar-dropdown` element, then the menu disappears as soon as the `navbar-dropdown` element is clicked on, which I don't want.
How can I have the menu persist when clicking on the dropdown body, but also have it disappear when I click on a router-link? I've tried other click methods like `.self` and `.prevent`, but those don't seem to do what I need.<issue_comment>username_1: This is what I ended up doing to fix my issue. First, I moved the click function in the parent to one of its children:
```
```
This lets the body of the menu stay active even when I click on the body without having to use a `@click.stop`. Then in the menu itself, I did this so that links will close the menu:
```
Title
=====
info
meta info
```
One strange behavior I noticed is that if I put the `@click="toggleMenu"` function in the element itself, it doesn't get called on, even if I use `.prevent`. Hence the need for the div wrapper around the router-link element.
Upvotes: 0 <issue_comment>username_2: I am exactly not sure with your requirement, but following your comment, you can use something like this:
This will even push to the router link:
```
```
This will prevent to go to the router link:
```
```
Upvotes: 2
|
2018/03/15
| 155 | 533 |
<issue_start>username_0: My code not working. I want use `getElementById` with focus, next input.
```
* t1
* t2
function nextinput1(){
document.getElementById("kkInput").focus();
}
```<issue_comment>username_1: You forgot to close the function, here is the correct code:
```
* t1
* t2
function nextinput1(){
document.getElementById("kkInput").focus();
}
```
Upvotes: -1 <issue_comment>username_2: ```
style="display: none;
```
You need to remove this style from the parent ul tag. Not sure why that's there.
Upvotes: 0
|
2018/03/15
| 868 | 3,236 |
<issue_start>username_0: I noticed that if you allocate a `char` `array` inside of a function like this
```
void example()
{
char dataBuffer[100] = { 0 };
}
```
then examine the disassembly of the function with IDA that this actually inserts a call to `memset()` in order to initialize the `char` `array`. Looks something like this after I reversed it
```
memset(stackPointer + offset + 1, 0, 100);
```
The raw assembly looks like
```
addic r3, r1, 0x220
addic r3, r3, 1
clrldi r3, r3, 32
li r4, 0
li r5, 0x64
bl memset
```
But If I were to change the `example()` function to
```
void example()
{
char dataBuffer[100];
}
```
Then the call to `memset()` is not inserted I noticed when examining the disassembly in IDA.
So basically my question is, if the `char` `array` is not initialized to zero will it still be safe to work with? For example
```
void example()
{
char dataBuffer[100];
strcpy(dataBuffer, "Just some random text");
strcat(dataBuffer, "blah blah blah example text\0");//null terminator probably not required as strcpy() appends null terminator and strcat() moves the null terminator to end of string. but whatever
}
```
Should I expect any UB when writing/reading to the `char` `array` like this even when it is not initialized to zero with the inserted `memset()` that comes along with initializing the char array with `= { 0 }`?<issue_comment>username_1: It's perfectly safe to work with it as an array with garbage data. This means *writing* into it is safe, *reading* from it is not. You simply just don't know what is in it yet. The function `strcpy` doesn't read from the array it gets (or more specifically, from the pointer it gets) it just writes onto it. So it's safe.
After you are done with writing into your char buffer. When you come to use it, you are going to go through it until you encounter a null (0) character. That null character will be set there when you wrote into it last. After that null character comes garbage if you didn't initialize it, and comes 0's if you did. In both cases, it doesn't matter since you are not going to read past the null character.
See: <http://www.cplusplus.com/reference/cstring/strcpy/>
it uses a very similar example to the code you provided.
Upvotes: 3 [selected_answer]<issue_comment>username_2: The line
```
char dataBuffer[100];
```
calls the variable `dataBuffer` into existence, and thus also associates memory with it. However, as an optimization, this memory is not initialized. C is designed not to perform any unnecessary work, and you are working in the C subset of C++ here.
That said, if your compiler can prove that you don't actually use the memory, it does not need to allocate it. But such an optimization would not be detectable from within your running, standard compliant code by definition. Your code will run **as if** the memory had been allocated. (This as-if rule is the basis for pretty much all optimizations that your compiler is allowed to perform.)
Your `strcpy()` and `strcat()` calls are fine, as they do not overrun the allocated buffer. But better forget that `strcpy()` and `strcat()` exist, there are better, safer functions to use nowadays.
Upvotes: 2
|
2018/03/15
| 710 | 2,693 |
<issue_start>username_0: I have a method that checks the `Type` of an object to determine if it is complex:
```
private static bool IsComplexObject(Type type)
{
if (IsNullable(type))
{
// nullable type, check if the nested type is simple
return IsComplexObject(type.GetGenericArguments()[0]);
}
if (type.Equals(typeof(string)))
{
return false;
}
if (type.Equals(typeof(decimal)))
{
return false;
}
if (type.Equals(typeof(DataTable)))
{
return false;
}
if (type.IsValueType)
{
return false;
}
if (type.IsPrimitive)
{
return false;
}
if (type.IsEnum)
{
return false;
}
return true;
}
```
The trouble is: when I have a `Type` of a simple type array, `Int32[]` for example, my method returns `true`.
I can prevent this from happening by adding this `if` statement into my method:
```
if (type.IsArray)
{
return false;
}
```
The problem is that this `if` statement then prevents actual complex objects from being identified. For example, the following setup determines a custom class to not be complex:
```
public class TestClass
{
public void TestComplexArray()
{
var result = IsComplexObject(typeof(MyComplexClass[]));
// result == false
}
}
public class MyComplexClass
{
public string Name { get; set; }
public string Id { get; set; }
}
```
So my question is: how can I check for the complexity of an array's value type to separate `Int32[]` from `MyComplexClass[]`?<issue_comment>username_1: Perhaps you want to check the array rank, and recur on the element type?
```
if (type.IsArray)
{
if (type.GetArrayRank() != 1)
{
return true;
}
Type elementType = type.GetElementType();
if (elementType.IsArray)
{
return true;
}
return IsComplexType(elementType);
}
```
Upvotes: 1 <issue_comment>username_2: Try retrieving the element type then calling `IsComplexObject` recursively, like so:
```
if (type.IsArray) return IsComplexObject(type.GetElementType());
```
This should return true for an array of "complex objects" (ones that do not meet the criteria specified in code). Just be warned, it'll also return true for an array of an array of complex objects, or an array of an array of an array. If that is a problem you can do make a change so that it'll only recurse once, like so:
```
private static bool IsComplexObject(Type type, bool recurse = true)
{
if (type.IsArray) return
(
recurse
? IsComplexObject(type.GetElementType(), false)
: false
);
//etc.
```
Upvotes: 3 [selected_answer]
|
2018/03/15
| 409 | 1,554 |
<issue_start>username_0: I'm very new to programming and I had a question about compiling of assembly language on Mac OS. I know that to convert my `.c` to an `.s` I should use `gcc -m32 -S`. However, I already wrote my own `.s` file. I was wondering if it was possible to convert the `.s` I've made so that my Mac can compile it.
The thing is I want to compile both the `.c` and `.s` to verify that they both return the same value.<issue_comment>username_1: Perhaps you want to check the array rank, and recur on the element type?
```
if (type.IsArray)
{
if (type.GetArrayRank() != 1)
{
return true;
}
Type elementType = type.GetElementType();
if (elementType.IsArray)
{
return true;
}
return IsComplexType(elementType);
}
```
Upvotes: 1 <issue_comment>username_2: Try retrieving the element type then calling `IsComplexObject` recursively, like so:
```
if (type.IsArray) return IsComplexObject(type.GetElementType());
```
This should return true for an array of "complex objects" (ones that do not meet the criteria specified in code). Just be warned, it'll also return true for an array of an array of complex objects, or an array of an array of an array. If that is a problem you can do make a change so that it'll only recurse once, like so:
```
private static bool IsComplexObject(Type type, bool recurse = true)
{
if (type.IsArray) return
(
recurse
? IsComplexObject(type.GetElementType(), false)
: false
);
//etc.
```
Upvotes: 3 [selected_answer]
|
2018/03/15
| 2,090 | 6,922 |
<issue_start>username_0: I have the following code, I know it's super inefficient, though I don't know how to make it simpler. Is there a way that button1 could just be changed to buttonX and everything could just be written once?
Apologies if this has been asked before. I have tried searching but it is quite a complicated thing to describe and I haven't found anything relevant.
```
var button1 = document.getElementById('toggle1');
var button2 = document.getElementById('toggle2');
var button3 = document.getElementById('toggle3');
var button4 = document.getElementById('toggle4');
var div1 = document.getElementById('div1');
var div2 = document.getElementById('div2');
var div3 = document.getElementById('div3');
var div4 = document.getElementById('div4');
button1.onclick = function() {
div1.style.display = 'block';
div2.style.display = 'none';
div3.style.display = 'none';
div4.style.display = 'none';
};
button2.onclick = function() {
div1.style.display = 'none';
div2.style.display = 'block';
div3.style.display = 'none';
div4.style.display = 'none';
};
button3.onclick = function() {
div1.style.display = 'none';
div2.style.display = 'none';
div3.style.display = 'block';
div4.style.display = 'none';
};
button4.onclick = function() {
div1.style.display = 'none';
div2.style.display = 'none';
div3.style.display = 'none';
div4.style.display = 'block';
};
```<issue_comment>username_1: Have a try with this - I have taken your code as example since you did not post HTML
The advantage of these two examples is that when you need to add a button and a div, the code does not change at all.
```js
const buttons = document.querySelectorAll("[id^toggle]");
for (let i = 0; i < buttons.length; i++) {
buttons[i].onclick = function() {
const divs = document.querySelectorAll("[id^=div]");
for (let i = 0; i < divs.length; i++) {
divs[i].style.display = divs[i].id == "div" + this.id.replace("toggle", "") ? "block" : "none";
}
}
}
```
Instead of ID selector and replace, you can use classes and data-attr
```js
const buttons = document.querySelectorAll(".toggleButton");
for (let i = 0; i < buttons.length; i++) {
buttons[i].onclick = function() {
const divs = document.querySelectorAll(".toggleDiv");
const showId = this.getAttribute("data-toggle");
for (let i = 0; i < divs.length; i++) {
divs[i].style.display = divs[i].id == showId ? "block" : "none";
}
}
}
```
assuming `Div1`
Upvotes: 1 [selected_answer]<issue_comment>username_2: You can use javascript objects of course!
```
// let's match keys (1, 2, 3, 4) to buttons
var buttons = {
1: document.getElementById('toggle1'),
2: document.getElementById('toggle2'),
3: document.getElementById('toggle3'),
4: document.getElementById('toggle4')
}
// let's match keys (1, 2, 3, 4) to divs
var divs = {
1: document.getElementById('div1'),
2: document.getElementById('div2'),
3: document.getElementById('div3'),
4: document.getElementById('div4')
}
// this function will hide all divs except the once given as a param
function hideAllBut(index) {
// go through each div
for (var i = 1; i <= 4; i++) {
// if i is equal to the param, display the div
if (i === index) {
divs[i].style.display = 'block'; // accessing the divs object which has key i
} else {
// else hide it
divs[i].style.display = 'none';
}
}
}
// now for each button, we call the function to hide all expect the one we want
buttons[1].onclick = function () {
hideAllBut(1);
};
buttons[2].onclick = function () {
hideAllBut(2);
};
buttons[3].onclick = function () {
hideAllBut(3);
};
buttons[4].onclick = function () {
hideAllBut(4);
};
```
Keeping that in mind, we can make the code even more dynamic
```
var buttons = {};
var divs = {};
// how many elements we have (here 4)
var totalElements = 4;
// assign buttons and divs values dynamically, while adding the onclick
for (var i = 1; i <= totalElements; i++) {
buttons[i] = document.getElementById('toggle' + i); // i.e. if i=1, 'toggle'+i = 'toggle1'
divs[i] = document.getElementById('div' + i);
buttons[i].onclick = function () {
hideAllBut(i);
};
}
// this function does the same as how we wrote it perviously but with a different format
function hideAllBut(index) {
for (var i = 1; i <= totalElements; i++) {
// we use the ternary operator
divs[i].style.display = (i === index) ? 'block' : 'none';
}
}
```
Read the documentation of the javascript object because you're gonna use it a lot and will help you do stuff dynamically. <https://www.w3schools.com/js/js_objects.asp>
Ternary operator: <https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Operators/Conditional_Operator>
Upvotes: -1 <issue_comment>username_3: Consider placing everything into an array so you can do things by index.
```js
var buttons = [];
var divs = [];
for(let i = 0; i < 4; i++) {
buttons[i] = document.getElementById('toggle'+(i+1));
divs[i] = document.getElementById('div'+(i+1));
buttons[i].addEventListener('click', doClick(i));
}
function doClick(index) {
return function() {
for(let i = 0; i < 4; i++) {
divs[i].style.display = i === index ? 'block' : 'none';
}
};
}
doClick(0)();
```
```html
One
Two
Three
Four
1
2
3
4
```
In this code I am getting all four buttons and divs in a loop and setting up the event listeners for each button.
The function `doClick` is a curried function that uses a closure to set the `index` of the button that is being clicked on so we know which one we want to show and then we hide all of the others.
Upvotes: 0 <issue_comment>username_4: With [jQuery](https://jquery.com/) you could do the following:
```js
$('.toggleButton').on('click', function() {
var toggleDivId = '#toggleDiv' + $(this).attr('id').replace('toggleButton', '');
$('.toggleDiv').css('display', 'none');
$(toggleDivId).css('display', 'block');
});
```
```html
Button 1
Button 2
Button 3
Button 4
Div 1
Div 2
Div 3
Div 4
```
Upvotes: -1 <issue_comment>username_5: Here's an example using jQuery, that helps to avoid having to declare all the vars for each button first.
I also added a 'reset' button so you can get all the divs back. I'm not sure if this is what you wanted but to me this seems like the kind of thing jQuery is especially helpful for, so here is an example:
```js
$('button').on('click', function(){
$('.boxdiv').css({"display":"none"});
switch ($(this).attr('id')) {
case "1":
case "2":
case "3":
case "4":
$(`#div${$(this).attr('id')}`).css({"display":"block"});
break;
case "reset":
$('.boxdiv').css({"display":"block"});
break;
}
});
```
```html
B1
B2
B3
B4
Reset
```
Upvotes: -1
|
2018/03/15
| 757 | 2,487 |
<issue_start>username_0: I have an ABAP internal table. Structured, with several columns (e.g. 25). Names and types are irrelevant. The table can get pretty large (e.g. 5,000 records).
```
| A | B | ... |
| --- | --- | --- |
| 7 | X | ... |
| 2 | CCC | ... |
| 42 | DD | ... |
```
Now I'd like to set one of the columns (e.g. B) to a specific constant value (e.g. 'Z').
**What is the shortest, fastest, and most memory-efficient way to do this?**
My best guess is a `LOOP REFERENCE INTO`. This is pretty efficient as it changes the table in-place, without wasting new memory. But it takes up three statements, which makes me wonder whether it's possible to get shorter:
```
LOOP AT lt_table REFERENCE INTO DATA(ls_row).
ls_row->b = 'Z'.
ENDLOOP.
```
Then there is the `VALUE` operator which reduces this to one statement but is not very efficient because it creates new memory areas. It also gets longish for a large number of columns, because they have to be listed one by one:
```
lt_table = VALUE #( FOR ls_row in lt_table ( a = ls_row-a
b = 'Z' ) ).
```
Are there better ways?<issue_comment>username_1: If you have a workarea declared...
```
workarea-field = 'Z'.
modify table from workarea transporting field where anything.
```
*Edition:
After been able to check that syntax in my current system, I could prove (to myself) that the WHERE clause must be added or a DUMP is raised.
Thanks username_2.*
Upvotes: 0 <issue_comment>username_2: The following code sets PRICE = 0 of all lines at a time. Theoritically, it should be the fastest way to update all the lines of one column, because it's one statement. Note that it's impossible to omit the WHERE, so I use a simple trick to update all lines.
```
DATA flights TYPE TABLE OF sflight.
DATA flight TYPE sflight.
SELECT * FROM sflight INTO TABLE flights.
flight-price = 0.
MODIFY flights FROM flight TRANSPORTING price WHERE price <> flight-price.
```
Reference: [MODIFY itab - itab\_lines](https://help.sap.com/http.svc/rc/abapdocu_752_index_htm/7.52/en-US/index.htm?file=abapmodify_itab_multiple.htm)
Upvotes: 4 [selected_answer]<issue_comment>username_3: Based on [username_2s Answer](https://stackoverflow.com/a/49329631/19187112) if you don't have a workarea you can use:
```
DATA flights TYPE TABLE OF sflight.
SELECT * FROM sflight INTO TABLE flights.
MODIFY flights FROM VALUE #( price = 1 ) TRANSPORTING price WHERE price <> 0.
```
Upvotes: 0
|
2018/03/15
| 393 | 1,334 |
<issue_start>username_0: How can I achieve this? I am getting an error `incorrect syntax near between`
```
@STARTDATE, @ENDDATE, @CREATEDDATE -- INPUT PARAMETERS
SELECT *
FROM [DBO].[xxx] x
INNER JOIN [DBO].[yyy] y
ON x.ID = y.ID
AND CASE WHEN @CREATEDDATE = 1
THEN x.CreatedDate BETWEEN @STARTDATE AND @ENDDATE
WHEN @CREATEDDATE = 0
THEN x.ClosedDate BETWEEN @STARTDATE AND @ENDDATE
```<issue_comment>username_1: SQL Server does not treat boolean expressions as something that a `case` expression can return. In fact, `case` in `on` and `where` clauses is to be discouraged. And you can easily express this using more basic boolean operations:
```
SELECT *
FROM [DBO].[xxx] x INNER JOIN
[DBO].[yyy] y
ON x.ID = y.ID AND
( (@CREATEDDATE = 1 AND x.CreatedDate BETWEEN @STARTDATE AND @ENDDATE) OR
(@CREATEDDATE = 0 AND x.ClosedDate BETWEEN @STARTDATE AND @ENDDATE)
)
```
Upvotes: 2 <issue_comment>username_2: You can try the following query
```
SELECT *
FROM [DBO].[xxx] x
INNER JOIN [DBO].[yyy] y
ON x.ID = y.ID
AND CASE WHEN @CREATEDDATE = 1 AND x.CreatedDate BETWEEN @STARTDATE AND @ENDDATE THEN 1
WHEN @CREATEDDATE = 0 AND x.ClosedDate BETWEEN @STARTDATE AND @ENDDATE THEN 1
ELSE 0 END = 1
```
Upvotes: 0
|
2018/03/15
| 2,734 | 4,137 |
<issue_start>username_0: When I have many elements in an array, Julia REPL only shows some part of it. For example:
```
julia> x = rand(100,2);
julia> x
100×2 Array{Float64,2}:
0.277023 0.0826133
0.186201 0.76946
0.534247 0.777725
0.942698 0.0239694
0.498693 0.0285596
⋮
0.383858 0.959607
0.987775 0.20382
0.319679 0.69348
0.491127 0.976363
```
Is there any way to show all elements in the vertical form as above? `print(x)` or `showall(x)` put it in an ugly form without line changes.<issue_comment>username_1: *NOTE: in 0.7, `Base.STDOUT` has been renamed to `Base.stdout`. The rest should work unchanged.
---*
There are a lot of internally used methods in [`base/arrayshow.jl`](https://github.com/JuliaLang/julia/blob/f3118ce6a39b1adeac345c6ddd9c46d9f6676316/base/arrayshow.jl) doing stuff related to this. I found
```
Base.print_matrix(STDOUT, x)
```
to work. The limiting behaviour can be restored by using an `IOContext`:
```
Base.print_matrix(IOContext(STDOUT, :limit => true), x)
```
However, this method only prints the values, not the header information containing the type. But we can retrieve that header using `summary` (which I found out looking at [this](https://github.com/JuliaLang/julia/blob/92879ea7802804a0c16f70c081c56773ce3946a9/stdlib/LinearAlgebra/src/bidiag.jl#L219)).
Combining both:
```
function myshowall(io, x, limit = false)
println(io, summary(x), ":")
Base.print_matrix(IOContext(io, :limit => limit), x)
end
```
Example:
```
julia> myshowall(STDOUT, x[1:30, :], true)
30×2 Array{Float64,2}:
0.21730681784436 0.5737060668051441
0.6266216317547848 0.47625168078991886
0.9726153326748859 0.8015583406422266
0.2025063774372835 0.8980835847636988
0.5915731785584124 0.14211295083173403
0.8697483851126573 0.10711267862191032
0.2806684748462547 0.1663862576894135
0.87125664767098 0.1927759597335088
0.8106696671235174 0.8771542319415393
0.14276026457365587 0.23869679483621642
0.987513511756988 0.38605250840302996
⋮
0.9587892008777128 0.9823155299532416
0.893979917305394 0.40184945077330836
0.6248799650411605 0.5002213828574473
0.13922016844193186 0.2697416140839628
0.9614124092388507 0.2506075363030087
0.8403420376444073 0.6834231190218074
0.9141176587557365 0.4300133583400858
0.3728064777779758 0.17772360447862634
0.47579213503909745 0.46906998919124576
0.2576800028360562 0.9045669936804894
julia> myshowall(STDOUT, x[1:30, :], false)
30×2 Array{Float64,2}:
0.21730681784436 0.5737060668051441
0.6266216317547848 0.47625168078991886
0.9726153326748859 0.8015583406422266
0.2025063774372835 0.8980835847636988
0.5915731785584124 0.14211295083173403
0.8697483851126573 0.10711267862191032
0.2806684748462547 0.1663862576894135
0.87125664767098 0.1927759597335088
0.8106696671235174 0.8771542319415393
0.14276026457365587 0.23869679483621642
0.987513511756988 0.38605250840302996
0.8230271471019499 0.37242899586931943
0.9138200958138099 0.8068913133278408
0.8525161103718151 0.5975492199077801
0.20865490007184317 0.7176626477090138
0.708988887470049 0.8600690517032243
0.5858885634109547 0.9900228746877875
0.4207526577539027 0.4509115980616851
0.26721679563705836 0.38795692270409465
0.5627701589178917 0.5191793105440308
0.9587892008777128 0.9823155299532416
0.893979917305394 0.40184945077330836
0.6248799650411605 0.5002213828574473
0.13922016844193186 0.2697416140839628
0.9614124092388507 0.2506075363030087
0.8403420376444073 0.6834231190218074
0.9141176587557365 0.4300133583400858
0.3728064777779758 0.17772360447862634
0.47579213503909745 0.46906998919124576
0.2576800028360562 0.9045669936804894
```
**However**, I would wait for some opinions about whether `print_matrix` can be relied upon, given that it is not exported from Base...
Upvotes: 5 [selected_answer]<issue_comment>username_2: A short/clean solution is
```
show(stdout, "text/plain", rand(100,2))
```
Upvotes: 3
|
2018/03/15
| 929 | 2,827 |
<issue_start>username_0: I am very new to 2D Arrays in python. I am trying to create a 2D array that asks the user to input a name and then finds the name in the array and prints out what position that name is.
My code so far is:
```
pupil_name = [['Jess', 0], ['Tom', 1], ['Erik', 2]]
enter_pupil = input('Enter name of Pupil ')
print(str(pupil_name) + ' is sitting on chair number ' + str([]))
print(' ')
```
Is what I am asking possible? It is just for fun and would love to make it work. Thanks in advance<issue_comment>username_1: You want a dictionary.
```
>>> pupil_name = [['Jess', 0], ['Tom', 1], ['Erik', 2]]
>>> pupil_pos = dict(pupil_name)
>>>
>>> pupil_pos
{'Jess': 0, 'Erik': 2, 'Tom': 1}
>>> pupil_pos['Erik']
2
```
This gives you a mapping of names to positions, which you can query by providing the name.
Upvotes: 0 <issue_comment>username_2: As others have advised in comments you should use a dict.
You should also be using [raw\_input](https://docs.python.org/2/library/functions.html#raw_input) instead of [input](https://docs.python.org/2/library/functions.html#input) as it converts the user input to a `str`.
```
student_chair_numbers = {
'Jess': 0,
'Tom': 1,
'Erik': 2,
}
enter_pupil = raw_input('Enter name of Pupil ')
print(enter_pupil + ' is sitting on chair number ' + str(student_chair_numbers[enter_pupil]))
```
Upvotes: 0 <issue_comment>username_3: Here's a solution that uses your nested lists. See username_2's answer for a `dict` solution.
```
pupil_name = [['Jess', 0], ['Tom', 1], ['Erik', 2]]
def find_chair(name, chair_numbers):
for (n, c) in chair_numbers:
if n == name:
return c
return None
enter_pupil = input('Enter name of Pupil ')
print(str(enter_pupil) + ' is sitting on chair number ' + str(find_chair(enter_pupil, pupil_name)))
print(' ')
```
Upvotes: 0 <issue_comment>username_4: You should use a dictionary, as others pointed out. However, if you still want to keep a 2D list, here is what you can do:
```
pupil_name = [['Jess', 0], ['Tom', 1], ['Erik', 2]]
enter_pupil = input('Enter name of Pupil ')
for pupil, seat in pupil_name:
if pupil == enter_pupil:
print('{} is seating at chair number {}'.format(pupil, seat))
break
else:
print('Not found: {}'.format(enter_pupil))
```
Notes
=====
* The code loops through `pupil_name` and each iteration assigned the sub list to `pupil` and `seat`.
* If we found a match, we print out the name and seat and break out of the loop. There is no point to keep looping since we found what we want.
* The `else` clause is an interesting and unique aspect of Python `for` loop: if we loop through all names/seats and did not break (i.e. did not find the name), then the code under the `else` clause is executed.
Upvotes: 3 [selected_answer]
|
2018/03/15
| 823 | 2,943 |
<issue_start>username_0: I have the following code to listen to click events on div elements.
HTML:
```
```
JavaScript:
```
document.getElementById("container").addEventListener("click",function(e) {
if (e.target && e.target.matches("div"))
{
console.log("Square element clicked!");
SquareBackground=this.style.backgroundColor;
console.log(SquareBackground);
SquareId=this.getAttribute('id');
console.log(SquareId);
}
});
```
I have created an array for setting the background color of the div elements and the code works properly in the browser. When I click on any square divs, I am getting the "Square Element clicked" message. However, when I try to print the background color of the clicked div, I am getting an empty value in the console. Also, for the second output, I am getting "container" instead of the id of the clicked div. Please help.
EDIT:
This function is used to set the background color for the squares. The array combinedColors[] contains randomly generated RGB values eg. rgb(255,0,9).
```
function changeSquareColor()
{
for(i=0;i
```
CSS:
```
.square{
width: 20%;
background: blue;
float:left;
padding-bottom: 20%;
margin: 1.66%;
}
body{
background-color: black;
}
#container{
margin: 20px auto;
width: 600px;
};
```<issue_comment>username_1: [**`HTMLElement.style`**](https://developer.mozilla.org/en-US/docs/Web/API/HTMLElement/style) will get only inline styles ().
The `backgroundColor` in your element comes from its class (I guess), not an inline style.
To get the currently applied/computed style, you have to use [**`window.getComputedStyle`**](https://developer.mozilla.org/en-US/docs/Web/API/Window/getComputedStyle):
```js
var element = document.getElementById('blueDiv');
var pre = document.getElementById('style');
pre.innerHTML = 'Its background color is: ' + window.getComputedStyle(element).getPropertyValue("background-color");
```
```css
.blue{
background-color: blue;
padding: 5px;
font-family: sans-serif;
font-weight: bold;
font-size: 20px;
color: white;
}
```
```html
I'm blue da bu dee da bu die
```
Upvotes: 4 [selected_answer]<issue_comment>username_2: This seems to work for some reason if you set the background color in a css class it won't see it but inline it retrieves it fine. Also the this code wasn't returning and id or color but if you use the target div you can get its properties.
```js
document.getElementById("container").addEventListener("click", function(e) {
if (e.target && e.target.matches("div")) {
console.log("Square element clicked!");
SquareBackground = e.target.style.backgroundColor;
console.log(SquareBackground);
SquareId = e.target.getAttribute('id');
console.log(SquareId);
}
});
```
```css
.square {
border: 1px solid black;
height: 50px;
width: 50px;
}
```
```html
```
Upvotes: 2
|
2018/03/15
| 576 | 2,062 |
<issue_start>username_0: I've got this somewhat ugly code writing to a bunch of properties. It seems I should be able to loop through a list to do this. How would I loop through `['command','options','library']` and set the associated property?
```
try:
self.command = data\_dict['command']
except KeyError:
pass
try:
self.options = data\_dict['options']
except KeyError:
pass
try:
self.library = data\_dict['library']
except KeyError:
pass
```<issue_comment>username_1: [**`HTMLElement.style`**](https://developer.mozilla.org/en-US/docs/Web/API/HTMLElement/style) will get only inline styles ().
The `backgroundColor` in your element comes from its class (I guess), not an inline style.
To get the currently applied/computed style, you have to use [**`window.getComputedStyle`**](https://developer.mozilla.org/en-US/docs/Web/API/Window/getComputedStyle):
```js
var element = document.getElementById('blueDiv');
var pre = document.getElementById('style');
pre.innerHTML = 'Its background color is: ' + window.getComputedStyle(element).getPropertyValue("background-color");
```
```css
.blue{
background-color: blue;
padding: 5px;
font-family: sans-serif;
font-weight: bold;
font-size: 20px;
color: white;
}
```
```html
I'm blue da bu dee da bu die
```
Upvotes: 4 [selected_answer]<issue_comment>username_2: This seems to work for some reason if you set the background color in a css class it won't see it but inline it retrieves it fine. Also the this code wasn't returning and id or color but if you use the target div you can get its properties.
```js
document.getElementById("container").addEventListener("click", function(e) {
if (e.target && e.target.matches("div")) {
console.log("Square element clicked!");
SquareBackground = e.target.style.backgroundColor;
console.log(SquareBackground);
SquareId = e.target.getAttribute('id');
console.log(SquareId);
}
});
```
```css
.square {
border: 1px solid black;
height: 50px;
width: 50px;
}
```
```html
```
Upvotes: 2
|
2018/03/15
| 1,223 | 4,953 |
<issue_start>username_0: I have the following layout in eclipse SWT: A GridLayout with one column and two children, one at the top, one at the bottom. The bottom one is the Expandpar. When the ExpandBar is expanded or collapsed, the size of the two children should be updated. Here is the Code:
```
package tests.julia.layout;
import org.eclipse.swt.SWT;
import org.eclipse.swt.layout.FillLayout;
import org.eclipse.swt.layout.GridData;
import org.eclipse.swt.layout.GridLayout;
import org.eclipse.swt.widgets.Composite;
import org.eclipse.swt.widgets.Display;
import org.eclipse.swt.widgets.Event;
import org.eclipse.swt.widgets.ExpandBar;
import org.eclipse.swt.widgets.ExpandItem;
import org.eclipse.swt.widgets.Group;
import org.eclipse.swt.widgets.Listener;
import org.eclipse.swt.widgets.Shell;
public class MyLayoutExpandBar {
public static void main (String [] args) {
final Display display = new Display ();
Shell shell = new Shell (display);
shell.setLayout (new FillLayout());
final Composite parent = new Composite (shell, SWT.BORDER);
parent.setLayout(new GridLayout());
Composite comp = new Composite(parent, SWT.BORDER);
comp.setLayoutData(new GridData(SWT.FILL, SWT.FILL, true, true));
final ExpandBar expandBar = new ExpandBar (parent, SWT.V_SCROLL);
expandBar.setLayoutData(new GridData(SWT.FILL, SWT.END, true, false));
ExpandItem item = new ExpandItem (expandBar, SWT.NONE);
item.setText("Expand Item");
item.setHeight(200);
item.setExpanded(true);
Group group = new Group(expandBar, SWT.NONE);
group.setText("Item Content");
item.setControl(group);
/* Listener */
expandBar.addListener(SWT.Expand, new Listener() {
@Override
public void handleEvent(Event event) {
display.asyncExec(new Runnable() {
@Override
public void run() {
System.out.println("Expand: " + expandBar.getSize());
parent.layout();
System.out.println("Expand: " + expandBar.getSize());
}
});
}
});
expandBar.addListener(SWT.Collapse, new Listener() {
@Override
public void handleEvent(Event event) {
display.asyncExec(new Runnable() {
@Override
public void run() {
System.out.println("Collapse: " + expandBar.getSize());
parent.layout();
System.out.println("Collapse: " + expandBar.getSize());
}
});
}
});
shell.setText("Shell");
shell.setSize(300, 300);
shell.open ();
while (!shell.isDisposed ()) {
if (!display.readAndDispatch ())
display.sleep ();
}
display.dispose ();
}
}
```
This works correct under Windows, but under Linux the size of the ExpandBar is small when expanded and large when collapsed, so just opposite to what it should be. When the Shell is resized (by dragging its border) the size of the ExpandBar becomes correct.
Does anyone know how to make this right?
Thanks, Julia<issue_comment>username_1: [**`HTMLElement.style`**](https://developer.mozilla.org/en-US/docs/Web/API/HTMLElement/style) will get only inline styles ().
The `backgroundColor` in your element comes from its class (I guess), not an inline style.
To get the currently applied/computed style, you have to use [**`window.getComputedStyle`**](https://developer.mozilla.org/en-US/docs/Web/API/Window/getComputedStyle):
```js
var element = document.getElementById('blueDiv');
var pre = document.getElementById('style');
pre.innerHTML = 'Its background color is: ' + window.getComputedStyle(element).getPropertyValue("background-color");
```
```css
.blue{
background-color: blue;
padding: 5px;
font-family: sans-serif;
font-weight: bold;
font-size: 20px;
color: white;
}
```
```html
I'm blue da bu dee da bu die
```
Upvotes: 4 [selected_answer]<issue_comment>username_2: This seems to work for some reason if you set the background color in a css class it won't see it but inline it retrieves it fine. Also the this code wasn't returning and id or color but if you use the target div you can get its properties.
```js
document.getElementById("container").addEventListener("click", function(e) {
if (e.target && e.target.matches("div")) {
console.log("Square element clicked!");
SquareBackground = e.target.style.backgroundColor;
console.log(SquareBackground);
SquareId = e.target.getAttribute('id');
console.log(SquareId);
}
});
```
```css
.square {
border: 1px solid black;
height: 50px;
width: 50px;
}
```
```html
```
Upvotes: 2
|
2018/03/15
| 595 | 2,332 |
<issue_start>username_0: I am getting a warning about the usage of deprecated features in my build.
Is there a way to list all the deprecated features so that I may go through and update my code?
\***clarification**
I know I can go to the Gradle documentation and see what is now deprecated, what I would specifically like is a way to go through MY code and list MY deprecated features.<issue_comment>username_1: Use Gradle option `-Dorg.gradle.warning.mode=(all,none,summary)` to control verbosity, for example, mode `all` will log all warnings with detailed descriptions:
```sh
./gradlew build -Dorg.gradle.warning.mode=all
```
More details can be found in the official documentation: [Showing or hiding warnings](https://docs.gradle.org/current/userguide/command_line_interface.html#sec:command_line_warnings)
Upvotes: 4 <issue_comment>username_2: 1. Go to the build.Gradle (Module) file.
2. replace **Compile** with **implementation**.
3. Sync the gradle file and you will not receive the warning again.
Upvotes: 0 <issue_comment>username_3: I just faced the exact same problem and running the Gradle build task every time through the command line wasn't the best option for me because, during development, I usually just use the built-in Gradle build task run, so:
>
> I know I can go to the Gradle documentation and see what is now deprecated, what I would specifically like is a way to go through MY code and list out MY deprecated features.
>
>
>
You can do this by adding the mentioned `--warning-mode=all` flag to your gradle command line options in your Android Studio settings:
[](https://i.stack.imgur.com/ZZ5GL.png)
This will print the proper warnings for you to be aware of what are the specific deprecated features **your app** is using.
Also, I know you asked this near a year ago, but, it might be useful for other people facing the same issue.
Upvotes: 5 <issue_comment>username_4: In order to change the warning verbosity level in the Android Studio IDE, you can add the option `org.gradle.warning.mode=(all,none,summary)` to the **gradle.properties** file, which can be found in the root directory of your project.
Add following line to set the warning mode to `all`:
```
...
org.gradle.warning.mode=all
...
```
Upvotes: 4
|
2018/03/15
| 527 | 2,079 |
<issue_start>username_0: I'm trying to create a method that checks a date to see if it already exists within my database. However I'm getting an exception that the program is having trouble converting from varchar. Anyone able to tell me why it's doing so?
```
public static int CheckDate(DateTime date)
{
using (SqlConnection connection = new SqlConnection(_connectionstring))
{
int nooforders = 0;
connection.Open();
string SqlQuery = string.Format("SELECT COUNT(OrderID) AS 'OrderCount' FROM OrderTable WHERE DateofCollectionDelivery = '{0}'", date);
SqlCommand datecheck = new SqlCommand(SqlQuery, connection);
SqlDataReader sqlDataReader = datecheck.ExecuteReader();
while (sqlDataReader.Read())
{
nooforders = (int)sqlDataReader["OrderCount"];
}
connection.Close();
return nooforders;
}
}
```
This is the method I run and this is the exception:
```
An unhandled exception of type 'System.Data.SqlClient.SqlException' occurred in System.Data.dll
Additional information: The conversion of a varchar data type to a datetime data type resulted in an out-of-range value.
```
The fields in the table have a datatype of datetime, so I'm unsure why it's throwing the exception.<issue_comment>username_1: Use the format mm/dd/yyyy or yyyy-mm-dd. use the DateTime.Parse() function to parse your date to this format.
Upvotes: -1 <issue_comment>username_2: Try solution provided in following link:
[SQL - The conversion of a varchar data type to a datetime data type resulted in an out-of-range value](https://stackoverflow.com/questions/20838344/sql-the-conversion-of-a-varchar-data-type-to-a-datetime-data-type-resulted-in)
or try this one
[Conversion of a varchar data type to a datetime data type resulted in an out-of-range value in SQL query](https://stackoverflow.com/questions/28271414/conversion-of-a-varchar-data-type-to-a-datetime-data-type-resulted-in-an-out-of?lq=1)
Upvotes: 2 [selected_answer]
|
2018/03/15
| 792 | 2,586 |
<issue_start>username_0: I'm trying to build a system where, if user lands on a page for the first time nothing should happen, but if same user visit again then that page should not load and instead he should go to a different URL.
```
function session() {
if (document.cookie.indexOf("visited") > 0) {
window.location.href = "www.google.com";
} else {
document.cookie = "visited";
}
}
```
Here is the completed html just to test it's workig
```
1st visit
function session() {
if (document.cookie.indexOf("visited") > 0) {
window.location.href = "www.google.com";
} else {
document.cookie = "visited";
}
}
```<issue_comment>username_1: Here is a solution based on the [Duplicate link](https://stackoverflow.com/a/27370319/1220550) that I posted.
I am not posting it as a Code Snippet, because for security reasons Code Snippets can't seem to work with cookies. Instead here is a working **[JSFiddle](https://jsfiddle.net/1t7mxzoq/6/)**.
```
function setCookie(c_name,value,exdays){var exdate=new Date();exdate.setDate(exdate.getDate() + exdays);var c_value=escape(value) + ((exdays==null) ? "" : "; expires="+exdate.toUTCString());document.cookie=c_name + "=" + c_value;}
function getCookie(c_name){var c_value = document.cookie;var c_start = c_value.indexOf(" " + c_name + "=");if (c_start == -1){c_start = c_value.indexOf(c_name + "=");}if (c_start == -1){c_value = null;}else{c_start = c_value.indexOf("=", c_start) + 1;var c_end = c_value.indexOf(";", c_start);if (c_end == -1){c_end = c_value.length;}c_value = unescape(c_value.substring(c_start,c_end));}return c_value;}
checkSession();
function checkSession(){
var c = getCookie("visited");
if (c === "yes") {
alert("Welcome back!");
} else {
alert("Welcome new visitor!");
}
setCookie("visited", "yes", 365); // expire in 1 year; or use null to never expire
}
```
Upvotes: 4 [selected_answer]<issue_comment>username_2: I'm recommend to use local storage =)
```js
var first_visit = false;
checkFirstVisit();
function checkFirstVisit(){
if(localStorage.getItem('was_visited')){
return;
}
first_visit = true;
localStorage.setItem('was_visited', 1);
}
console.log(first_visit);
```
Upvotes: 2 <issue_comment>username_3: You may do this
```js
function userfirstcheck(){
var usercheck = localStorage.getItem('delete')
if(usercheck == null){
alert('hi')
localStorage.setItem("delete", "userforfirst")
}
else if(usercheck != null){
alert('welcome back')
}
}
```
Upvotes: 0
|
2018/03/15
| 445 | 1,312 |
<issue_start>username_0: I have created a list containing 10 arrays that consist of 20 random numbers between 0 and 1 each.
Now, I wish to multiply each array in the list with the numbers `0.05`, `0.1`, ..., to `1.0` so that none of the elements in each array is larger than the number it is multiplied with.
For example, all the `20` elements in the first array should lie between `0` and `0.05`, all the elements in the second array between `0` and `0.10` and so on.
I create a list of `10` random arrays and a range of numbers between `0` and `1` with:
```
range1 = np.arange(0.005, 0.105, 0.005)
noise1 = [abs(np.random.uniform(0,1,20)) for i in range(10)]
```
I then try to multiply the elements with:
```
noise2 = [noise1 * range1 for i in noise1]
```
But this doesn't work and just causes all the arrays in the list to have the same values.
I would really appreciate some help with how to do this.<issue_comment>username_1: Hoping I have clearly understood the question and hence providing this solution.
`noise2 = [noise1[i] * range1[i] for i in range(len(noise1))]`
Upvotes: 2 [selected_answer]<issue_comment>username_2: A more **pythonic** way would be using `zip`:
```
range1 = [1, 2, 3]
noise1 = [3, 4, 5]
noise2 = [i * j for i, j in zip(range1, noise1)]
# [3, 8, 15]
```
Upvotes: 0
|
2018/03/15
| 3,231 | 11,418 |
<issue_start>username_0: Flow duration curves are a common way in hydrology (and other fields) to visualize timeseries. They allow an easy assessment of the high and low values in a timeseries and how often certain values are reached. Is there an easy way in Python to plot it? I could not find any matplotlib tools, which would allow it. Also no other package seems to include it, at least not with the possibility to plot a range of flow duration curves easily.
An example for a flow duration curve would be:
[](https://i.stack.imgur.com/T9GkD.jpg)
An explantion on how to create it in general can be found here:
<http://www.renewablesfirst.co.uk/hydropower/hydropower-learning-centre/what-is-a-flow-duration-curve/>
So the basic calculation and plotting of the flow duration curve are pretty straightforward. Simply calculate the exceedence and plot it against the sorted timeseries (see the answer of ImportanceOfBeingErnest). It gets more difficult though if you have several timeseries and want to plot the range of the values for all exceedence probabilities. I present one solution in my answer to this thread, but would be glad to hear more elegant solutions. My solution also incorporates an easy use as a subplot, as it is common to have several timeseries for different locations, that have to be plotted seperately.
An example for what I mean with range of flow duration curves would be this:
[](https://i.stack.imgur.com/AGrLr.png)
Here you can see three distinct curves. The black line is the measured value from a river, while the two shaded areas are the range for all model runs of those two models. So what would be the most easy way to calculate and plot a range of flow duration curves for several timeseries?<issue_comment>username_1: EDIT: As my first answer was overly complicated and unelegant, I rewrote it to incorporate the solutions by username_2. I still keep the new version here, alongside the one by username_2, because I think the additional functionality might make it easier for other people to plot flow duration curves for their timeseries. If someone might have additional ideas see: [Github Repository](https://github.com/zutn/Flow-Duration-Curve)
Features are:
* Changing the percentiles for a range flow duration curve
* Easy usage as standalone figure or subplot. If an subplot object is provided the flow duration curve is drawn in this one. When None is provided it creates one and returns it
* Seperate kwargs for the range curve and its comparison
* Changing the y-axis to logarithmic scale with a keyword
* Extended example to help understand its usage.
The code is the following:
```
# -*- coding: utf-8 -*-
"""
Created on Thu Mar 15 10:09:13 2018
@author: <NAME>
"""
import matplotlib.pyplot as plt
import pandas as pd
import numpy as np
def flow_duration_curve(x, comparison=None, axis=0, ax=None, plot=True,
log=True, percentiles=(5, 95), decimal_places=1,
fdc_kwargs=None, fdc_range_kwargs=None,
fdc_comparison_kwargs=None):
"""
Calculates and plots a flow duration curve from x.
All observations/simulations are ordered and the empirical probability is
calculated. This is then plotted as a flow duration curve.
When x has more than one dimension along axis, a range flow duration curve
is plotted. This means that for every probability a min and max flow is
determined. This is then plotted as a fill between.
Additionally a comparison can be given to the function, which is plotted in
the same ax.
:param x: numpy array or pandas dataframe, discharge of measurements or
simulations
:param comparison: numpy array or pandas dataframe of discharge that should
also be plotted in the same ax
:param axis: int, axis along which x is iterated through
:param ax: matplotlib subplot object, if not None, will plot in that
instance
:param plot: bool, if False function will not show the plot, but simply
return the ax object
:param log: bool, if True plot on loglog axis
:param percentiles: tuple of int, percentiles that should be used for
drawing a range flow duration curve
:param fdc_kwargs: dict, matplotlib keywords for the normal fdc
:param fdc_range_kwargs: dict, matplotlib keywords for the range fdc
:param fdc_comparison_kwargs: dict, matplotlib keywords for the comparison
fdc
return: subplot object with the flow duration curve in it
"""
# Convert x to an pandas dataframe, for easier handling
if not isinstance(x, pd.DataFrame):
x = pd.DataFrame(x)
# Get the dataframe in the right dimensions, if it is not in the expected
if axis != 0:
x = x.transpose()
# Convert comparison to a dataframe as well
if comparison is not None and not isinstance(comparison, pd.DataFrame):
comparison = pd.DataFrame(comparison)
# And transpose it is neccesary
if axis != 0:
comparison = comparison.transpose()
# Create an ax is neccesary
if ax is None:
fig, ax = plt.subplots(1,1)
# Make the y scale logarithmic if needed
if log:
ax.set_yscale("log")
# Determine if it is a range flow curve or a normal one by checking the
# dimensions of the dataframe
# If it is one, make a single fdc
if x.shape[1] == 1:
plot_single_flow_duration_curve(ax, x[0], fdc_kwargs)
# Make a range flow duration curve
else:
plot_range_flow_duration_curve(ax, x, percentiles, fdc_range_kwargs)
# Add a comparison to the plot if is present
if comparison is not None:
ax = plot_single_flow_duration_curve(ax, comparison[0],
fdc_comparison_kwargs)
# Name the x-axis
ax.set_xlabel("Exceedence [%]")
# show if requested
if plot:
plt.show()
return ax
def plot_single_flow_duration_curve(ax, timeseries, kwargs):
"""
Plots a single fdc into an ax.
:param ax: matplotlib subplot object
:param timeseries: list like iterable
:param kwargs: dict, keyword arguments for matplotlib
return: subplot object with a flow duration curve drawn into it
"""
# Get the probability
exceedence = np.arange(1., len(timeseries) + 1) / len(timeseries)
exceedence *= 100
# Plot the curve, check for empty kwargs
if kwargs is not None:
ax.plot(exceedence, sorted(timeseries, reverse=True), **kwargs)
else:
ax.plot(exceedence, sorted(timeseries, reverse=True))
return ax
def plot_range_flow_duration_curve(ax, x, percentiles, kwargs):
"""
Plots a single range fdc into an ax.
:param ax: matplotlib subplot object
:param x: dataframe of several timeseries
:param decimal_places: defines how finely grained the range flow duration
curve is calculated and drawn. A low values makes it more finely grained.
A value which is too low might create artefacts.
:param kwargs: dict, keyword arguments for matplotlib
return: subplot object with a range flow duration curve drawn into it
"""
# Get the probabilites
exceedence = np.arange(1.,len(np.array(x))+1) /len(np.array(x))
exceedence *= 100
# Sort the data
sort = np.sort(x, axis=0)[::-1]
# Get the percentiles
low_percentile = np.percentile(sort, percentiles[0], axis=1)
high_percentile = np.percentile(sort, percentiles[1], axis=1)
# Plot it, check for empty kwargs
if kwargs is not None:
ax.fill_between(exceedence, low_percentile, high_percentile, **kwargs)
else:
ax.fill_between(exceedence, low_percentile, high_percentile)
return ax
```
How to use it:
```
# Create test data
np_array_one_dim = np.random.rayleigh(5, [1, 300])
np_array_75_dim = np.c_[np.random.rayleigh(11 ,[25, 300]),
np.random.rayleigh(10, [25, 300]),
np.random.rayleigh(8, [25, 300])]
df_one_dim = pd.DataFrame(np.random.rayleigh(9, [1, 300]))
df_75_dim = pd.DataFrame(np.c_[np.random.rayleigh(8, [25, 300]),
np.random.rayleigh(15, [25, 300]),
np.random.rayleigh(3, [25, 300])])
df_75_dim_transposed = pd.DataFrame(np_array_75_dim.transpose())
# Call the function with all different arguments
fig, subplots = plt.subplots(nrows=2, ncols=3)
ax1 = flow_duration_curve(np_array_one_dim, ax=subplots[0,0], plot=False,
axis=1, fdc_kwargs={"linewidth":0.5})
ax1.set_title("np array one dim\nwith kwargs")
ax2 = flow_duration_curve(np_array_75_dim, ax=subplots[0,1], plot=False,
axis=1, log=False, percentiles=(0,100))
ax2.set_title("np array 75 dim\nchanged percentiles\nnolog")
ax3 = flow_duration_curve(df_one_dim, ax=subplots[0,2], plot=False, axis=1,
log=False, fdc_kwargs={"linewidth":0.5})
ax3.set_title("\ndf one dim\nno log\nwith kwargs")
ax4 = flow_duration_curve(df_75_dim, ax=subplots[1,0], plot=False, axis=1,
log=False)
ax4.set_title("df 75 dim\nno log")
ax5 = flow_duration_curve(df_75_dim_transposed, ax=subplots[1,1],
plot=False)
ax5.set_title("df 75 dim transposed")
ax6 = flow_duration_curve(df_75_dim, ax=subplots[1,2], plot=False,
comparison=np_array_one_dim, axis=1,
fdc_comparison_kwargs={"color":"black",
"label":"comparison",
"linewidth":0.5},
fdc_range_kwargs={"label":"range_fdc"})
ax6.set_title("df 75 dim\n with comparison\nwith kwargs")
ax6.legend()
# Show the beauty
fig.tight_layout()
plt.show()
```
The results look like this:
[](https://i.stack.imgur.com/U4cXd.jpg)
Upvotes: 3 [selected_answer]<issue_comment>username_2: If I understand the concept of a flow duration curve correctly, you just plot the flow as a function of the exceedence.
```
import numpy as np
import matplotlib.pyplot as plt
data = np.random.rayleigh(10,144)
sort = np.sort(data)[::-1]
exceedence = np.arange(1.,len(sort)+1) / len(sort)
plt.plot(exceedence*100, sort)
plt.xlabel("Exceedence [%]")
plt.ylabel("Flow rate")
plt.show()
```
[](https://i.stack.imgur.com/qEt36.png)
From this you easily read that a flow rate of 11 or larger is expected 60% of the time.
---
In case there are several datasets one may use `fill_between` to plot them as a range.
```
import numpy as np; np.random.seed(42)
import matplotlib.pyplot as plt
data0 = np.random.rayleigh(10,144)
data1 = np.random.rayleigh(9,144)
data2 = np.random.normal(10,5,144)
data = np.c_[data0, data1, data2]
exceedence = np.arange(1.,len(data)+1) /len(data)
sort = np.sort(data, axis=0)[::-1]
plt.fill_between(exceedence*100, np.min(sort, axis=1),np.max(sort, axis=1))
plt.xlabel("Exceedence [%]")
plt.ylabel("Flow rate")
plt.grid()
plt.show()
```
[](https://i.stack.imgur.com/3fOaL.png)
Upvotes: 3
|
2018/03/15
| 1,078 | 3,648 |
<issue_start>username_0: In my case it is permissions and roles so i combined result with union all
I just want check condition if user permission value = 0 then pick this else other one i am trying like this
```
SELECT username, orgId, uid, pid, perm FROM (
SELECT users.username, users.orgId, user_roles.uid, role_perms.pid, role_perms.value AS perm
FROM user_roles INNER JOIN role_perms
ON (user_roles.rid = role_perms.rid) INNER JOIN user_users ON(user_roles.uid = users.uid) WHERE role_perms.pid = 9 AND users.orgId = 2 AND users.username IS NOT NULL AND users.username != ''
UNION ALL
SELECT users.username, users.orgId, user_perms.uid, user_perms.pid, user_perms.value AS perm
FROM user_perms INNER JOIN users ON(user_perms.uid = users.uid) WHERE user_perms.pid = 9 AND user_users.orgId = 2 AND user_users.username is not null and user_users.username != '') AS combPerm;
```
It gives result as if permission is denied for one user in user\_perm table but that user also have role that contain particular permission
```
username | orgId | uid | pid | perm
<EMAIL> 2 11 9 0
<EMAIL> 2 11 9 1
<EMAIL> 2 91 9 1
<EMAIL> 2 88 9 1
```
In result i want <EMAIL> only one time if it has perm 0 from user\_perms table and all other record same, desired result is as
```
username | orgId | uid | pid | perm
<EMAIL> 2 11 9 0
<EMAIL> 2 91 9 1
<EMAIL> 2 88 9 1
```<issue_comment>username_1: `SELECT * FROM (YOUR QUERY) t WHERE perm=0 GROUP BY username, orgId, uid,pid;`
Upvotes: 0 <issue_comment>username_2: You can just take the `min(perm)` and add a group by to your query
```
SELECT username, orgId, uid, pid, min(perm) FROM (
-----the rest of the query
) AS combPerm group by username, orgId, uid, pid;
```
Upvotes: 3 [selected_answer]<issue_comment>username_3: You have accepted username_2's answer on how to give perm 0 precedence over perm 1. But now you say, you would rather give query 2 results precedence over query 1 results.
In standard SQL this would be easily done. You'd add a rank key to the queries (`select 1/2 as rankkey, ...`) and rank your results with `ROW_NUMBER` and keep the best match or you'd do this with `FETCH WITH TIES`.
MySQL does neither support window functions such as `ROW_NUMBER` nor a limit clause that allows for ties. Many such things are solved with variables in queries in MySQL. Here is another trick:
What you have is something like
```
select username, orgId, uid, pid, perm from table1
union all
select username, orgId, uid, pid, perm from table2;
```
and username_2 has shown you how to get the minimum perm per username, orgId, uid, pid. You however want somthing close but different :-) Here you go:
```
select username, orgId, uid, pid, max(keyperm) % 10 as perm
from
(
select username, orgId, uid, pid, perm + 10 as keyperm from table1
union all
select username, orgId, uid, pid, perm + 20 as keyperm from table2
)
group by username, orgId, uid, pid
order by username, orgId, uid, pid;
```
What is happpening here? You see I add 10 to the perm in the first query and 20 to the perm in the second query. `MAX` gives me the higher of the two, which is the second if both are present. We end up with a perm that is either 10 or 11 or 20 or 21. The operation % 10 (modulo ten) removes the ten's place and only keeps the one's place, which is 0 or 1, the perm.
(If perms can be two digits, add 100 and 200 and use %100, if more digits ... well you got the idea.)
Upvotes: 1
|
2018/03/15
| 888 | 3,413 |
<issue_start>username_0: I have a mongoose model in which some fields are like :
```
var AssociateSchema = new Schema({
personalInformation: {
familyName: { type: String },
givenName: { type: String }
}
})
```
I want to perform a '$regex' on the concatenation of familyName and givenName (something like 'familyName + " " + 'givenName'), for this purpose I'm using aggregate framework with $concat inside $project to produce a 'fullName' field and then '$regex' inside $match to search on that field. The code in mongoose for my query is:
```
Associate.aggregate([
{ $project: {fullName: { $concat: [
'personalInformation.givenName','personalInformation.familyName']}}},
$match: { fullName: { 'active': true, $regex: param, $options: 'i' } }}
])
```
But it's giving me error:
>
> MongoError: $concat only supports strings, not double on the first
> stage of my aggregate pipeline i.e $project stage.
>
>
>
Can anyone point out what I'm doing wrong ?<issue_comment>username_1: I managed to make it work by using $substr method, so the $project part of my aggregate pipeline is now:
```
`$project: {
fullName: {
$concat: [
{ $substr: ['$personalInformation.givenName', 0, -1] }, ' ', { $substr: ['$personalInformation.familyName', 0, -1] }
]
}
}
}`
```
Upvotes: 0 <issue_comment>username_2: Looking at your code, I'm not sure why $concat isn't working for you unless you've had some integers sneak into some of your document fields. Have you tried having a $-sign in front of your concatenated values? as in, '$personalInformation.givenName'? Are you sure every single familyName and givenName is a string, not a double, in your collection? All it takes is one double for your $concat to fold.
In any case, I had a similar type mismatch problem with actual doubles. $concat indeed supports only strings, and usually, all you'd do is cast any non-strings to strings.. but alas, at the time of this writing MongoDB 3.6.2 does not yet support integer/double => string casting, only date => string casting. Sad face.
That said, try adding this projection hack at the top of your query. This worked for me as a typecast. Just make sure you provide a long enough byte length (128-byte name is pretty long so you should be okay).
```
{
$project: {
castedGivenName: {
$substrBytes: [ 'personalInformation.givenName', 0, 128 ]
},
castedFamilyName: {
$substrBytes: [ 'personalInformation.familyName', 0, 128 ]
}
},
{
$project: {
fullName: {
$concat: [
'$castedGivenName',
'$castedFamilyName'
]
}
}
},
{
$match: { fullName: { 'active': true, $regex: param, $options: 'i' } }
}
```
Upvotes: 0 <issue_comment>username_3: I also got this error and then discovered that indeed one of the documents in the collection was to blame. They way I fished it out was by filtering by field type as [explained in the docs](https://docs.mongodb.com/manual/reference/operator/query/type/):
```
db.addressBook.find( { "zipCode" : { $type : "double" } } )
```
I found the field had the value `NaN`, which to my eyes wouldn't be a number, but mongodb interprets it as such.
Upvotes: 1
|
2018/03/15
| 959 | 3,385 |
<issue_start>username_0: I am trying to assign default value inside a sql statement like.
```
SELECT (CASE WHEN sd.IID IS NULL THEN 0 ELSE sd.IID END) AS IID,
pd.IID AS PurchaseOrerDetailsId,
i.[Description] AS Item,
sd.BatchNo, s.[Description] AS Unit,
CONVERT(varchar, sd.MfgDt, 103) AS MfgDt,
sd.Qty = 0,
CONVERT(varchar, sd.ExpiryDate, 103) AS ExpiryDate,
sd.PackSize ='',
pd.Qty = 0 AS QtyOrdered,
sd.MRP, sd.PTR,
sd.PurchaseRate,
sd.PTS,
sd.CGST,
sd.SGST,
sd.IGST,
DiscPer,
DiscVal,
sd.Qty * sd.PurchaseRate AS PurchaseValue,
(sd.Qty * sd.PurchaseRate * sd.CGST)/100 AS CGSTAmt,
(sd.Qty * sd.PurchaseRate * sd.SGST)/100 AS SGSTAmt,
(sd.Qty * sd.PurchaseRate * sd.IGST)/100 AS IGSTAmt,
i.IID AS ItemId
FROM PurchaseOrderDetails pd
```
But you can see "Qty = 0" or PackSize ='' inside SELECT statement will not work. How can I assign values inside a SELECT statement for for multiple fields.
Thanks
Partha<issue_comment>username_1: I managed to make it work by using $substr method, so the $project part of my aggregate pipeline is now:
```
`$project: {
fullName: {
$concat: [
{ $substr: ['$personalInformation.givenName', 0, -1] }, ' ', { $substr: ['$personalInformation.familyName', 0, -1] }
]
}
}
}`
```
Upvotes: 0 <issue_comment>username_2: Looking at your code, I'm not sure why $concat isn't working for you unless you've had some integers sneak into some of your document fields. Have you tried having a $-sign in front of your concatenated values? as in, '$personalInformation.givenName'? Are you sure every single familyName and givenName is a string, not a double, in your collection? All it takes is one double for your $concat to fold.
In any case, I had a similar type mismatch problem with actual doubles. $concat indeed supports only strings, and usually, all you'd do is cast any non-strings to strings.. but alas, at the time of this writing MongoDB 3.6.2 does not yet support integer/double => string casting, only date => string casting. Sad face.
That said, try adding this projection hack at the top of your query. This worked for me as a typecast. Just make sure you provide a long enough byte length (128-byte name is pretty long so you should be okay).
```
{
$project: {
castedGivenName: {
$substrBytes: [ 'personalInformation.givenName', 0, 128 ]
},
castedFamilyName: {
$substrBytes: [ 'personalInformation.familyName', 0, 128 ]
}
},
{
$project: {
fullName: {
$concat: [
'$castedGivenName',
'$castedFamilyName'
]
}
}
},
{
$match: { fullName: { 'active': true, $regex: param, $options: 'i' } }
}
```
Upvotes: 0 <issue_comment>username_3: I also got this error and then discovered that indeed one of the documents in the collection was to blame. They way I fished it out was by filtering by field type as [explained in the docs](https://docs.mongodb.com/manual/reference/operator/query/type/):
```
db.addressBook.find( { "zipCode" : { $type : "double" } } )
```
I found the field had the value `NaN`, which to my eyes wouldn't be a number, but mongodb interprets it as such.
Upvotes: 1
|
2018/03/15
| 450 | 1,577 |
<issue_start>username_0: How do I Deserialize the following. The problem is that the variable name is a number. So how should MyClass be defined?
```
json_str:
{"23521952": {"b": [], "o": []}, "23521953": {"b": [], "o": []}}
class MyClass { //? };
var var = JsonConvert.DeserializeObject(json\_str);
```<issue_comment>username_1: This sounds like the outer object is actually a dictionary:
```
using System.Collections.Generic;
using Newtonsoft.Json;
class Foo
{
// no clue what b+o look like from the question; modify to suit
public int[] b { get; set; }
public string[] o { get; set; }
}
static class P
{
static void Main()
{
var json = @"{""23521952"": {""b"": [], ""o"": []}, ""23521953"": {""b"": [], ""o"": []}}";
var obj = JsonConvert.DeserializeObject>(json);
foreach(var pair in obj)
{
System.Console.WriteLine($"{pair.Key}, {pair.Value}");
}
}
}
```
Upvotes: 4 [selected_answer]<issue_comment>username_2: You can use anonymous type deserialization for your data like this, without creating classes for properties of JSON. Hope it works.
```
var finalResult=JsonConvert.DeserializeAnonymousType(
json_str, // input
new
{
Id=
{
new
{
b=new[], o=new[]
}
}
}
);
foreach(var id in finalResult.Id)
{
console.write(id); // gives ids like 23521952
console.write(id.b[0]) // gives first elemnt in 'b' array
}
```
Upvotes: 1
|
2018/03/15
| 840 | 2,459 |
<issue_start>username_0: In [this tutorial](http://thegrantlab.org/bio3d/tutorials/trajectory-analysis), there is a command `pymol.dccm(cij, pdb, type="launch")`. But I was told
```
> pymol.dccm(cij, pdb, type="launch")
Error in pymol.dccm(cij, pdb, type = "launch") :
Launching external program failed
make sure 'C:/python27/PyMOL/pymol.exe' is in your search path
In addition: Warning message:
running command 'C:/python27/PyMOL/pymol.exe -cq' had status 127
```
I already have `pymol` installed on my PC. Can I ask how to add another search path to R?
Now I think `pymol` is a sub-package in `bio3d`. But I already installed `bio3d` and other commands can work (e.g. `pdb <- read.pdb()`). But why the `pymol` command could not work?
---
I tried
`> .libPaths("path/to/pymol2/")`
`> .libPaths("path/to/pymol2/PyMOL")`
```
> .libPaths("path/to/pymol2/PyMOL/PyMOLWin.exe")
> pymol.dccm(cij, pdb, type="launch")
Error in pymol.dccm(cij, pdb, type = "launch") :
Launching external program failed
make sure 'C:/python27/PyMOL/pymol.exe' is in your search path
In addition: Warning message:
running command 'C:/python27/PyMOL/pymol.exe -cq' had status 127
> PyMOLWin.dccm(cij, pdb, type="launch")
Error: could not find function "PyMOLWin.dccm"
```
So the `.libPaths` did not return error. But `pymol.dccm` and `PyMOLWin.dccm` did not work.
---
I also tried to install `pymol` package in `R`
```
> install.packages("pymol")
Warning in install.packages :
package ‘pymol’ is not available (for R version 3.2.2)
```<issue_comment>username_1: [`.libPaths("path/to/package/library")`](https://www.rdocumentation.org/packages/base/versions/3.4.3/topics/libPaths) probably does what you need.
>
> .libPaths gets/sets the library trees within which packages are looked for.
>
>
>
Set the path to the parent directory of the directory with the package name rather than the package directory itself.
Upvotes: -1 <issue_comment>username_2: There's a mistake in the tutorial command itself. The correct syntax for dccm is
```
pymol(cij, pdb, type="launch",exefile="C:/Program Files/pymol")
```
where `exefile = file path` to the ‘PYMOL’ program on your system (i.e. how is ‘PYMOL’ invoked). If NULL, use OS-dependent default path to the program.
Upvotes: 0 <issue_comment>username_3: Try the following code, it worked perfectly for me:
```
pymol(cm, pdb.open, type="launch", exefile="%userprofile%/PyMOL/PyMOLWin.exe")
```
Upvotes: 0
|
2018/03/15
| 425 | 1,690 |
<issue_start>username_0: How much time does a `Thread` need to stop/disappear after its `interrupt()` method has been called?
Consider the following code:
```
public class MyThread {
public boolean flag = true;
public void run() {
while(flag) {
doSomething();
Thread.sleep(20);
}
}
}
void foo() {
MyThread t = new MyThread();
t.start();
Thread.sleep(100);
t.flag = false;
t.interrupt();
}
```
Does the assignment `t.flag = false;` have any effect? In other words, can a thread exit its `run()` method and terminate "normally" before it is interrupted?
[similar question](https://stackoverflow.com/a/130320/5138179)<issue_comment>username_1: For sharing data one needs `volatile`. Better would be to catch the InterruptedException.
```
public class MyThread {
public volatile boolean flag = true;
public void run() {
try {
while(flag) {
doSomething();
Thread.sleep(20);
}
} catch (InterruptedException ie) {
...
}
}
}
```
Upvotes: 2 <issue_comment>username_2: Check agains isInterrupted, anhd throw it again since when it returns true it consumes the message.
```
public class MyThread {
public void run() {
try {
while(!Thread.isInterrupted()) {
doSomething();
Thread.sleep(20);
}
} catch (InterruptedException ie) {
Thread.interrupt();
}
}
}
```
Making the flag unecessary.
If you want to use the flag and finish the code gracefully you don't need to use interrupt and catch the Exception.
Upvotes: 0
|
2018/03/15
| 568 | 1,995 |
<issue_start>username_0: I am having some problems with my program.
I am creating a script which displays all the properties in the properties table in our database and one row in the table should show available or sold. it should check if the `puserId` is null or 1 or more
```
//Execute the Query
$records = mysqli_query($con, $sql);
while($row = mysqli_fetch_array($records)){
echo "|";
echo " ".$row['propertyId']." |";
echo " ".$row['propNum']." |";
echo " ".$row['AptNum']." |";
echo " ".$row['street']." |";
if (empty(['puserId'])) {
$status = 'Available';
}else {
$status = 'Sold';
}
echo " ".$status." |";
```
When i use this it shows that all properties are sold, also the ones wich have a `puserId` of `null`. It doesn't give me any errors though.
Anybody knows how I should do this?
Thanks in advance :)<issue_comment>username_1: Replace:
```
if (empty(['puserId'])) {
```
with:
```
if (empty($row['puserId'])) {
```
in order to fix the missing variable `$row` typo (your actual code checks whether the array `['puserId']` is null, which is obviously false) . If your field is either null or numeric, you can also write that as:
```
if (is_null($row['puserId'])) {
```
but my advice is to avoid using empty/null values in table fields related to identifiers. Just give them a default value of 0 to make everything easier. At that point you could write your check as:
```
if ($row['puserId'] == 0) {
```
Upvotes: 2 [selected_answer]<issue_comment>username_2: Other than the `$row['puserId']` typo, `empty` doesn't really look useful there. It includes an `isset` check that is pointless considering you know that column exists. You aren't checking that any of the other columns are set before you use them. There's really no need to check that one either. Just evaluate it directly. `null` or `0` values will evaluate as `false` in the if condition..
```
if ($row['puserId']) {
$status = 'Sold';
} else {
$status = 'Available';
}
```
Upvotes: 0
|
2018/03/15
| 832 | 2,049 |
<issue_start>username_0: I have this problem. My dataset *a* contains one column badly formatted containing characters, letters and punctuation. I would like to separate column *Unit\_Wrong* in two columns *num* and *text*.
Here is dataset a:
```
a <- data.frame(Measure = c(10000, 2000, 10000, 15000, 40000, 0),
Unit_Wrong = c("10L","25.5mL","30.5 mL","40OUNCES","3X", "NO_SIZE"),
stringsAsFactors = FALSE)
```
My expected outcome is *b*:
```
b <- data.frame(Measure = c(10000, 2000, 10000, 15000, 40000, 0),
Unit_Wrong = c("10L","25.5mL","30.5 mL","40OUNCES","3X", "NO_SIZE"),
text = c("L", "mL", "ml", "OUNCES", "X", "NO_SIZE"),
num = c("10","25.5","30.5","40","3", ""),
stringsAsFactors = FALSE)
```
I tried with this but it doesn't work:
```
attempt <- a %>%
mutate(text = gsub("[[:digit:]]","", Unit_Wrong)) %>%
mutate(num = str_replace_all(Unit_Wrong, text, ""))
```
Can you help?<issue_comment>username_1: ```
a %>%
mutate(text = stringr::str_extract(Unit_Wrong,"[A-z]+$")) %>%
mutate(num = stringr::str_extract(Unit_Wrong,"(\\d\\.?)+") %>% as.numeric)
```
Output:
```
Measure Unit_Wrong text num
1 10 10L L 10
2 2000 25.5mL mL 25.5
3 10000 30.5 mL mL 30.5
4 15 40OUNCES OUNCES 40
5 40 3X X 3
6 0 NO_SIZE NO_SIZE
```
Note:
If you have special characters for unit like "µ" etc. You need to add those in
inside `[A-z]` like `[A-zµ]` and so on.
Upvotes: 2 <issue_comment>username_2: Here's an R base soluction using `gsub`
```
> text <- gsub("\\d*\\s*\\.*", "", a$Unit_Wrong)
> num <- as.numeric(gsub("\\s*[[A-Za-z]]*_*", "", a$Unit_Wrong))
> data.frame(a, text, num)
Measure Unit_Wrong text num
1 10000 10L L 10.0
2 2000 25.5mL mL 25.5
3 10000 30.5 mL mL 30.5
4 15000 40OUNCES OUNCES 40.0
5 40000 3X X 3.0
6 0 NO_SIZE NO_SIZE NA
```
Upvotes: 1
|
2018/03/15
| 692 | 2,165 |
<issue_start>username_0: I've just opened my project and I couldn't build it.
Here is my Gradle console output:
```
18:12:07.274 [ERROR] [org.gradle.internal.buildevents.BuildExceptionReporter]
18:12:07.274 [ERROR] [org.gradle.internal.buildevents.BuildExceptionReporter] FAILURE: Build failed with an exception.
18:12:07.274 [ERROR] [org.gradle.internal.buildevents.BuildExceptionReporter]
18:12:07.274 [ERROR] [org.gradle.internal.buildevents.BuildExceptionReporter] * What went wrong:
18:12:07.274 [ERROR] [org.gradle.internal.buildevents.BuildExceptionReporter] Execution failed for task ':app:mergeDebugResources'.
18:12:07.274 [ERROR] [org.gradle.internal.buildevents.BuildExceptionReporter] > Error: java.lang.RuntimeException: com.google.gson.stream.MalformedJsonException: Use JsonReader.setLenient(true) to accept malformed JSON at line 1 column 1 path $
18:12:07.274 [ERROR] [org.gradle.internal.buildevents.BuildExceptionReporter]
18:12:07.274 [ERROR] [org.gradle.internal.buildevents.BuildExceptionReporter] * Try:
18:12:07.274 [ERROR] [org.gradle.internal.buildevents.BuildExceptionReporter] Run with --stacktrace option to get the stack trace.
18:12:07.274 [ERROR] [org.gradle.internal.buildevents.BuildExceptionReporter]
18:12:07.274 [ERROR] [org.gradle.internal.buildevents.BuildExceptionReporter] * Get more help at https://help.gradle.org
18:12:07.274 [ERROR] [org.gradle.internal.buildevents.BuildResultLogger]
18:12:07.274 [ERROR] [org.gradle.internal.buildevents.BuildResultLogger] BUILD FAILED in 4s
```
Everything worked well when I worked on it yesterda. What should I do?
Thanks in advance!<issue_comment>username_1: What JSON file is giving the error? The file might have been corrupted.
If it is the JSON file provided by `Firebase` or `Crashlytics` or some other library, you can re-download and replace the file after cleaning your project..
Upvotes: 0 <issue_comment>username_2: In all seriousness, use Maven - Gradle is not backwards compatible with itself. Gradle developers do not understand that they have a contract with developers and vX is always breaking vX-1 or, if you must use Gradle, then use v5.6.4
Upvotes: -1
|
2018/03/15
| 524 | 2,070 |
<issue_start>username_0: I was successfully able to integrate Dialogflow chatbot as an APP in Slack and it is accessible to chat using the APP tab.
However, for it to respond to messages in channels by mentioning like `@bot hello`
I realized that I have to add the **app\_mention** event into **Subscribe to Bot Events** form.
That should work according to the documentation but it didn't in this case. so I started to wonder if that ***event*** isn't compatible with Dialogflow or if there was something missing in the documentation.
[](https://i.stack.imgur.com/a7cvN.png)
Please advise. Thanks!<issue_comment>username_1: **The `app_mention` event is not supported by Dialogflow.**
I reached out to both Slack's and Dialogflow's customer service after experiencing the same issues as you.
Slack checked on their side and even took a look at the logs for my bot user and saw that everything seemed to be sent to Dialogflow just fine.
Dialogflow on the other hand answered this:
>
> At this time, the Slack integration of Dialogflow is ideal only for
> direct message. Bot will respond to any messages with or without a
> mention.
>
>
> Unfortunately, we can’t disclose details about our releases until
> they’re live. We announce all new features in our Change Log:
> <https://dialogflow.com/support/change-log>. Stay tuned!
>
>
>
I just hope they'll add support for this some time soon. It's annoying not to have that feature, because now the bot user will either interfere with everything in a channel or only support direct messages. **It might help the more people take the time to contact Dialogflow support at <https://console.dialogflow.com/api-client/#/support> .**
Upvotes: 4 [selected_answer]<issue_comment>username_2: `app_mention` is now supported by dialogflow. All you need to do is subscribe to the following bot events - `app_mention` and `message.group` in Slack. And in dialogflow, under slack integration, uncheck `Process all messages` check box.
Upvotes: 2
|
2018/03/15
| 1,597 | 6,164 |
<issue_start>username_0: I have a c# application that works locally but it wont work if I publish my program on another machine. I get an error saying the server name cannot be found.
>
> An unhandled exception of type 'System.Data.SqlClient.SqlException'
> occurred in ProductionOrderQuery.exe
>
>
> Additional information: A network-related or instance-specific error
> occurred while establishing a connection to SQL Server. The server was
> not found or was not accessible. Verify that the instance name is
> correct and that SQL Server is configured to allow remote connections.
> (provider: SQL Network Interfaces, error: 26 - Error Locating
> Server/Instance Specified)
>
>
>
Both machines are on the same corporate domain network. it may be something to do with how I am specifying my server name? Do I have to simply write the server name or do i need the fully qualified domain name?
The connection string is
```
Server=hostname\\SQLEXPRESS;Database=mydb;Trusted_Connection=True;
```
I'm using SS auth with sa account.
Do I need to do anything on SSMS to get this to work?
Edit: So I've since followed guides on how to connect by IP Address to see if it was DNS screwing it up. I have configured named pipes and static ports and changed my connection string. Firewall is disabled Still can't connect remotely, everything works ok locally. I did manage to get one machine to connect remotely and that's it, very puzzling.
New connection string is Data Source =
`,1433; Network Library = DBMSSOCN;Initial Catalog = footfall; User ID = sa; Password=<PASSWORD>;connection timeout=0;`
I can ping the server from the machine I'm trying to connect from and nslookup reports correct hostname. I can run a trace from client to server fine. Firewall is disabled on server for testing.
Edit 2: When somebody signs in to a remote machine that isn't me, they cant connect. SO it seems there is still some kind of windows authentication going on here despite me specifically using the SA account. When I sign in, I can connect ok.
SOLUTION: Bit of a wild goose chase here but turns out I had used a data source wizard to populate a combobox in design view that was using a different connection string.
You cant just remove datasources either from VS which is very annoying. Took me a while to remove them.
I don't understand why though if the data sources are not bound to anything on the form, the app still insists on connecting to them at runtime.
Once I managed to remove the data sources, and not use the SA account (I created a new account with read write perms on my database) the application worked.
I dont understand why I cant add the sa account to my database (receive an error when I do so) but a custom account I can add fine.
So to summarise, in the end there were 2 issues, the fact I was using the SA account and the hidden connection string in one of the data sources linked to the combo box
Thanks.<issue_comment>username_1: Please go to the SQL Server Management Studio and click connect button to connect with database engine. After that copy server name and past it to place of server name in the connection string.
follow the image
[](https://i.stack.imgur.com/WYo4T.png)
Thanks
Upvotes: 0 <issue_comment>username_2: lets take your server name:
```
DESKTOP\SQLEXPRESS
```
then you can avoid your machine name in connection string like this.
```
Data Source=.\SQLEXPRESS
```
Thank You!
Upvotes: 0 <issue_comment>username_3: You can try this ...
Confirm that server authentication is in mixed mode, like in image bellow.
[](https://i.stack.imgur.com/9duek.png)
Confirm that sql server is allowing for remote connections:
[](https://i.stack.imgur.com/vweX2.png)
Confirm that sa user is in your database users list and/or is associated with your database dbo role. Just in case, also try to check if sa is db\_owner for that database.
I would continue using a default connection string:
data source=server\SQLEXPRESS;initial catalog=db;integrated security=false;user id=user;password=pwd;**Persist Security Info=True;**
The persist security is an important part when using user/password to authenticate.
Another thing is that in your question you are adding 2 slashes to the SQLEXPRESS instance, when it actually is just one.
Upvotes: 0 <issue_comment>username_4: I had the same issue sometime. This is what worked for me:
Check the settings in the SQL Server Configuration Manager
>
> "SQL Server Configuration Manager"
>
>
> Click on "SQL Server Network Configuration" and Click on
> "Protocols for Name"
>
>
> Right Click on "TCP/IP" (should be Enabled). Click on Properties
>
>
> On "IP Addresses" Tab go to the last entry "IP All"
>
>
> Enter "TCP Port" 1433.
>
>
> Now Restart "SQL Server .Name." using "services.msc" (winKey + r)
>
>
>
Upvotes: 0 <issue_comment>username_5: I guess this is a Firewall Issue. Disable any **antivirus** or firewall installed in the second machine and check out if you are able to connect.
Upvotes: 0 <issue_comment>username_6: Please check "SQL Server(MSSQLSERVER)" service on second machine it should be started.
You can type services.msc in run window and find "SQL Server(MSSQLSERVER)" service name and just make sure it is started.[enter image description here](https://i.stack.imgur.com/6pWcm.png)
Upvotes: 1 <issue_comment>username_7: Bit of a wild goose chase here but turns out I had used a data source wizard to populate a combobox in design view that was using a different connection string.
You cant just remove datasources either from VS which is very annoying. Took me a while to remove them.
I dont understand why though if the data sources are not bound to anything on the form, the app still insists on connecting to them at runtime.
Once I managed to remove the data sources, and not use the SA account (I created a new account with read write perms on my database) the application worked.
Upvotes: 1 [selected_answer]
|
2018/03/15
| 670 | 1,945 |
<issue_start>username_0: To give an example, let's assume I have a type `foo` of the following shape,
```
type foo = shape(
?'bar' => float,
...
);
```
Now if I try to access value of field `bar` in the following way,
```
do_something_with($rcvd['bar']);
```
where `$rcvd` is of `foo` type, it doesn't work as `bar` is an optional member and might not exist for the instance `$rcvd`.
So for this given example, the question would be - how to access the member `bar` of `$rcvd`?<issue_comment>username_1: Ok, found it: <https://docs.hhvm.com/hack/reference/class/HH.Shapes/idx/>
So the correct way is,
```
Shapes::idx($rcvd, 'bar');
```
Upvotes: 4 [selected_answer]<issue_comment>username_2: You can use the `Shapes::idx` method as you mentioned in your answer.
but you can also use null coalesce operator, which you might be more familiar with from other programming languages such as [PHP](https://wiki.php.net/rfc/isset_ternary) and [C#](https://learn.microsoft.com/en-us/dotnet/csharp/language-reference/operators/null-coalescing-operator).
see : <https://docs.hhvm.com/hack/expressions-and-operators/coalesce>
example :
```php
type Foo = shape(
'bar' => ?int,
?'baz' => int,
);
function qux(Foo $foo): int {
$default = 1;
// foo[bar] is null ? set default.
$bar = $foo['bar'] ?? $default;
// foo[baz] doesn't exist ? set default.
$baz = $foo['baz'] ?? $default;
return $baz + $bar;
}
<<__EntryPoint>>
async function main(): Awaitable {
echo qux(shape(
'bar' => null,
)); // 2
echo qux(shape(
'bar' => 0,
)); // 1
echo qux(shape(
'bar' => 1,
)); // 2
echo qux(shape(
'bar' => 4,
'baz' => 2,
)); // 6
echo qux(shape(
'bar' => null,
'baz' => 0
)); // 1
}
```
Hack also support null coalesce equal operator.
example :
```php
// foo[bar] is null ? set foo[bar] to default
$foo['bar'] ??= $default;
// foo[baz] is missing ? set foo[baz] to default
$foo['baz'] ??= $default;
```
Upvotes: 1
|
2018/03/15
| 2,191 | 7,534 |
<issue_start>username_0: I've read through different resources but feel like I'm missing something. I'm explicitly calling the destructor when a variable (numItems) reaches zero. I have a loop that prints all the class objects (students), but while any object that had the destructor called on it has a blank first and last name, the ID and numItems variables still exist. Am I misunderstanding how the destructor works? Why would it delete some but not all of the member attributes?
Also, the "items" are stored in a dynamic array. I can set and access them as long as the array is public. But even using setters, the program crashes if I attempt to populate a private array.
Header:
```
#ifndef STUDENT_H
#define STUDENT_H
#include
#include
#include
#include
#define ARRAY\_MAX 15
using namespace std;
class Student
{
private:
string firstName, lastName;
unsigned int ID, numItems = 0;
typedef string\* StringPtr;
//StringPtr items;
public:
int capacity = 15;
string \*items = new string[capacity];
Student();
Student(const unsigned int id, const string fName, const string lName);
string getfName() const;
string getlName() const;
unsigned int getnumItems() const;
string getItem(int num);
unsigned int getID() const;
void setfName(string fname);
void setlName(string lname);
void setID(unsigned int id);
void setItem(string str, int num);
int CheckoutCount();
bool CheckOut(const string& item);
bool CheckIn(const string& item);
bool HasCheckedOut(const string& item);
void Clear();
~Student();
//const Student operator+(string rhs);
//void operator+=(string rhs);
//bool operator==(Student rhs);
friend istream& operator>>(istream& input, Student& stu);
friend ostream& operator<<(ostream& output, const Student& stu);
};
#endif // STUDENT\_H
```
Definitions:
```
#include "student.h"
using namespace std;
Student::Student()
{
}
Student::Student(const unsigned int id, const string fName, const string lName)
{
firstName = fName;
lastName = lName;
ID = id;
}
string Student::getfName() const
{
return firstName;
}
string Student::getlName() const
{
return lastName;
}
unsigned int Student::getnumItems() const
{
return numItems;
}
string Student::getItem(int num)
{
return items[num];
}
unsigned int Student::getID() const
{
return ID;
}
void Student::setfName(string fname)
{
firstName = fname;
}
void Student::setlName(string lname)
{
lastName = lname;
}
void Student::setID(unsigned int id)
{
if ((id >= 1000) && (id <= 100000))
{
ID = id;
}
else
{
cout << "Attempted ID for " << firstName << " " << lastName << " is invalid. Must be between 1000 and 100,000." << endl;
}
}
void Student::setItem(string str, int num)
{
items[num] = str;
}
int Student::CheckoutCount()
{
return numItems;
}
bool Student::CheckOut(const string & item)
{
if (this->HasCheckedOut(item) == true)
{
return false; // already found item in list, CheckOut failed...
}
else
{
items[numItems] = item;
numItems++;
return true; // CheckOut successful
}
}
bool Student::CheckIn(const string & item)
{
for (int i = 0; i < numItems; i++)
{
if (items[i] == item)
{
for (; i < numItems - 1; i++)
{
// Assign the next element to current location.
items[i] = items[i + 1];
}
// Remove the last element as it has been moved to previous index.
items[numItems - 1] = "";
numItems = numItems - 1;
if (numItems == 0)
{
this->~Student();
}
return true;
}
}
return false;
}
bool Student::HasCheckedOut(const string & item)
{
string *end = items + numItems;
string *result = find(items, end, item);
if (result != end)
{
return true; // found value at "result" pointer location...
}
else
return false;
}
void Student::Clear()
{
ID = 0;
firstName = "";
lastName = "";
delete[] items;
}
Student::~Student()
{
}
istream & operator>>(istream & input, Student & stu)
{
string temp;
input >> stu.ID >> stu.firstName >> stu.lastName >> stu.numItems;
int loopnum = stu.numItems;
if (loopnum > 0)
{
for (int i = 0; i < loopnum; i++)
{
input >> temp;
stu.setItem(temp, i);
}
}
return input;
}
ostream & operator<<(ostream & output, const Student & stu)
{
string s = stu.firstName + " " + stu.lastName;
output << setw(8) << stu.ID << setw(16) << s << setw(8) << stu.numItems;
int loopnum = stu.numItems;
if (loopnum > 0)
{
for (int i = 0; i < loopnum; i++)
{
output << stu.items[i] << " ";
}
}
output << endl << endl;
return output;
}
```
Main:
```
#include
#include
#include
#include
#include "student.h"
using namespace std;
void fcheck(ifstream &mystream);
int main()
{
ifstream sin("students.txt"); // File input/output variables
ifstream fin("checkins.txt");
ifstream chin("checkouts.txt");
ofstream sout("UpdatedStudentsC.txt");
Student stu1, stu2;
fcheck(sin);
fcheck(fin);
fcheck(chin);
typedef Student\* StuPtr;
StuPtr studentList;
int stud\_capacity = 50;
studentList = new Student[stud\_capacity];
int num\_studs = 0;
sout << std::setiosflags(std::ios::left); // justify output to format properly.
while (sin.good()) // While there's data in the file, do stuff.
{
sin >> stu1;
stu1.CheckIn("Towel");
stu1.CheckIn("Locker");
if (stu1.getnumItems() == 0)
{
stu1.~Student();
}
studentList[num\_studs] = stu1;
num\_studs++;
sout << stu1;
}
for (int i = 0; i < 16; i++)
{
cout << studentList[i].getID() << " " << studentList[i].getfName() << " " << studentList[i].getnumItems() << endl;
}
system("pause");
// Close files
fin.close();
chin.close();
sout.close();
sin.close();
// Quit without error
return 0;
}
void fcheck(ifstream &mystream)
{
if (!mystream) // If we can't find the input file, quit with error message.
{
cout << "file not opened!" << endl;
system("pause");
exit(1);
}
}
```<issue_comment>username_1: A destructor deletes nothing if you don't tell it to (just like now, your destructor is empty). A destructor is meant to perform some cleanup before the object is deallocated, but it does not deallocate.
Destructors are implicitly invoked
* when a stack object goes out of scope,
* when a heap object is deallocated with `delete`.
Upvotes: 1 <issue_comment>username_2: >
> while any object that had the destructor called on it has a blank first and last name, the ID and numItems variables still exist.
>
>
>
None of the member variables of the destroyed object exist anymore.
There is no way to inspect in C++ whether an object has been destroyed or not. When you try to access the members of the destroyed object, the program will have undefined behaviour. One possible behaviour might be something that you might expect from a destroyed object, and another possible behaviour might be something that you didn't expect.
---
You call the destructor of a local automatic variable `stu1`, but you don't construct a new object in place. When the variable goes out of scope, it is destroyed "again". This also causes the program to have undefined behaviour. Needing to call a destructor explicitly is very rare case, and this is not such case.
Upvotes: 0
|
2018/03/15
| 1,689 | 5,079 |
<issue_start>username_0: I wrote this program:
```
// splits a sentence into words
#include
#include
#include
#include "spacefunc.h"
using std::string;
using std::cout;
using std::endl;
using std::find\_if;
int main() {
typedef string::const\_iterator iter;
string input = "This is me";
iter i = input.begin();
while (i != input.end()) {
iter j;
i = find\_if(i, input.end(), notspace);
j = find\_if(i, input.end(), is\_space);
cout << string(i, j) << endl;
i = j;
}
return 0;
}
```
It fails with following errors:
```
word_splitter.cpp: In function ‘int main()’:
word_splitter.cpp:21:45: error: no matching function for call to ‘find_if(iter&, std::__cxx11::basic_string::iterator, bool (&)(char))’
i = find\_if(i, input.end(), notspace);
^
In file included from /usr/include/c++/5/algorithm:62:0,
from word\_splitter.cpp:4:
/usr/include/c++/5/bits/stl\_algo.h:3806:5: note: candidate: template \_IIter std::find\_if(\_IIter, \_IIter, \_Predicate)
find\_if(\_InputIterator \_\_first, \_InputIterator \_\_last,
^
/usr/include/c++/5/bits/stl\_algo.h:3806:5: note: template argument deduction/substitution failed:
word\_splitter.cpp:21:45: note: deduced conflicting types for parameter ‘\_IIter’ (‘\_\_gnu\_cxx::\_\_normal\_iterator >’ and ‘\_\_gnu\_cxx::\_\_normal\_iterator >’)
i = find\_if(i, input.end(), notspace);
^
word\_splitter.cpp:22:45: error: no matching function for call to ‘find\_if(iter&, std::\_\_cxx11::basic\_string::iterator, bool (&)(char))’
j = find\_if(i, input.end(), is\_space);
^
In file included from /usr/include/c++/5/algorithm:62:0,
from word\_splitter.cpp:4:
/usr/include/c++/5/bits/stl\_algo.h:3806:5: note: candidate: template \_IIter std::find\_if(\_IIter, \_IIter, \_Predicate)
find\_if(\_InputIterator \_\_first, \_InputIterator \_\_last,
^
/usr/include/c++/5/bits/stl\_algo.h:3806:5: note: template argument deduction/substitution failed:
word\_splitter.cpp:22:45: note: deduced conflicting types for parameter ‘\_IIter’ (‘\_\_gnu\_cxx::\_\_normal\_iterator >’ and ‘\_\_gnu\_cxx::\_\_normal\_iterator >’)
j = find\_if(i, input.end(), is\_space);
```
---
If I change `i, j` to `iterator` type, it compiles.
What am I doing wrong, as I am pretty sure that `find_if` accepts `const_iterator` type arguments?
**EDIT**
If this is the issue of `i` being a `const_iterator` and input.end() being an `iterator`, why does the following code work? This is from `Accelerated C++`.
```
vector < string > split(const string & str) {
typedef string::const_iterator iter;
vector < string > ret;
iter i = str.begin();
while (i != str.end()) {
// ignore leading blanks
i = find_if(i, str.end(), not_space);
// find end of next word
iter j = find_if(i, str.end(), space);
// copy the characters in [i, j)
if (i != str.end())
ret.push_back(string(i, j));
i = j;
}
return ret;
}
```<issue_comment>username_1: `find_if` accepts non-const iterators and const\_iterators; however, the iterators that you pass to it have to be the same type. The problem here is that `input.end()` returns a non-const iterator, because `input` is not a const object. That's not the same type as the const iterator 'i'. To get a const end iterator for a non-const object (or for a const object, but that's a distraction), use `input.cend()`.
Upvotes: 3 <issue_comment>username_2: [`find_if`](http://en.cppreference.com/w/cpp/algorithm/find)'s signature looks like this:
>
>
> ```
> template
> InputIt find\_if(InputIt first, InputIt last, UnaryPredicate p);
>
> ```
>
>
It expects its first 2 arguments to be of the same type. With `find_if(i, input.end(), notspace)`, if `i` is a `string::const_iterator`, it's not the same type as [`input.end()`](http://en.cppreference.com/w/cpp/string/basic_string/end) which would be a `string::iterator` since `input` is non-const. If `input` was a `const std::string`, `input.end()` would return a `string::const_iterator`
---
Post C++11, it's uncommon to see a `typedef string::const_iterator iter`. Use of `auto` is more common:
```
string input = "This is me";
auto i = input.begin();
```
Upvotes: 4 [selected_answer]<issue_comment>username_3: Others have explained why you experienced a compilation error. In my view, future mistakes can be avoided if you express more directly your "ultimate" intention, and let automatic type deduction take care of the rest. For example: if you want your input to be immutable, use `const` to mark it as such, and let `auto` take care of the rest.
```
#include
#include
#include
#include
bool is\_space(char c) { return std::isspace(c); }
bool is\_not\_space(char c) { return not is\_space(c); }
int main() {
const std::string input{"This is me"};
for (auto it = input.begin(); it != input.end();) {
it = std::find\_if(it, input.end(), is\_not\_space);
auto it\_end = std::find\_if(it, input.end(), is\_space);
std::cout << std::string(it, it\_end) << "\n";
it = it\_end;
}
}
```
Sample run:
```
$ clang++ example.cpp -std=c++17 -Wall -Wextra
$ ./a.out
This
is
me
```
Upvotes: 3
|
2018/03/15
| 760 | 2,216 |
<issue_start>username_0: I am building a file uploader,
using [Vue Dropzone](https://rowanwins.github.io/vue-dropzone/) on the frontend,
and custom PHP on the backend.
My frontend script is sending a request with following headers:
>
> **Request headers**
>
> POST /jobimport HTTP/1.1
>
> Host: myurl
>
> Connection: keep-alive
>
> Content-Length: 765309
>
> Origin: <http://localhost:8080>
>
> User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10\_13\_3) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/65.0.3325.162 Safari/537.36
>
> Content-Type: multipart/form-data; boundary=----
>
> WebKitFormBoundaryhaaAoTz2J5iipi3M
>
> Accept: application/json
>
> Cache-Control: no-cache
>
> X-Requested-With: XMLHttpRequest
>
> Referer: <http://localhost:8080/import>
>
> Accept-Encoding: gzip, deflate, br
>
> Accept-Language: en-US,en;q=0.9,nl;q=0.8,de;q=0.7,fr;q=0.6
>
>
>
In my .htaccess file on the backend, I have added the following lines:
```
Header set Access-Control-Allow-Origin "*"
Header set Access-Control-Allow-Headers "*"
```
When using Chrome, the file uploads without problems.
When looking at the request headers, I even see the following:
>
> **Response headers**
>
> (...)
>
> Access-Control-Allow-Headers: \*
>
> Access-Control-Allow-Origin: \*
>
> (...)
>
>
>
However, when using Safari, the upload fails, and I get the following error:
>
> Failed to load resource: Request header field Cache-Control is not allowed by Access-Control-Allow-Headers.
>
> XMLHttpRequest cannot load <https://myurl>. Request header field Cache-Control is not allowed by Access-Control-Allow-Headers.
>
>
>
I don't understand how this works in Chrome, but not in Safari.<issue_comment>username_1: The comments by @sideshowbarker and @roryhewitt are correct,
Safari indeed doesn't support a wildcard `*` for `Access-Control-Allow-Headers`.
I listed all headers explicitly instead of using a wildcard, and now it works perfectly.
Upvotes: 5 [selected_answer]<issue_comment>username_2: If parent domain URL is in https, you must call the ajax URL also in https. If not, not use https. Hope this will help
Upvotes: 1
|
2018/03/15
| 233 | 977 |
<issue_start>username_0: I want to attach click event handlers to all the elements in a page that I don't control (e.g.- stackoverflow homepage). Capturing clicks by assigning click handlers to every element is a tedious task. So I decided to do this instead:
```
window.addEventListener("click", function (e) {
//code
});
```
Is it guaranteed that my event handler will finish execution before something happens that makes the element inaccessible, for example element's own click handler deleting its parent from DOM?<issue_comment>username_1: The comments by @sideshowbarker and @roryhewitt are correct,
Safari indeed doesn't support a wildcard `*` for `Access-Control-Allow-Headers`.
I listed all headers explicitly instead of using a wildcard, and now it works perfectly.
Upvotes: 5 [selected_answer]<issue_comment>username_2: If parent domain URL is in https, you must call the ajax URL also in https. If not, not use https. Hope this will help
Upvotes: 1
|
2018/03/15
| 601 | 2,375 |
<issue_start>username_0: I am using [MS Graph example](https://github.com/microsoftgraph/aspnet-connect-rest-sample) to deal with groups. I have modified my code from the mentioned link. I am able to all my operations with admin account but not with normal user. I am using permission scopes as `User.Read Mail.Send Files.ReadWrite Group.ReadWrite.All`. Once run the app with admin account and grant the permissions those permissions are not reflecting to normal user. When normal user signin it is again asking for admin consent. What wrong I am doing?<issue_comment>username_1: There are two types of "consent" in Azure AD land:
* **User Consent**: Asks a User to consent to the app doing x,y,z on their behalf.
* **Admin Consent**: Asks an Admin to consent to "Non-Admin" users executing User Consent.
In other words, "Admin Consent" is *not* the same as an Admin executing "User Consent". All that does is consent to your application operation on behalf of that Admin, it doesn't affect any other users.
What you need here is to execute an Admin Consent operation. This uses a slightly different URL then you currently use to sign in to your app. I'd suggest taking a look at these articles that cover how this works (discloser: I am the author):
* [Understanding the difference between User and Admin Consent](http://massivescale.com/microsoft-v2-endpoint-user-vs-admin/)
* [Obtaining Administrative Consent for your application](http://massivescale.com/microsoft-v2-endpoint-admin-consent/)
Upvotes: 1 <issue_comment>username_2: The issues is happening I think because of sequence. Something wrong with the permissions I had with the app, may be while approving as admin.
So I have deleted the app from app registrations completely and created a new one. Now I have given scopes as `Directory.ReadWrite.All Group.ReadWrite.All`.
Now before opening the app with any other user i have open below url <https://login.microsoftonline.com/common/adminconsent?client_id=a0c2773f-4701-4d9b-b197-c4ca6cb531f9&state=12345&redirect_uri=https://localhost:44329/> for admin consent.
I have approved the necessary permissions and I have opened the app with normal users then it started working.
**Moral:** Sequence of accepting permissions is very important. First always approve the admin consent permissions then remaining things works well
Upvotes: -1 [selected_answer]
|
2018/03/15
| 728 | 2,714 |
<issue_start>username_0: I have a python script that I use to load multiple excel workbooks (1 sheet in each workbook) into a list and then perform a sort on the data.
I would like to import the same workbooks but before loading into the list i would like to select certain rows based on content of a column.
e.g.
[](https://i.stack.imgur.com/VR3Rh.png)
My current script loads all data in, for example I would like to only load in rows were 'A' appears in Column 3.
My current script looks like this:
```
import pandas as pd
import uuid
import xlrd
params = [r'C:\Users\Desktop\Input\1.xlsx',
r'C:\Users\Desktop\Input\2.xlsx',
]
data = []
for param in params:
data.append({'file':param,
'id':str(uuid.uuid4()),
'df':pd.read_excel(param),
})
```<issue_comment>username_1: There are two types of "consent" in Azure AD land:
* **User Consent**: Asks a User to consent to the app doing x,y,z on their behalf.
* **Admin Consent**: Asks an Admin to consent to "Non-Admin" users executing User Consent.
In other words, "Admin Consent" is *not* the same as an Admin executing "User Consent". All that does is consent to your application operation on behalf of that Admin, it doesn't affect any other users.
What you need here is to execute an Admin Consent operation. This uses a slightly different URL then you currently use to sign in to your app. I'd suggest taking a look at these articles that cover how this works (discloser: I am the author):
* [Understanding the difference between User and Admin Consent](http://massivescale.com/microsoft-v2-endpoint-user-vs-admin/)
* [Obtaining Administrative Consent for your application](http://massivescale.com/microsoft-v2-endpoint-admin-consent/)
Upvotes: 1 <issue_comment>username_2: The issues is happening I think because of sequence. Something wrong with the permissions I had with the app, may be while approving as admin.
So I have deleted the app from app registrations completely and created a new one. Now I have given scopes as `Directory.ReadWrite.All Group.ReadWrite.All`.
Now before opening the app with any other user i have open below url <https://login.microsoftonline.com/common/adminconsent?client_id=a0c2773f-4701-4d9b-b197-c4ca6cb531f9&state=12345&redirect_uri=https://localhost:44329/> for admin consent.
I have approved the necessary permissions and I have opened the app with normal users then it started working.
**Moral:** Sequence of accepting permissions is very important. First always approve the admin consent permissions then remaining things works well
Upvotes: -1 [selected_answer]
|
2018/03/15
| 779 | 2,578 |
<issue_start>username_0: I have an array of objects
```
[
{"first_name" :"John", "last_name": "Smith", "course": "course 1"},
{"first_name" :"Joe", "last_name": "Doe", "course": "course 2"},
{"first_name" :"John", "last_name": "Smith", "course": "course 3"}
]
```
How can I grab together unique `first_name` and `last_name` to get:
```
[
{"first_name" :"John", "last_name": "Smith", "course": "course 1"},
{"first_name" :"Joe", "last_name": "Doe", "course": "course 2"}
]
```
in a new array?<issue_comment>username_1: use lodash [Lodash](https://lodash.com/) in its functions *.filter(collection, [predicate=*.identity]) or *.find(collection, [predicate=*.identity], [fromIndex=0])
Upvotes: 0 <issue_comment>username_2: You can try following using [Array.filter](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Array/filter)
```js
var arr = [
{"first_name" :"John", "last_name": "Smith", "course": "course 1"},
{"first_name" :"Joe", "last_name": "Doe", "course": "course 2"},
{"first_name" :"John", "last_name": "Smith", "course": "course 3"}
];
var map = {}; // create a map that stores unique combination of fields (first_name and last_name)
arr = arr.filter(function(item){
if(!map[item.first_name + "_" + item.last_name]) {
// store the first occurrence of combination in map and ignore others
map[item.first_name + "_" + item.last_name] = "first_unique_record";
return true;
}
return false;
});
console.log(arr);
```
Upvotes: 1 <issue_comment>username_3: I'm not sure if that is the best code but I solve my problem this way:
```
var dataJson = [
{"first_name" :"John", "last_name": "Smith", "course": "course 1"},
{"first_name" :"Joe", "last_name": "Doe", "course": "course 2"},
{"first_name" :"John", "last_name": "Smith", "course": "course 3"}
];
var tmparr = [];
var newJson = [];
var duplicatesJson = [];
dataJson.forEach((value, index) => {
let a = value.first_name + value.last_name
if(!(this.tmparr.includes(a))){
console.log('unique');
this.tmparr.push(a);
this.newJson.push(value);
} else {
console.log('duplicate')
this.duplicatesJson.push(value);
}
})
console.log(this.newJson);
console.log(this.duplicatesJson)
```
Upvotes: 1 [selected_answer]
|