date (stringlengths 10–10) | nb_tokens (int64 60–629k) | text_size (int64 234–1.02M) | content (stringlengths 234–1.02M)
---|---|---|---
2018/03/16
| 1,185 | 4,279 |
<issue_start>username_0: I have an HTML form that I'm creating with [php](/questions/tagged/php "show questions tagged 'php'"). It contains three fields.
The second field should be filled automatically when the first one is filled. An HTTP request is triggered over an API; the request works fine and returns the article number. I have already tested it.
The problem is that it has to run the request and fill the field whenever the other field is filled.
I have tried it with jQuery:<issue_comment>username_1: >
> why does calling push inside the read method do nothing? The only thing that works for me is just calling readable.push() elsewhere.
>
>
>
I think it's because you are not consuming it; you need to pipe it to a writable stream (e.g. stdout) or just consume it through a `data` event:
```
const { Readable } = require("stream");
let count = 0;
const readableStream = new Readable({
read(size) {
this.push('foo');
if (count === 5) this.push(null);
count++;
}
});
// piping
readableStream.pipe(process.stdout)
// through the data event
readableStream.on('data', (chunk) => {
console.log(chunk.toString());
});
```
Both of them should print `foo` six times (they are slightly different, though). Which one you should use depends on what you are trying to accomplish.
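As a hedged aside on that difference: `pipe()` respects backpressure from the destination, while a `'data'` listener switches the stream into flowing mode and leaves pacing to you. A minimal sketch (assuming Node 12+ for `Readable.from()`):
```ts
import { Readable, Writable } from "stream";

// A deliberately slow destination, to make backpressure visible
const slow = new Writable({
  write(chunk, _encoding, callback) {
    setTimeout(() => {
      console.log(String(chunk));
      callback(); // signal that we are ready for the next chunk
    }, 100);
  },
});

// pipe() pauses the readable whenever `slow`'s buffer fills up,
// whereas stream.on('data', cb) would deliver chunks as fast as
// they are produced, with no backpressure handling.
Readable.from(["foo", "foo", "foo"]).pipe(slow);
```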
>
> Furthermore, these articles say you don't need to implement the read method:
>
>
>
You might not need it, this should work:
```
const { Readable } = require("stream");
const readableStream = new Readable();
for (let i = 0; i <= 5; i++) {
readableStream.push('foo');
}
readableStream.push(null);
readableStream.pipe(process.stdout)
```
In this case you can't capture it through the `data` event. Also, this way is not very useful and not efficient, I'd say: we are just pushing all the data into the stream at once (if it's large, everything is going to be in memory) and then consuming it.
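If you do want on-demand generation without implementing `read` yourself, a minimal sketch of a lazier variant (an assumption on my part: Node 12+, where `Readable.from()` accepts any iterable):
```ts
import { Readable } from "stream";

// The generator yields a chunk only when the consumer asks for one,
// so a large data set is never held in memory all at once.
function* generate(): Generator<string> {
  for (let i = 0; i < 5; i++) {
    yield "foo";
  }
}

Readable.from(generate()).pipe(process.stdout);
```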
Upvotes: 5 [selected_answer]<issue_comment>username_2: Implement the `_read` method after your ReadableStream's initialization:
```
import {Readable} from "stream";
this.readableStream = new Readable();
this.readableStream._read = function () {}; // no-op _read; data is supplied via push() elsewhere
```
Upvotes: 1 <issue_comment>username_3: From the documentation:
**readable.\_read:**
"When readable.\_read() is called, if data is available from the resource, the implementation should begin pushing that data into the read queue using the this.push(dataChunk) method. [link](https://nodejs.org/api/stream.html#stream_readable_read_size_1)"
**readable.push:**
"The readable.push() method is intended be called only by Readable implementers, and only from within the readable.\_read() method. [link](https://nodejs.org/api/stream.html#stream_readable_push_chunk_encoding)"
Upvotes: 3 <issue_comment>username_4: readableStream is like a pool:
* `.push(data)` is like pumping water into the pool.
* `.pipe(destination)` is like connecting the pool to a pipe that pumps the water somewhere else.
* `_read(size)` runs as the pump and controls how much water flows and when the data ends.
`fs.createReadStream()` creates a read stream whose `_read()` is already implemented to push file data and to end at end-of-file.
`_read(size)` fires automatically when the pool is attached to a pipe. Thus, if you force-call this function without connecting a destination, it pumps to nowhere, and it can corrupt the internal state used by `_read()` (for example, a cursor may move to the wrong place).
The `read` function must be created inside `new Stream.Readable()`: it is actually a function inside an options object. It is not `readableStream.read()`, and implementing `readableStream.read = function(size){...}` will not work.
An easy implementation to illustrate this:
```
// Options object whose `read` method becomes the stream's _read()
var Reader = new Object();
Reader.read = function (size) {
  if (this.i == null) { this.i = 1; } else { this.i++; }
  this.push("abc");
  if (this.i > 7) { this.push(null); } // stop after the 8th chunk
};
const Stream = require('stream');
const renderStream = new Stream.Readable(Reader);
renderStream.pipe(process.stdout)
```
You can use this to render whatever stream data you want and POST it to another server.
POST stream data with Axios:
```
require('axios')({
method: 'POST',
url: 'http://127.0.0.1:3000',
headers: {'Content-Length': 1000000000000},
data: renderStream
});
```
Upvotes: 1
|
2018/03/16
| 1,184 | 4,128 |
<issue_start>username_0: I have a query:
```
DELETE FROM invoice
WHERE inv_date < 2018-03-31 - INTERVAL(72) MONTH TO MONTH)
```
When I execute it, I get an error stating:
**1260: It is not possible to convert between the specified types.**
What's wrong with "2018-03-31"? How should I write it so that I can execute the query and get results?
|
2018/03/16
| 1,194 | 4,167 |
<issue_start>username_0: ```html
| | |
| --- | --- |
| 1.I have here some long title | Here i have some text |
|
|
```
The problem is that I want to have vertical text, and I used **transform: rotate(270deg)**, but it makes the width of **`|`** not work properly. I want to have a small one, like:
[Look here I have a picture](https://i.stack.imgur.com/0uaQg.jpg)
|
2018/03/16
| 667 | 2,531 |
<issue_start>username_0: I've been searching for the solution to this error since yesterday. I tried everything: I deleted my .idea and .gradle files, invalidated caches and restarted Android Studio, and even re-installed Android Studio, but to no avail. I am still getting this error. I didn't touch my project for about a month; when I came back, I updated Android Studio, tried to run my project, and was greeted with this error:
```
java.lang.RuntimeException: java.lang.RuntimeException: com.android.builder.dexing.DexArchiveMergerException: Error while merging dex archives:
Program type already present: android.support.v13.view.DragStartHelper$1
```
I had multidex enabled in my Gradle config. A month ago I didn't have this problem; I got it right after updating Android Studio. Does anyone have a solution? All the libraries I am using should be fine, as I was able to release my app without this error a month ago.<issue_comment>username_1: Okay. So after careful reading from others who had this problem, I needed to exclude support-v13 in Gradle, following the link [here](https://www.linkedin.com/pulse/how-find-dependencies-particular-dependency-gradle-hesamedin-kamalan-1/)
```
configurations.all {
exclude group: 'com.android.support', module: 'support-v13'
}
```
Upvotes: 6 [selected_answer]<issue_comment>username_2: I couldn't comment because of my reputation, but the answer above worked fine. The one note I'd add is where to put it.
```
dependencies {
implementation fileTree(dir: 'libs', include: ['*.jar'])
...
configurations.all {
exclude group: 'com.android.support', module: 'support-v13'
}
...
}
```
Further details about how to exclude dependencies [here](https://www.linkedin.com/pulse/how-find-dependencies-particular-dependency-gradle-hesamedin-kamalan-1/)
Upvotes: 4 <issue_comment>username_3: I also faced the same problem. In my case:
1. I removed
`import android.support.v13.app.ActivityCompat`
from my project, changed it to
`import android.support.v4.app.ActivityCompat`
and rewrote my code against support-v4.
2. Next, I added to my project:
```
android {
compileSdkVersion 30
buildToolsVersion "30.0.3"
defaultConfig {
applicationId "ru.android.company.myproject"
minSdkVersion 21
targetSdkVersion 30
...
}
...
configurations.all {
exclude group: 'com.android.support', module: 'support-v13'
}
...
}
```
This solution works for me.
Upvotes: 1
|
2018/03/16
| 487 | 1,672 |
<issue_start>username_0: I installed `minikube` with the below command:
```
curl -Lo minikube https://storage.googleapis.com/minikube/releases/latest/minikube-linux-amd64 && chmod +x minikube && sudo mv minikube /usr/local/bin/
```
Then, I start `minikube` cluster using
```
minikube start --vm-driver=none
```
When I try to access the dashboard I see the error
**minikube dashboard**
>
> Could not find finalized endpoint being pointed to by kubernetes-dashboard: Error validating service: Error getting service kubernetes-dashboard: Get <https://10.0.2.15:8443/api/v1/namespaces/kube-system/services/kubernetes-dashboard>: net/http: TLS handshake timeout
>
>
>
I set the proxy using
```
set NO_PROXY=localhost,127.0.0.1,10.0.2.15
```
Still the same error.
Any help would be appreciated.<issue_comment>username_1: I had the same issue. I was behind a corporate proxy, and adding the `minikube ip` to the `no_proxy` env variable on the host machine solved the issue.
```
export no_proxy=$no_proxy,$(minikube ip)
```
Upvotes: 2 <issue_comment>username_2: I also had the same issue; for me it was happening because the minikube VM was short of allocated memory.
Increasing minikube's RAM or deleting some existing deployments should resolve this issue.
To increase RAM configured for Minikube VM, use below command:
```
minikube config set memory 4096
```
After this, you need to stop, delete, and start minikube using the commands below.
```
minikube stop
minikube delete
minikube start
```
Upvotes: 1 <issue_comment>username_3: For Windows users, change the virtualization driver. For me this worked:
```
minikube start --vm-driver=docker
```
Upvotes: 0
|
2018/03/16
| 449 | 1,450 |
<issue_start>username_0: I am trying to make an Angular HttpClient GET request but am having trouble doing anything with the data.
I am using an API which returns data in the form
```
(10)[{...}, {...}, {...}, ...]
[
0:{
row: Array(4)
0: "someid"
1: "somename"
2: "someaddress"
3: "somepostcode"
},
1:{
row: Array(4)
0: "someid"
1: "somename"
2: "someaddress"
3: "somepostcode"
},
2:{
row: Array(4)
0: "someid"
1: "somename"
2: "someaddress"
3: "somepostcode"
},
...
]
```
I just want to be able to create a list of objects with "someaddress" and "someid" fields from the GET data. I have looked at various tutorials but keep getting undefined elements. How could you get the data from this?<issue_comment>username_1: If you want to type your HTTP responses, use the generic type parameter:
```
interface MyInterface {
  row: string[];
}

return this.http.get<MyInterface[]>('URL');
```
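A hedged sketch of how the typed response could then be consumed (the URL, the `getIdAndAddress` name, and the row positions for id/address are assumptions based on the sample data; `map` is RxJS's pipeable operator):
```ts
import { HttpClient } from "@angular/common/http";
import { Observable } from "rxjs";
import { map } from "rxjs/operators";

interface MyInterface {
  row: string[];
}

// Call this from a service where `http: HttpClient` has been injected
function getIdAndAddress(http: HttpClient): Observable<{ id: string; address: string }[]> {
  return http.get<MyInterface[]>("URL").pipe(
    // row[0] is assumed to hold the id and row[2] the address
    map(rows => rows.map(r => ({ id: r.row[0], address: r.row[2] })))
  );
}
```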
Upvotes: 0 <issue_comment>username_2: You can do it like this:
```js
const data = [ // I have reformatted your returned data
{ row: ["someId", "somename", "someaddress"] },
{ row: ["someId", "somename", "someaddress"] },
{ row: ["someId", "somename", "someaddress"] },
{ row: ["someId", "somename", "someaddress"] }
];
const newData = data.map(item => ({ id: item.row[0], address: item.row[2] }));
console.log(newData);
```
Hope it helps! :)
Upvotes: 3 [selected_answer]
|
2018/03/16
| 456 | 1,647 |
<issue_start>username_0: I want to install Nginx, but port 80 has been taken up by Apache2. I stopped it with:
```
$ sudo kill -9 my-apache-pid
$ sudo service apache2 stop
$ sudo /etc/init.d/apache2 stop
[ ok ] Stopping apache2 (via systemctl): apache2.service.
```
and I can install Nginx. I use `sudo systemctl status nginx`.
It shows Nginx working well, and Apache2 seems inactive. But when I enter my IP address in the browser, it still shows the Apache2 hello page. Why?<issue_comment>username_1: I had the same problem. To my surprise, clearing the browser's history and cookies worked.
Upvotes: 2 <issue_comment>username_2: Apache and Nginx home pages are located in this directory:
```
/var/www/html
```
But there is a small point: whichever of these two apps was installed earlier takes the `index.html` file, and when you enter `localhost` in the browser, that file is what actually opens.
As a result, all you have to do is go into this directory and see what the Nginx home file is named. Do this in your terminal:
```
ls -l /var/www/html
```
which shows `index.nginx-debian.html` as the name of the Nginx HTML file. So if you open this address in your browser:
```
localhost/index.nginx-debian.html
```
you can see the home page of Nginx.
**All you need to do to show the Nginx home page when browsing to `localhost` is to swap the names of those two files.**
Upvotes: 1 <issue_comment>username_3: Even after you remove apache2 completely, you will still have its "default site" files sitting in /var/www/.
Run the following command and refresh the page:
```
mv /var/www/html/index.html /var/www/html/index.html_bkp
```
Upvotes: 0
|
2018/03/16
| 339 | 1,370 |
<issue_start>username_0: I want to store the URL of the user's referring website after successful registration. In this case I can't use `request.referer` because the user can visit a few pages on my website before registering, but I need the previous website's URL, for example `http://google.com` or `http://facebook.com/somepage_id` or whatever. I know that Google Analytics or Intercom can collect this data, but I want something simple, preferably without external APIs or libraries if possible.<issue_comment>username_1: When a user reaches your site, create a `UserSession` which stores `request.referer` and also has a `user_id`. During registration, bind the `UserSession` to the created `User` by filling in `user_id`. This way you get the site the user came from, plus some additional info from the `UserSession` (user agent, date, visited pages, etc.).
Upvotes: 1 <issue_comment>username_2: There is a Ruby gem called '**ahoy**' which can be used for this.
When someone visits your website, Ahoy creates a visit with lots of useful information:
* traffic source - referrer, referring domain, landing page, search keyword
* location - country, region, and city
* technology - browser, OS, and device type
* utm parameters - source, medium, term, content, campaign
Please find the link for more information,
[Ahoy](https://github.com/ankane/ahoy)
Upvotes: 3 [selected_answer]
|
2018/03/16
| 350 | 1,248 |
<issue_start>username_0: My code is below :
```
int main(){
    string s = "abcd";
    int length_of_string = s.size();
    cout << length_of_string << endl; // prints 4
    s[4] = 'e';  // add 'e' at index 4
    s[5] = 'f';  // add 'f' at index 5
    cout << s.size() << endl; // still prints 4
}
```
As far as I know, every string is terminated with a NULL character. In my code I declare a string with a length of 4. Then I print `length_of_string`, which gives a value of 4. Then I modify it and add two characters, 'e' at index 4 and 'f' at index 5. Now my string has 6 characters. But when I read its length again, it shows me that the length is 4, even though my string's length is 6.
How does the s.size() function work in this case? Doesn't it count until the NULL character?<issue_comment>username_1: The behaviour of your program is *undefined*.
The length of a `std::string` is returned by `size()`.
Although you are allowed to use `[]` to modify characters in the string prior to the index `size()`, you are not allowed to modify characters *on or after* that.
Reference: <http://en.cppreference.com/w/cpp/string/basic_string/operator_at>
Upvotes: 2 <issue_comment>username_2: If you need to push a character onto the end of the string, you should use the `std::string::push_back` function:
```
int main(){
    string s = "abcd";
    s.push_back('e'); // append rather than writing past the end
    s.push_back('f');
    cout << s.size() << endl; // now prints 6
}
```
Upvotes: 3 [selected_answer]
|
2018/03/16
| 268 | 993 |
<issue_start>username_0: We are using Magento 1.9 for our application. Here is my sample code
```
$customer_collection= Mage::getModel("customer/customer")->load($data['customer_id']);
foreach ($data['data'] as $key => $customer) {
$customer_collection->setData($customer['attribute_name'] , $customer['attribute_value']);
}
$customer_collection->save(); //finally saving the data
```
Above code is working for all the fields except date field. Issue is when we send multiple data including date fields, other fields are getting updated but date field is not getting updated. Can anyone help me to solve this issue?<issue_comment>username_1: For date field update try to use
```
$object->setData($attributeCode, Varien_Date::now());
```
Upvotes: 0 <issue_comment>username_2: As [@mladen-ilić](https://stackoverflow.com/users/5714648/mladen-ili%C4%87) suggested,
I did **Flush Cache Storage** and tried again to post the data. It works like a charm.
Upvotes: 2 [selected_answer]
|
2018/03/16
| 455 | 1,371 |
<issue_start>username_0: **My Model:**
```
{
"name":{type:String,required:true},
"category":{type:mongoose.Schema.Types.ObjectId,ref:"Category"}
}
```
I have a document created using this model, and the document looks like:
```
{
"_id":ObjectId("5dsfkjh2r74dsjdhf3r4f"),
"name":"demo 1",
"category":ObjectId("5ae9dlkj32nds6n37cj23")
}
```
If I try to change the category field, e.g.:
* `document.category = ''`
* `document.category = null`
* `document.category = undefined`
I'm getting the following error:
>
> Cast toObjectId failed ...
>
>
>
I need to unset the `"category"` field to `null` or `empty`, or even delete it. **How do I do that?**<issue_comment>username_1: `ObjectId` should be a 24-character hex string, which I guess your value is not. That's why Mongo cannot convert it into a valid `ObjectId`.
One solution would be to generate a valid 24-character `ObjectId` for the category field.
Another would be to declare `category` as a simple `String` instead of an `ObjectId`.
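For the first option, a minimal hedged sketch (assuming Mongoose, whose `Types.ObjectId` helper generates a fresh valid id; `doc` is a hypothetical loaded document):
```ts
import mongoose from "mongoose";

// `doc` is a hypothetical document loaded from the model in the question
async function assignFreshCategory(doc: mongoose.Document & { category: unknown }) {
  // Types.ObjectId() generates a brand-new, valid 24-hex-character id
  doc.category = new mongoose.Types.ObjectId();
  await doc.save();
}
```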
Upvotes: 0 <issue_comment>username_2: Try unsetting the field:
```
document.update({_id: "5dsfkjh2r74dsjdhf3r4f"}, {$unset: {category: 1 }});
```
or
```
document.update({_id: ObjectId("5dsfkjh2r74dsjdhf3r4f")}, {$unset: {category: 1 }});
```
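As a hedged aside, on newer Mongoose versions (5+), where `update()` is deprecated in favour of `updateOne()`, the same unset could look like this:
```ts
import mongoose from "mongoose";

// `MyModel` is a hypothetical model compiled from the schema in the question
async function clearCategory(MyModel: mongoose.Model<any>, id: string) {
  // $unset removes the field entirely, avoiding the ObjectId cast error
  await MyModel.updateOne({ _id: id }, { $unset: { category: 1 } });
}
```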
Upvotes: 4 [selected_answer]<issue_comment>username_3: Use the line below:
`document.category = new MongoDB\BSON\ObjectID('');`
Upvotes: 0
|
2018/03/16
| 485 | 1,328 |
<issue_start>username_0: ```
replicatee :: [a] -> Int -> [a]
replicatee [] _ = []
replicatee xs 0 = []
replicatee (x:xs) n = x:replicatee (x:xs) (n-1): replicatee xs n
```
So this is my code for replicating each element in a list n times; the compiler keeps showing an error:
```
Couldn't match type 'a' with '[a]'
```
I'm seriously confused; please help out.
Edit: what I want my function to do is this:
```
replicatee [1,2,3,4] 2
[1,1,2,2,3,3,4,4]
```
<issue_comment>username_1: I might have misunderstood your intention, but maybe you meant something like this:
```
replicatee :: a -> Int -> [a]
replicatee _ 0 = []
replicatee x n = x:replicatee x (n-1)
```
Upvotes: 1 <issue_comment>username_2: ```
replicatee :: [a] -> Int -> [a]
replicatee [] _ = []
replicatee xs 0 = []
replicatee (x:xs) n = x:replicatee (x:xs) (n-1): replicatee xs n
```
The problem is that `replicatee` returns a value of type `[a]`, but you try to add that to *another* list of type `[a]` using `(:) :: a -> [a] -> [a]`. From a type-checking perspective, you need to use `(++)`, not `(:)`:
```
replicatee xs'@(x:xs) n = x : (replicatee xs' (n-1) ++ replicatee xs n)
```
Whether it *does* what you intended is another matter. Based on your description, [username_1 provides the right answer](https://stackoverflow.com/a/49320143/1126841).
Upvotes: 0
|
2018/03/16
| 1,169 | 3,375 |
<issue_start>username_0: I am trying to develop toggle buttons using plain CSS. My toggle button should look like the image below.
[](https://i.stack.imgur.com/J4gt1.png)
Here is the snippet of the code that I created.
```css
.btn {
display: inline-block;
margin-bottom: 0;
text-align: center;
cursor: pointer;
background-image: none;
border: 1px solid transparent;
white-space: nowrap;
padding: 3px 16px;
font-family: ABBvoice;
font-size: 13px;
font-weight: 500;
border-radius: 0;
height: 30px;
padding-bottom: 7px;
}
.btn-default-toggle-ghost,
.btn-default-toggle-ghost:focus {
background: transparent;
border: 1px solid rgba(160, 160, 160, 0.6);
color: #464646;
outline: none;
}
```
```html
Option 1
Option 2
```
The above code displays the toggle button as below.
[](https://i.stack.imgur.com/eaXeZ.png)
Can someone help me correct the CSS in the above code? Any help would be appreciated. Thank you.<issue_comment>username_1: You can use the `for` attribute on `label`. Notice that I placed each `label` right after its `input[type=radio]`:
```css
.btn {
display: inline-block;
margin-bottom: 0;
text-align: center;
cursor: pointer;
background-image: none;
border: 1px solid transparent;
white-space: nowrap;
padding: 3px 16px;
font-family: ABBvoice;
font-size: 13px;
font-weight: 500;
border-radius: 0;
height: 30px;
padding-bottom: 7px;
}
.btn-default-toggle-ghost, .btn-default-toggle-ghost:focus {
background: transparent;
border: 1px solid rgba(160,160,160, 0.6);
color: #464646;
outline: none;
}
input[type=radio]{
/* comment this out to check if radio input is checked */
display: none;
}
input[type=radio]:checked+label{
background-color: blue;
color: white;
}
```
```html
<input type="radio" name="options" id="option1" checked>
<label class="btn btn-default-toggle-ghost" for="option1">Option 1</label>
<input type="radio" name="options" id="option2">
<label class="btn btn-default-toggle-ghost" for="option2">Option 2</label>
```
Upvotes: 1 <issue_comment>username_2: This will work for you.
Inputs can't easily be styled, so it's better to hide them and style their `label`s as per your need.
```css
.btn {
display: inline-block;
margin-bottom: 0;
text-align: center;
cursor: pointer;
background-image: none;
border: 1px solid transparent;
white-space: nowrap;
padding: 3px 16px;
font-family: ABBvoice;
font-size: 13px;
font-weight: 500;
border-radius: 0;
height: 30px;
padding-bottom: 7px;
}
.btn-default-toggle-ghost,
.btn-default-toggle-ghost:focus {
background: transparent;
border: 1px solid rgba(160, 160, 160, 0.6);
color: #464646;
outline: none;
text-align: center;
font-size: 16px;
line-height: 30px;
position: relative;
float: left;
}
.btn-group [type="radio"] {
display: none;
}
[type="radio"]:checked+.btn-default-toggle-ghost {
background: #DEDEDE;
}
[type="radio"]:checked+.btn-default-toggle-ghost:after {
content: '';
position: absolute;
top: 0px;
height: 3px;
background: #0093F6;
left: 0px;
right: 0px;
}
.btn-default-toggle-ghost+[type="radio"]+.btn-default-toggle-ghost{
border-left:0px;/*for removing the extra border between the buttons*/
}
```
```html
<div class="btn-group">
  <input type="radio" name="options" id="opt1" checked><label class="btn btn-default-toggle-ghost" for="opt1">Option 1</label>
  <input type="radio" name="options" id="opt2"><label class="btn btn-default-toggle-ghost" for="opt2">Option 2</label>
</div>
```
I hope this works fine for you.
Upvotes: 3 [selected_answer]
|
2018/03/16
| 204 | 773 |
<issue_start>username_0: SQL Server: I am looking for a way to force, at the database level, a date column to be the last day of a month (note: **not** the last day of the **current** month). Is there a way to alter the table to implement this constraint?
Thank you<issue_comment>username_1: You could implement a check constraint via the `eomonth()` function to validate that the date is an end-of-month date (using `date_column` below as a stand-in for your column):
```
ALTER TABLE t
ADD CONSTRAINT CHK_Date CHECK(date_column = eomonth(date_column))
```
Upvotes: 4 [selected_answer]<issue_comment>username_2: Alternatively, you can do any of the following:
* Do your validation in a trigger (after insert and update).
* Create another table with all possible end-of-month dates and use a foreign key for that column.
Yogesh's answer is the correct one.
Upvotes: 1
|
2018/03/16
| 1,750 | 4,132 |
<issue_start>username_0: I'm stuck on a simple issue.
Through `urllib` I get a JSON app list which looks like this:
```
"completedapps" : [ {
"starttime" : 1520863179923,
"id" : "app-20180312145939-0183",
"name" : "IE_Traitement_3",
"cores" : 1,
"user" : "root",
"memoryperslave" : 1024,
"submitdate" : "Mon Mar 12 14:59:39 CET 2018",
"state" : "FINISHED",
"duration" : 212967
}, {
"starttime" : 1520863398147,
"id" : "app-20180312150318-0186",
"name" : "IE_Traitement_3",
"cores" : 1,
"user" : "root",
"memoryperslave" : 1024,
"submitdate" : "Mon Mar 12 15:03:18 CET 2018",
"state" : "FINISHED",
"duration" : 6321
}, {
"starttime" : 1520863387941,
"id" : "app-20180312150307-0185",
"name" : "IE_Traitement_0A",
"cores" : 1,
"user" : "root",
"memoryperslave" : 1024,
"submitdate" : "Mon Mar 12 15:03:07 CET 2018",
"state" : "FINISHED",
"duration" : 149536
}, { ... }]
```
I would like to get the most recent element for the app named "IE\_Traitement\_OA", so I began by filtering my JSON like this:
```
[app for app in parsedjson['completedapps'] if app['name'] == "IE_Traitement_OA"]
```
But I'm stuck now; I have no idea how to get the most recent "app". I think I have to use the `starttime` or `submitdate` field, but I don't know how to deal with that. Could you help me?<issue_comment>username_1: If you use `starttime`, you can use the `max` function like this:
```
data = [{
"starttime" : 1520863398147,
"id" : "app-20180312150318-0186",
"name" : "IE_Traitement_3",
"cores" : 1,
"user" : "root",
"memoryperslave" : 1024,
"submitdate" : "Mon Mar 12 15:03:18 CET 2018",
"state" : "FINISHED",
"duration" : 6321
}, {
"starttime" : 1520863387941,
"id" : "app-20180312150307-0185",
"name" : "IE_Traitement_0A",
"cores" : 1,
"user" : "root",
"memoryperslave" : 1024,
"submitdate" : "Mon Mar 12 15:03:07 CET 2018",
"state" : "FINISHED",
"duration" : 149536
}]
most_recent = max(data,key=lambda e: e['starttime'])
print(most_recent)
```
Now, if you want to use `submitdate`, you need to convert it first.
At this link there are some examples of conversion: [Converting string into datetime](https://stackoverflow.com/questions/466345/converting-string-into-datetime)
Good luck!
Upvotes: 0 <issue_comment>username_2: You can filter using the following:
```
a = list(filter(lambda x: x['name'] == 'IE_Traitement_0A', data['completedapps']))
```
`a` will contain a list of all dicts that match your filter, and then you can sort the list for the latest one, using whatever key you want to sort by:
```
sorted_a = sorted(a, key=lambda k: k['starttime'])
```
If you want only one, then select the last element of `sorted_a` (the most recent), assuming it's not empty.
EDIT: use `max` instead of sorting, thanks for the tip @VPfB:
```
max_a = max(a, key=lambda k: k['starttime'])
```
Upvotes: 3 [selected_answer]<issue_comment>username_3: ```
req_json = """{"completedapps" : [ {
"starttime" : 1520863179923,
"id" : "app-20180312145939-0183",
"name" : "IE_Traitement_3",
"cores" : 1,
"user" : "root",
"memoryperslave" : 1024,
"submitdate" : "Mon Mar 12 14:59:39 CET 2018",
"state" : "FINISHED",
"duration" : 212967
}, {
"starttime" : 1520863398147,
"id" : "app-20180312150318-0186",
"name" : "IE_Traitement_3",
"cores" : 1,
"user" : "root",
"memoryperslave" : 1024,
"submitdate" : "Mon Mar 12 15:03:18 CET 2018",
"state" : "FINISHED",
"duration" : 6321
}, {
"starttime" : 1520863387941,
"id" : "app-20180312150307-0185",
"name" : "IE_Traitement_0A",
"cores" : 1,
"user" : "root",
"memoryperslave" : 1024,
"submitdate" : "Mon Mar 12 15:03:07 CET 2018",
"state" : "FINISHED",
"duration" : 149536
} ]}"""
import json
data = json.loads(req_json)
print(sorted(data['completedapps'], key=lambda x: x['starttime'])[-1]['id'])
out:
app-20180312150318-0186
```
Explanation: first get the list of dicts, sort them by timestamp, and take the last (most recent) one.
Upvotes: 0
|
2018/03/16
| 1,303 | 3,226 |
<issue_start>username_0: Now I'm doing:
```
sess := mongodb.DB("mybase").C("mycollection")
var users []struct {
Username string `bson:"username"`
}
err = sess.Find(nil).Select(bson.M{"username": 1, "_id": 0}).All(&users)
if err != nil {
fmt.Println(err)
}
var myUsers []string
for _, user := range users{
myUsers = append(myUsers, user.Username)
}
```
Is there a more effective way to get a slice of usernames from Find (or another search function) directly, without the struct and the range loop?
|
2018/03/16
| 556 | 2,159 |
<issue_start>username_0: ```
function mapStateToProps(state) {
let returnObject = {};
if (state && state.form) {
if (
state.form.someFormName &&
state.form.someFormName.values &&
state.form.someFormName.values.fieldNameX &&
state.form.someFormName.values.fieldNameX === "1"
) {
state.form.someFormName.values.fieldNameB =
state.form.someFormName.values.fieldNameA;
state.form.someFormName.values.fieldNameC =
state.form.someFormName.values.fieldNameD;
state.form.someFormName.values.fieldNameF =
state.form.someFormName.values.fieldNameE;
}
}
return returnObject;
}
```
This is working fine: on selecting the form field, I am just copying a redux-form value to another field. How can I do this more efficiently? Is this the correct way to copy one form field into another?<issue_comment>username_1: You can use `formValueSelector` to connect to your form value and then dispatch the `change` action creator to update any field with your value.
Here is an example with textboxes (you can update it to use a datePicker):
```
import {change, formValueSelector, Field} from 'redux-form';
let FormName = (props) => {
const {dispatch, handleSubmit, firstValue} = props;
  return (
    <form onSubmit={handleSubmit}>
      <label>First Value</label>
      <Field name="firstValue" component="input" type="text" />
      <label>Copy value</label>
      <input
        type="checkbox"
        onChange={(e) => {
          if (e.target.checked) {
            dispatch(change('formName', 'secondValue', firstValue));
          }
        }}
      />
      <label>Second Value</label>
      <Field name="secondValue" component="input" type="text" />
      <button type="submit">Submit</button>
    </form>
  );
};
const selector = formValueSelector('formName');
connect(
state => ({
firstValue: selector(state, 'firstValue'),
})
)(FormName);
```
Upvotes: 1 <issue_comment>username_2: Changing the state like this is not the preferred method (you are mutating the state, violating Redux's second principle). Use this instead:
If you are using a radio/checkbox, then on press of that radio/checkbox you can copy all the fields and dispatch a change action.
For example:
```
someMethod() {
let formValues = Object.assign({}, this.props.formValues);
this.props.dispatch(
change("SomeForm", "fieldWantToCopy", value)
);
}
```
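Here `this.props.formValues` is assumed to be wired up with redux-form's `getFormValues` selector; a hedged sketch (`SomeFormComponent` and its path are hypothetical names):
```ts
import { connect } from "react-redux";
import { getFormValues } from "redux-form";
import SomeFormComponent from "./SomeFormComponent"; // hypothetical component

// Makes the current values of "SomeForm" available as `this.props.formValues`
const mapStateToProps = (state: any) => ({
  formValues: getFormValues("SomeForm")(state),
});

export default connect(mapStateToProps)(SomeFormComponent);
```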
Upvotes: 1 [selected_answer]
|
2018/03/16
| 538 | 2,120 |
<issue_start>username_0: I am developing a one-page website and I would like to load in each section when the window scrolls to that specific section. I know you can lazy-load images, but I want to lazy-load an entire section. The only way I think it would be possible is to put my HTML code into jQuery and load it in when the scroll position is reached, but I would prefer not to do that.
This is a WordPress website and I am loading each page into the homepage using
```
<?php require_once(''); ?>
```
so my page is laid out something like this:
```
<?php get_header(); ?>
<?php require_once('section_one.php'); ?>
<?php require_once('section_two.php'); ?>
<?php get_footer(); ?>
```
So could I use PHP to only load these sections in when the scroll position is reached, or is there a better way with jQuery? Also, I want web crawlers etc. to still be able to see my whole page, so if jQuery is disabled I want the full page to show. Any guidance or tutorials on this would be very helpful, thanks.<issue_comment>username_1: Create a method in a controller that will render a sub-view based on the section number and return the HTML. Or, in your case, create a file that accepts a GET request with the section number and renders the output of the needed section file, as is done in most PHP frameworks (see below). That way you can make an AJAX request when the scroll position reaches the necessary value and insert the returned HTML into the page.
```
<?php
$section_number = $_GET['section'];
ob_start();
if (file_exists(__DIR__ . '/section_' . $section_number . '.php')) {
    include(__DIR__ . '/section_' . $section_number . '.php');
    $var = ob_get_contents();
    ob_end_clean();
    echo $var;
}
?>
```
[Render PHP file into string variable](https://stackoverflow.com/questions/34593130/render-php-file-into-string-variable)
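On the client side, a minimal hedged sketch of the AJAX part (the `section.php` endpoint name and the placeholder markup are assumptions; this uses IntersectionObserver instead of comparing raw scroll positions):
```ts
// Assumes placeholders like <div class="lazy-section" data-section="1"></div>
const observer = new IntersectionObserver(entries => {
  entries.forEach(async entry => {
    if (!entry.isIntersecting) return;
    observer.unobserve(entry.target); // load each section only once
    const el = entry.target as HTMLElement;
    // Fetch the rendered HTML from the PHP endpoint sketched above
    const res = await fetch(`section.php?section=${el.dataset.section}`);
    el.innerHTML = await res.text();
  });
});

document.querySelectorAll(".lazy-section").forEach(el => observer.observe(el));
```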
Upvotes: 3 [selected_answer]<issue_comment>username_2: May I suggest that you load the content and hide it with CSS instead, and then use a scroll-spy solution to display the content when the section enters the viewport? Why force someone to wait while the content loads?
Upvotes: 0
|
2018/03/16
| 473 | 1,286 |
<issue_start>username_0: Perl: how to sum values in a line.
My data looks like below.
```
"A",1.2,-1.5,4.2,1.4,
"B",2.6,-.50,-1.6,0.3,-1.3
```
Expected Output:
```
5.3
-0.5
```<issue_comment>username_1: This should be enough to get you started:
```
#!/usr/bin/perl
use strict;
use warnings;
use feature 'say';
use List::Util 'sum';
while (<DATA>) {
chomp(my @row = split /,/);
say sum @row[1 .. $#row];
}
__DATA__
"A",1.2,-1.5,4.2,1.4
"B",2.6,-.50,-1.6,0.3,-1.3
```
(I wouldn't usually give code to answer a question that shows absolutely no effort by the OP. But there's a danger that you'll follow the bad advice in at least one of the other answers.)
**Update:** In the case where you might have extra commas in your data (as in the sample given), you would need to filter out non-numbers before passing them to `sum()`. The easiest approach is probably to use `looks_like_number()` from the Scalar::Util module.
```
use Scalar::Util 'looks_like_number';
```
And then, in the `sum()` line...
```
say sum grep { looks_like_number($_) } @row[1 .. $#row];
```
Upvotes: 3 [selected_answer]<issue_comment>username_2: If you're interested in a geeky way of summing, look at this:
```
perl -lpe "()=m{,(?{$_+=$'})}g" in.txt
```
---
```
5.3
-0.5
```
Upvotes: -1
|
2018/03/16
| 560 | 1,920 |
<issue_start>username_0: I am trying to make a cell that will have two UILabels on the same line.
Like this:
```
User Name: blablabla
```
Where `User Name` is the first UILabel and `blablabla` is the second UILabel.
I want the first UILabel to wrap its content and the second one to have its content extend to the superview's trailing edge.
I tried to look for an answer to my question around Stack Overflow, but I could not find one. Does someone know how I can achieve that?
<issue_comment>username_1: You should try the following solution.
For the first label:
1. Add top, left, bottom, and width constraints.
2. Set the number of lines to 0 and `Autoshrink` to "Minimum font size", as shown below. You should set your required minimum font size.
[](https://i.stack.imgur.com/d5Gpv.png)
For the second label:
1. Add top, left, bottom, and right constraints.
2. Set the number of lines to 0 and `Autoshrink` to "Minimum font size", as shown below. You should set your required minimum font size.
These changes will make the first label's font autoshrink down to your minimum font size, and will also make the second label extend its content.
Upvotes: 0 <issue_comment>username_2: You can do this by:
`yourLabel.lineBreakMode = NSLineBreakMode.byWordWrapping`
Upvotes: 1 <issue_comment>username_3: Try changing the content compression resistance priority.
Look at these snapshots:
Labels with the default `content compression resistance priority`:
[](https://i.stack.imgur.com/taqIH.png)
I changed the `content compression resistance priority` for the label `blablabla blablabla` from `750` to `749`.
Result is:
[](https://i.stack.imgur.com/9B0bZ.png)
Upvotes: 3 [selected_answer]
|
2018/03/16
| 1,148 | 3,200 |
<issue_start>username_0: Using the following code, I am trying to filter data on the date of the event (`evSpeeldatum`) and the discount type (`tiPrijstype`).
```
SELECT
*
FROM
tickets
JOIN
evenementen ON tickets.fk_tiEvenementID = evenementen.idEvenement
WHERE
evenementen.evSpeeldatum >= '2018-01-24'
AND tickets.tiPrijstype = 2
OR 3
OR 4
OR 5
OR 6
OR 66
OR 67;
```
However, when I run this code, the outcome is filtered neither on the date of the event nor on the discount types (2, 3, 4, 5, 6, 66, 67).<issue_comment>username_1: Here's an example of a valid query:
```
SELECT *
FROM tickets t
JOIN evenementen e
ON e.idEvenement = t.fk_tiEvenementID
WHERE e.evSpeeldatum >= '2018-01-24'
AND (t.tiPrijstype = 2 OR t.tiPrijstype = 3);
```
...which can also be rewritten as...
```
SELECT *
FROM tickets t
JOIN evenementen e
ON e.idEvenement = t.fk_tiEvenementID
WHERE e.evSpeeldatum >= '2018-01-24'
AND t.tiPrijstype IN(2,3);
```
Upvotes: 0 <issue_comment>username_2: If you have records which will fall for this filter and if date is being saved in this format only, then following query will work,
```
SELECT * FROM tickets JOIN evenementen ON tickets.fk_tiEvenementID = evenementen.idEvenement WHERE
evenementen.evSpeeldatum >= '2018-01-24'
AND tickets.tiPrijstype IN (2,3,4,5,6,66,67);
```
Upvotes: 0 <issue_comment>username_3: use `IN` like
```
SELECT
*
FROM
tickets
JOIN
evenementen ON tickets.fk_tiEvenementID = evenementen.idEvenement
WHERE
evenementen.evSpeeldatum >= '2018-01-24'
AND tickets.tiPrijstype IN(2,3,4,5,6,66,67);
```
Upvotes: 3 [selected_answer]<issue_comment>username_4: The `WHERE` clause is supposed to look at boolean expressions (comparisions mainly). These you can combine with `AND` and `OR`. For istance:
```
WHERE tickets.tiPrijstype = 2 OR tickets.tiPrijstype = 3
```
You, however, have `OR 3`. MySQL expects a boolean expression, so it takes the 3 to represent a boolean value and silently converts it to `true` (it converts 0 to `false` and all other values to `true`). So your
```
WHERE tickets.tiPrijstype = 2 OR 3
```
converts to
```
WHERE tickets.tiPrijstype = 2 OR true
```
which is true for every record.
Also be aware of operator precedence when mixing `AND` and `OR`. `AND` has precedence over `OR`. So
```
WHERE evenementen.evSpeeldatum >= '2018-01-24'
AND tickets.tiPrijstype = 2
OR tickets.tiPrijstype = 3
```
means
```
WHERE (evenementen.evSpeeldatum >= '2018-01-24' AND tickets.tiPrijstype = 2)
OR tickets.tiPrijstype = 3
```
which again is not what you want. Use parentheses to get this straight:
```
WHERE evenementen.evSpeeldatum >= '2018-01-24'
AND (tickets.tiPrijstype = 2 OR tickets.tiPrijstype = 3)
```
As has been shown, when checking for a value in a list, you'd use `IN` anyway, so the mentioned problem doesn't arise:
```
WHERE evenementen.evSpeeldatum >= '2018-01-24'
AND tickets.tiPrijstype IN (2,3)
```
The whole query:
```
SELECT *
FROM tickets t
JOIN evenementen e ON t.fk_tiEvenementID = e.idEvenement
WHERE e.evSpeeldatum >= date '2018-01-24'
AND t.tiPrijstype IN (2,3,4,5,6,66,67);
```
Upvotes: 0
|
2018/03/16
| 291 | 1,069 |
<issue_start>username_0: In my project I have to change the endianness of data types like int, float, short, etc. I thought the best way to do this is to access the elements of the struct and then change their endianness (if they are bigger than 1 byte). The struct is very long, so it is necessary to do this automatically.
Example struct:
```
struct
{
int a;
short b;
char c;
int d;
int e;
float f;
char g;
int h
}
```<issue_comment>username_1: You can't iterate class/struct members dynamically without reflection. You should either try a different approach, or consider using reflection.
---
Edit:
You could try aligning your struct data to 4-byte boundaries with:
```
[StructLayout(LayoutKind.Sequential, Pack = 4)]
struct ExampleStruct{ ... }
```
Then you would access your struct data through an `unsafe` raw pointer and ignore preceding 0-value bytes for each 4-byte chunk.
Upvotes: 2 <issue_comment>username_2: Set your .cs file to be copied to the output directory (Debug). Now you can do anything with your text! Parse it!
Cheers!
Upvotes: 1
|
2018/03/16
| 1,173 | 3,318 |
<issue_start>username_0: I have this error when opening my netcdf file.
The code was working before.
**How do I fix this ?**
>
> Traceback (most recent call last):
>
>
> File "", line 1, in
> ...
>
>
> File "file.py", line 71, in gather\_vgt
> return xr.open\_dataset(filename)
>
>
> File "/.../lib/python3.6/site-packages/xarray/backends/api.py", line
> 286, in open\_dataset
> autoclose=autoclose)
>
>
> File "/.../lib/python3.6/site-packages/xarray/backends/netCDF4\_.py",
> line 275, in open
> ds = opener()
>
>
> File "/.../lib/python3.6/site-packages/xarray/backends/netCDF4\_.py",
> line 199, in \_open\_netcdf4\_group
> ds = nc4.Dataset(filename, mode=mode, \*\*kwargs)
>
>
> File "netCDF4/\_netCDF4.pyx", line 2015, in
> netCDF4.\_netCDF4.Dataset.**init**
>
>
> File "netCDF4/\_netCDF4.pyx", line 1636, in
> netCDF4.\_netCDF4.\_ensure\_nc\_success
>
>
> OSError: [Errno -101] NetCDF: HDF error: b'file.nc'
>
>
>
When I try to open the same netCDF file with h5py, I get this error:
>
> OSError: Unable to open file (file locking disabled on this file
> system (use HDF5\_USE\_FILE\_LOCKING environment variable to override),
> errno = 38, error message = '...')
>
>
><issue_comment>username_1: You must be in this situation :
* your HDF5 library has been updated (1.10.1) (netcdf uses HDF5 under the hood)
* your file system does not support the file locking that the HDF5 library uses.
In order to read your HDF5 or netCDF files, you need to set this [environment variable](https://en.wikipedia.org/wiki/Environment_variable):
```
HDF5_USE_FILE_LOCKING=FALSE
```
---
For references, this was introduced in [HDF5 version 1.10.1](https://support.hdfgroup.org/ftp/HDF5/releases/hdf5-1.10/hdf5-1.10.1/src/hdf5-1.10.1-RELEASE.txt),
>
> Added a mechanism for disabling the SWMR file locking scheme.
>
>
> The file locking calls used in HDF5 1.10.0 (including patch1)
>
> will fail when the underlying file system does not support file
>
> locking or where locks have been disabled. To disable all file
>
> locking operations, an environment variable named
>
> HDF5\_USE\_FILE\_LOCKING can be set to the five-character string
>
> 'FALSE'. This does not fundamentally change HDF5 library
>
> operation (aside from initial file open/create, SWMR is lock-free),
>
> but users will have to be more careful about opening files
>
> to avoid problematic access patterns (i.e.: multiple writers) >that the file locking was designed to prevent.
>
>
> Additionally, the error message that is emitted when file lock
>
> operations set errno to ENOSYS (typical when file locking has been
>
> disabled) has been updated to describe the problem and potential
>
> resolution better.
>
>
> (DER, 2016/10/26, HDFFV-9918)
>
>
>
Upvotes: 4 <issue_comment>username_2: In my case, the solution as suggested by @username_1 did not work. I found another solution, which suggested that the order in which `h5py` and `netCDF4` are imported matters (see [here](https://www.bountysource.com/issues/44668444-oserror-netcdf-hdf-error-if-netcdf4-and-h5py-installed-together)).
And, indeed, the following works for me:
```
from netCDF4 import Dataset
import h5py
```
Switching the order results in the error as described by OP.
Upvotes: 2
|
2018/03/16
| 619 | 2,071 |
<issue_start>username_0: I need the div border to be responsive. However, as you can see, `.buttonsDiv` needs to be at the bottom, and the wrapper border needs to stretch underneath `.buttonsDiv`. But when I use this code, the buttons are at the bottom but the border stays at the top. I can't use a margin because the content div contains elements that are shown/hidden, and the page needs to be fixed, i.e. scrolling disabled.
**html**
```css
.wrapper {
border: 1px solid black;
}
.buttonsDiv {
position: fixed;
bottom: 10px;
}
```
```html
Content
Butons
```<issue_comment>username_1: It's hard to understand what you are after. Do you mean something like this?
```css
html, body{
margin: 0;
padding: 0;
}
.wrapper{
height: 100vh;
width: 100vw;
padding: 10px;
box-sizing: border-box;
}
.inner_wrap{
position:relative;
width: 100%;
height: 100%;
border: 1px solid black;
box-sizing: border-box;
}
.buttonsDiv{
position: absolute;
bottom: 10px;
}
```
```html
Content
Butons
```
Upvotes: 2 <issue_comment>username_2: To add to the previous answer:
```css
.wrapper {
border: 1px solid black;
height: 100vh;
}
.buttonsDiv {
position: absolute;
bottom: 1px;
}
```
```html
Content
Butons
```
The wrapper doesn't need position: relative; position: static will do fine.
With the absolute positioning of the button div, you place the element relative to its **parent element**. Therefore, if we set `.buttonsDiv` to bottom: 1px, it will stick to the bottom of that element.
Upvotes: 2 <issue_comment>username_3: Put **position: absolute;** on the parent and define top, bottom, left, and right as 0.
*PS: This solution will not add the scroll bar which appears if you use height: 100vh.*
```css
.wrapper {
border: 1px solid black;
position: absolute;
bottom: 0;
top: 0;
right: 0;
left: 0
}
.buttonsDiv {
position: fixed;
bottom: 10px;
}
```
```html
Content
Butons
```
Upvotes: 3 [selected_answer]
|
2018/03/16
| 703 | 2,305 |
<issue_start>username_0: I am using the following code in a long script many times:
```
this.click("");
```
But there is one page that doesn't work, and I don't understand why.
The HTML is:
```
<button id="2" class="add-subscription button middle">Choose</button>
```
So I am using:
```
casper.waitUntilVisible('#products_screen',
function success() {
casper.test.pass('Product List Show');
this.echo(this.getHTML('.actions-primary'));
//this.click("button#2.add-subscription.button.middle");
this.click("#2");
},
function fail() {
casper.test.fail('List of Products not showing');
},
50000);
```
I tried all possible selectors with no luck.
Also, if I try with Resurrectio in Chrome, it doesn't record the click.
Any workaround is welcome.
**SOLUTION UPDATE:**
```
this.click('[id="2"]');
```
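The underlying reason, as a hedged aside: in a CSS selector an ID cannot start with an unescaped digit, so `#2` is an invalid selector, while an attribute selector places no restriction on the quoted value. You can check this in any browser console:
```ts
// Throws a SyntaxError: "#2" is not a valid selector
document.querySelector("#2");

// Works: attribute selectors accept any quoted value
document.querySelector('[id="2"]');
```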
|
2018/03/16
| 2,854 | 8,631 |
<issue_start>username_0: My `data` frame contains 10,000,000 rows! After the group-by, ~9,000,000 sub-frames remain to loop through.
The code is:
```
import pandas as pd

data = pd.read_csv('big.csv')
for id, new_df in data.groupby(level=0):  # look at each mini df and do some analysis
    pass  # some code for each of the small data frames
```
This is super inefficient, and the code has been running for 10+ hours now.
Is there a way to speed it up?
**Full Code:**
```
d = pd.DataFrame() # new df to populate
print 'Start of the loop'
for id, new_df in data.groupby(level=0):
c = [new_df.iloc[i:] for i in range(len(new_df.index))]
x = pd.concat(c, keys=new_df.index).reset_index(level=(2,3), drop=True).reset_index()
x = x.set_index(['level_0','level_1', x.groupby(['level_0','level_1']).cumcount()])
d = pd.concat([d, x])
```
**To get the data:**
```
data = pd.read_csv('https://raw.githubusercontent.com/skiler07/data/master/so_data.csv', index_col=0).set_index(['id','date'])
```
**Note:**
Most ids will only have 1 date, which indicates only 1 visit. For ids with more visits, I would like to structure them in a 3D format, e.g. store all of their visits in the 2nd dimension out of 3. The output is *(id, visits, features)*<issue_comment>username_1: Here is one way to speed that up. This adds the desired new rows in some code which processes the rows directly. This saves the overhead of constantly constructing small dataframes. Your sample of 100,000 rows runs in a couple of seconds on my machine, while your code with only 10,000 rows of your sample data takes > 100 seconds. This seems to represent a couple of orders of magnitude improvement.
### Code:
```
def make_3d(csv_filename):
def make_3d_lines(a_df):
a_df['depth'] = 0
depth = 0
prev = None
accum = []
for row in a_df.values.tolist():
row[0] = 0
key = row[1]
if key == prev:
depth += 1
accum.append(row)
else:
if depth == 0:
yield row
else:
depth = 0
to_emit = []
for i in range(len(accum)):
date = accum[i][2]
for j, r in enumerate(accum[i:]):
to_emit.append(list(r))
to_emit[-1][0] = j
to_emit[-1][2] = date
for r in to_emit[1:]:
yield r
accum = [row]
prev = key
df_data = pd.read_csv('big-data.csv')
df_data.columns = ['depth'] + list(df_data.columns)[1:]
new_df = pd.DataFrame(
make_3d_lines(df_data.sort_values('id date'.split())),
columns=df_data.columns
).astype(dtype=df_data.dtypes.to_dict())
return new_df.set_index('id date'.split())
```
### Test Code:
```
start_time = time.time()
df = make_3d('big-data.csv')
print(time.time() - start_time)
df = df.drop(columns=['feature%d' % i for i in range(3, 25)])
print(df[df['depth'] != 0].head(10))
```
### Results:
```
1.7390995025634766
depth feature0 feature1 feature2
id date
207555809644681 20180104 1 0.03125 0.038623 0.008130
247833985674646 20180106 1 0.03125 0.004378 0.004065
252945024181083 20180107 1 0.03125 0.062836 0.065041
20180107 2 0.00000 0.001870 0.008130
20180109 1 0.00000 0.001870 0.008130
329567241731951 20180117 1 0.00000 0.041952 0.004065
20180117 2 0.03125 0.003101 0.004065
20180117 3 0.00000 0.030780 0.004065
20180118 1 0.03125 0.003101 0.004065
20180118 2 0.00000 0.030780 0.004065
```
Upvotes: 3 [selected_answer]<issue_comment>username_2: I believe your approach for *feature engineering* could be done better, but I will stick to answering your question.
In Python, **iterating over a Dictionary is way faster than iterating over a DataFrame**
Here how I managed to process a huge pandas DataFrame (~100,000,000 rows):
```
# reset the Dataframe index to get level 0 back as a column in your dataset
df = data.reset_index() # the index will be (id, date)
# split the DataFrame based on id
# and store the splits as Dataframes in a dictionary using id as key
d = dict(tuple(df.groupby('id')))
# iterate over the Dictionary and process the values
for key, value in d.items():
pass # each value is a Dataframe
# concat the values and get the original (processed) Dataframe back
df2 = pd.concat(d.values(), ignore_index=True)
```
Upvotes: 2 <issue_comment>username_3: Modified @Stephen's code
```
def make_3d(dataset):
def make_3d_lines(a_df):
a_df['depth'] = 0 # sets all depth from (1 to n) to 0
depth = 1 # initiate from 1, so that the first loop is correct
prev = None
accum = [] # accumulates blocks of data belonging to given user
for row in a_df.values.tolist(): # for each row in our dataset
row[0] = 0 # NOT SURE
key = row[1] # this is the id of the row
if key == prev: # if this rows id matches previous row's id, append together
depth += 1
accum.append(row)
else: # else if this id is new, previous block is completed -> process it
if depth == 0: # previous id appeared only once -> get that row from accum
yield accum[0] # also remember that depth = 0
else: # process the block and emit each row
depth = 0
to_emit = [] # prepare to emit the list
for i in range(len(accum)): # for each unique day in the accumulated list
date = accum[i][2] # define date to be the first date it sees
for j, r in enumerate(accum[i:]):
to_emit.append(list(r))
to_emit[-1][0] = j # define the depth
to_emit[-1][2] = date # define the
for r in to_emit[0:]:
yield r
accum = [row]
prev = key
df_data = dataset.reset_index()
df_data.columns = ['depth'] + list(df_data.columns)[1:]
new_df = pd.DataFrame(
make_3d_lines(df_data.sort_values('id date'.split(), ascending=[True,False])),
columns=df_data.columns
).astype(dtype=df_data.dtypes.to_dict())
return new_df.set_index('id date'.split())
```
**Testing:**
```
t = pd.DataFrame(data={'id':[1,1,1,1,2,2,3,3,4,5], 'date':[20180311,20180310,20180210,20170505,20180312,20180311,20180312,20180311,20170501,20180304], 'feature':[10,20,45,1,14,15,20,20,13,11],'result':[1,1,0,0,0,0,1,0,1,1]})
t = t.reindex(columns=['id','date','feature','result'])
print t
id date feature result
0 1 20180311 10 1
1 1 20180310 20 1
2 1 20180210 45 0
3 1 20170505 1 0
4 2 20180312 14 0
5 2 20180311 15 0
6 3 20180312 20 1
7 3 20180311 20 0
8 4 20170501 13 1
9 5 20180304 11 1
```
**Output**
```
depth feature result
id date
1 20180311 0 10 1
20180311 1 20 1
20180311 2 45 0
20180311 3 1 0
20180310 0 20 1
20180310 1 45 0
20180310 2 1 0
20180210 0 45 0
20180210 1 1 0
20170505 0 1 0
2 20180312 0 14 0
20180312 1 15 0
20180311 0 15 0
3 20180312 0 20 1
20180312 1 20 0
20180311 0 20 0
4 20170501 0 13 1
```
Upvotes: 0
|
2018/03/16
| 1,052 | 3,476 |
<issue_start>username_0: I've an object which contains 100 keys, value pairs.
```
$scope.obj = {
key1: value1,
key2: value2,
.
.
.
key100: value100
}
```
I have 100 inputs
```
.
.
.
<button ng-click="submit()">
```
When I submit, the data will be sent to the server. Sometimes I'll change values and sometimes not. If I change a value in one input, I want that `key,value` pair from the object, so that I can make the server call with only that data instead of sending the entire object.<issue_comment>username_1: Have a copy of the variable and just before sending it to the server, compare the model object with the copied variable using a compare function like the following:
```
// At the time of initialization
$scope.obj = [ ... ];
var copiedObj = angular.copy($scope.obj);
// At the time of submit
var changed = {};
Object.keys(copiedObj).forEach(key => {
if(copiedObj[key] !== $scope.obj[key]) {
changed[key] = $scope.obj[key];
}
});
// Submit `changed` to the server
```
Assuming `copiedObj` is the copy of original `$scope.obj`, `changed` will contain the keys which are actually changed. So you can send `changed` to the server.
Upvotes: 0 <issue_comment>username_2: You can use [$watch](https://docs.angularjs.org/api/ng/type/$rootScope.Scope#$watch) to watch for specific changes in `$scope`.
```
$scope.$watch('obj.key1', function(newValue, oldValue) {
console.log('Key1 was updated from' + oldValue + ' to ' + newValue');
});
```
Upvotes: 0 <issue_comment>username_3: You can do this way have a ng-change bind to your HTML and whenever the ng-change happens for a particular Element write a code to push into an array and send that array to the server.
A sample Code
```
```
Angular Code
```
$scope.tempValues=[];
$scope.selectedValue = function(x) {
$scope.tempValues.push(x);
}
```
Upvotes: 0 <issue_comment>username_4: You can use `ng-change` to detect any changes, as well as a (key, value) syntax in `ng-repeat` to list your inputs. Here is a demo:
```js
var app = angular.module('myApp', []);
app.controller('myCtrl', function($scope) {
$scope.obj = {
"key1": "value1",
"key2": "value2",
"key3": "value3"
}
var toSend = [];
$scope.select = function(key) {
var s = {};
s[key] = $scope.obj[key]
toSend.push(s); // or just the value: $scope.obj[key]
}
$scope.submit = function() {
console.log(toSend)
$scope.sent = toSend;
}
});
```
```html
Send
{{sent | json}}
```
Upvotes: 0 <issue_comment>username_5: I would achieve this by using the default `ngForm` directives like `$dirty` and a "save state" object of your original input values. This is a easy solution to achieve what you want. `ngForm` itself does not provide the original values so you have to create a "save object" and compare them manually.
### View
```
```
### AngularJS application
```
var myApp = angular.module('myApp',[]);
myApp.controller('MyCtrl', function ($scope) {
$scope.obj = {
key1: 'test 1',
key2: 'test 2',
key3: 'test 3',
}
var originalValues = angular.copy($scope.obj);
$scope.submit = function () {
var paramsChanged = {};
angular.forEach($scope.obj, function (value, key) {
if ($scope.myForm[key].$dirty && originalValues[key] !== value) {
paramsChanged[key] = value;
}
});
console.log(paramsChanged);
}
});
```
**> [demo fiddle](http://jsfiddle.net/j4dv1akq/)**
Upvotes: 1
|
2018/03/16
| 661 | 2,236 |
<issue_start>username_0: I often have some kind of master-detail situation where i try to use a single model for both master and detail view.
The detail page is bound directly to an element in the list with `.bindElement(path)` when you select an item in the list. The path is available from the binding context. Everyone is happy:
```
//click handler for list item:
var context = this.getBindingContext();
oDetailPage.bindElement(context.getPath());
oApp.toDetail(oDetailPage);
```
The challenge is when the list page has an "add" button. I create a new object and put that into the model. But how do I find the path? i have no context:
```
//click handler for "add" button
var newStuff = {
propA: "foo",
propB: 13
};
oModel.setData(oModel.getData().concat(newStuff));
oDetailPage.bindElement(/* What??? */);
oApp.toDetail(oDetailPage);
```
I've searched for a `.findPath(newStuff)` method but no such thing exists<issue_comment>username_1: **Hi, unfortunatly I am unable to test this but I think this should work**
The binding path is pretty much pointer to a specific entry in a collection. The path value does depend on a couple of variables, such as how your model is structured.
For example, if the Model data looks like this :
```
{"results" : [
{propA: "foo",
propB: 13
},
{propA: "ber",
propB: 14
}]
}
```
And you concat
```
{propA: "newItem",
propB: 15
}
```
I believe you binding path would be
```
"/results/2"
```
You can also, find the most recent index with something like
```
this.getModel("yourModel").getObject("results").length;
```
Edit -
Unless I misunderstand your question, your new entry should be at the end of the model.
Upvotes: 3 [selected_answer]<issue_comment>username_2: here are 2 cases
```
//click handler for "add" button
var newStuff = {
propA: "foo",
propB: 13
};
// case 1: root of model data is an array
var oData = oModel.getProperty("/");
oData.push(newStuff);
oDetailPage.bindElement("/" + oData.length - 1);
// case 2: we appends to /results
var oData = oModel.getProperty("/results");
oData.push(newStuff);
oDetailPage.bindElement("/results/" + oData.length - 1);
```
Upvotes: 1
|
2018/03/16
| 990 | 4,375 |
<issue_start>username_0: I'm currently refactoring my code to include ViewModel with LiveData provided by android.arch library. I have a simple activity that sends request for a password change to server and acts according to HTTP response code.
For that purpose I have created class that extends ViewModel for data and a repository class to call server. My ViewModel class has one MutableLiveData field which I'm subscribing to from my activity using .observe(...) method. The issue is that code inside .observe(...) fires all the time after configuration changes (i.e. screen rotation) and I have no idea why.
Here is the code of ViewModel, Repository and Activity classes accordingly:
ChangePasswordViewModel
```
public class ChangePasswordViewModel extends ViewModel{
private MutableLiveData responseCode;
private PasswordChangeRepository passwordChangeRepository;
public ChangePasswordViewModel() {
responseCode = new MutableLiveData<>();
passwordChangeRepository = new PasswordChangeRepositoryImpl();
}
public MutableLiveData responseCodeLiveData() {
return responseCode;
}
public void sendChangePasswordRequest(String newPassword){
passwordChangeRepository.changePassword(newPassword, passChangeCallback());
}
// Callback that fires after server sends a response
private Callback passChangeCallback(){
...
responseCode.postValue(serverResponse)
...
}
```
PasswordChangeRepository
```
public class PasswordChangeRepositoryImpl {
public void changePassword(String newPassword, Callback callback){
//Sending new password to server and processing response in callback
ServerCalls.changePassword(newPassword, callback);
}
}
```
Activity
```
public class ChangePasswordActivity extends AppCompatActivity{
...
private void init(){
//Getting appropriate view model
passwordViewModel = ViewModelProviders.of(this).get(ChangePasswordViewModel.class);
// Starting to observe LiveData
passwordViewModel.getResponseCode().observe(this, responseCode -> {
Log.info("Server response is " + responseCode);
});
//Sending new password to server
buttonPassChange.setOnClickListener(view ->
passwordViewModel.sendChangePasswordRequest("newPass")
);
}
...
}
```
Problem is that after the first time I send a request to the server using sendChangePasswordRequest(...), the observe code in the activity
```
passwordViewModel.getResponseCode().observe(this, responseCode -> {
Log.info("Server response is " + responseCode);
});
```
fires every time after I rotate the screen. Why is that happening? Value of MutableLiveData responseCode hasn't been updated since the last server call, so why does .observe() fires if there were no changes to live data?<issue_comment>username_1: That is an intended behavior, as you can see in [documents](https://developer.android.com/reference/android/arch/lifecycle/LiveData.html#observe(android.arch.lifecycle.LifecycleOwner,%20android.arch.lifecycle.Observer%3CT%3E)):
>
> observe (LifecycleOwner owner,
> Observer observer) Adds the given observer to the observers list within the lifespan of the given owner. The events are
> dispatched on the main thread. If LiveData already has data set, it
> will be delivered to the observer.
>
>
>
If you want to observe the change in view state then you should create and observe a view state instead of a network request, google already provided an [example](https://developer.android.com/topic/libraries/architecture/guide.html) for cases like this.
Upvotes: 4 [selected_answer]<issue_comment>username_2: In addition to answer above, it's important to understand the scenarios in which using ViewModel & LiveData observers, to observe only once, this article explains them and shows a way to deal with it easily: [Working with LiveData and Events](https://proandroiddev.com/singleliveevent-to-help-you-work-with-livedata-and-events-5ac519989c70)
Upvotes: 2 <issue_comment>username_3: I have used MutableSharedFlow instead of MutableLiveData, and solved the same problem as yours.
You can try this:
```
private val responseCode = MutableSharedFlow()
...
fun passChangeCallback() {
viewModelScope.launch {
responseCode.emit(serverResponse)
    }
}
```
Because MutableSharedFlow doesn't replay already-emitted values by default.
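On the consuming side, the UI then receives each emission exactly once; a rough sketch, assuming the flow is exposed by the ViewModel:
```kotlin
// e.g. in a Fragment's onViewCreated
viewLifecycleOwner.lifecycleScope.launch {
    viewModel.responseCode.collect { code ->
        // runs once per emit(); not re-delivered after a configuration change
        handleResponse(code) // handleResponse is a placeholder
    }
}
```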
Upvotes: 2
|
2018/03/16
| 701 | 2,219 |
<issue_start>username_0: I have a list of dates as strings in the format 'dd/mm/yyyy hh:mm:ss tt' I'm trying to order them by closest to farthest like so:
```
09/12/2018 12:00:00 PM
10/12/2018 12:00:00 PM
11/12/2018 12:00:00 PM
```
My code seems to be ordering them from the 11th to the 09th which I don't want but can't seem to get right.
```
DateTime now = DateTime.Now;
var ordered = herds.HerdList.OrderBy(n => (now - DateTime.Parse(n.Date_Visit)).Duration());
```
The above code gives me:
```
11/12/2018 12:00:00 PM
10/12/2018 12:00:00 PM
09/12/2018 12:00:00 PM
```
How can I order it the other way around?
Thanks
|
2018/03/16
| 760 | 2,314 |
<issue_start>username_0: ```
#!/usr/bin/perl
print "content-type:text/html\n\n";
print "";
use CGI;
use DBI;
use CGI qw(:standard);
use CGI::Carp qw(fatalsToBrowser);
$db="New";
$user="root";
$password="<PASSWORD>";
$host="localhost";
$dbh=DBI->connect("DBI:mysql:database=$db:$host",$user,$password) ||
die "couldnt open database:$DBI :: errstr";
$sth=$dbh->prepare("select * from stud");
$rv=$sth->execute();
print "
";
print "| id |
| name |
| age |
";
while (@row = $sth->fetchrow_array()) {
$id=$row[0];
$name=$row[1];
$age=$row[2];
}
print "| $id |
| $name |
| $age |
";
print "
";
$rc=$sth->finish();
print "Database closed";
print "";
```
I am trying to connect to the MySQL database using Perl. I am on macOS and it says "Couldn't connect database".
The code I'm using is given above. Please help me out.<issue_comment>username_1: Using CGI is an unnecessary distraction here. I recommend that you try writing a non-CGI program that connects to the database first and, when you get that working, you can copy the same connection code into your CGI program.
In fact, even before that, you should ensure that you can use your connection details to connect to the MySQL server using the standard MySQL tools.
What happens if you open a command line window and run the following?:
```
mysql -h localhost -u root -p -D New
```
Almost certainly, this is a problem with connecting to the database and has nothing to do with Perl or CGI.
Upvotes: 2 <issue_comment>username_2: Look at the [DBI documentation](http://search.cpan.org/~timb/DBI-1.640/DBI.pm#connect)
>
>
> ```
> $dbh = DBI->connect($data_source, $username, $password, \%attr)
> or die $DBI::errstr;
>
> ```
>
>
Compare to your code:
>
>
> ```
> $dbh=DBI->connect("DBI:mysql:database=$db:$host",$user,$password)
> || die "couldnt open database:$DBI :: errstr";
>
> ```
>
>
You've put `$DBI::errstr` *inside* a string literal, and added spaces.
This means your error message reporting code is going to try to try report `$DBI` (followed by `:: errstr`) instead of the variable containing the error.
Fix that and you can find out what is actually wrong:
```
$dbh=DBI->connect("DBI:mysql:database=$db:$host",$user,$password)
|| die "couldnt open database:" . $DBI::errstr;
```
Upvotes: 2
|
2018/03/16
| 585 | 1,912 |
<issue_start>username_0: I stored "hello,world" in a MySQL database as an array and then tried to retrieve it using the query below
```
@php
$post_id = $post->id;
$result= DB::table('posts')
->select('tags')
->where('id', '=', $post_id)
->get();
echo($result);
@endphp
```
The echoed result is:
```
[{"tags":"hello,world"}]
```
I need to echo single values for above result
```
hello
world
```
|
2018/03/16
| 398 | 1,676 |
<issue_start>username_0: [](https://i.stack.imgur.com/hnLcV.png)
I have a one to many relationship User -> Weight as shown on the screenshot. I am able to extract sectioned data from an entity using NSFetchRequest and NSFetchedResultsController and then show it on a Table view with sections.
I am able to extract weight data via the User entity. But my question is, how do I extract that weight data such that I could display it on a table view with sections similar to how NSFetchedResultsController does?
Any help would be appreciated. Thanks in advance!<issue_comment>username_1: A `NSFetchedResultsController` can interact with a collectionView or tableView with its objectAtIndexPath and sections property. But you don't need to use it that way. You can create a `NSFetchedResultsController` on just the `User` objects with no sectionNameKeyPath. The number of sections is the the fetchedResultsController number of items. And the number of items in each section is equal to each items's number of weights. likewise when you get the delegated callbacks for changes convert the fetchedResultsController's indexPaths to sections.
Upvotes: 1 <issue_comment>username_2: Ok, so I figured out the best approach to this. Instead of getting the weight data by querying the User entity, I queried the Weight entity instead and used a predicate to return only the weight data of a specific user.
Something like this:
```
func requestFetch(ofUser selectedUser: User) -> NSFetchRequest {
    ...
    let userPredicate = NSPredicate(format: "user == %@", selectedUser)
    ...
    return fetchRequest
}
```
Upvotes: 1 [selected_answer]
|
2018/03/16
| 598 | 2,251 |
<issue_start>username_0: I have a docker-compose setup something like:
```
/
- sources/
- docker-compose.yml
- Dockerfile
- .dockerignore
- many more files
```
The Dockerfile contains instructions including a COPY command of the sources.
Because of all the different tools, including multiple docker setups, I'd like to organise it a bit, by either moving all files to a folder:
```
/
- sources/
- docker/
- many more files
```
or leaving just the docker-compose.yml file outside of this folder:
```
/
- sources/
- docker-compose.yml
- docker/
- many more files
```
I'd like to do this because:
* It cleans up the project folder
* I currently have multiple docker setups in the project folder, moving them to seperate folders allows for a more clear and/or precise setup (e.g. multiple dockerignore files)
Currently I am running into some issues which do make sense, such as:
```
COPY failed: Forbidden path outside the build context: ../sources/
```
Is it possible to achieve this setup? Thanks!<issue_comment>username_1: Inside the Dockerfile, you cannot access files that are outside the build context. In your case the build context is the directory containing the Dockerfile.
You can change the build context inside the composefile.
Below is an example where the composefile is at the root and `Dockerfile` is under `docker` folder:
```
version: '3'
services:
test:
build:
context: .
dockerfile: docker/Dockerfile
```
In this case, inside the `Dockerfile` the file paths should be set relative to the context.
```
COPY sources sources
```
**For dockerignore**:
As specified in the docs for .dockerignore file:
>
> Before the docker CLI sends the context to the docker daemon, it looks for a file named .dockerignore in the **root directory of the context**
>
>
>
Thus you need to add the dockerignore file to the root of the context.
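For example, an illustrative layout for the setup described in the question:
```
/
- docker-compose.yml
- .dockerignore   <- must sit at the root of the build context
- docker/
  - Dockerfile
- sources/
```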
Upvotes: 3 [selected_answer]<issue_comment>username_2: You can't use that within the Dockerfile; however, you should be able to make it work using a .env file and pulling it in from there.
<https://docs.docker.com/compose/env-file/>
You could try something like:
.env
====
`SOURCE_PATH=../sources/`
Dockerfile
==========
`COPY ${SOURCE_PATH}/myfile /some/destination`
Upvotes: 0
|
2018/03/16
| 464 | 1,600 |
<issue_start>username_0: So I'm trying to make a basic website and I need to display an image when I click on a button. The problem is that it displays the image, but when I click another button to display a different image, the previous image stays.
here is my code. I'm using Angular 4 and typescript.
component.ts
```
openPredictionImg() {
const myImg = new Image();
myImg.useMap = '../../assets/prediction.png';
const img = document.createElement('img');
document.body.appendChild(img);
img.setAttribute('src', myImg.useMap);
img.onload = (stop);}
openRebalancingImg() {
const myImg = new Image();
myImg.useMap = '../../assets/rebalancing.png';
const img = document.createElement('img');
document.body.appendChild(img);
img.setAttribute('src', myImg.useMap);
img.onload = (stop);}
openVisualizationImg() {
const myImg = new Image();
myImg.useMap = '../../assets/visualization.png';
const img = document.createElement('img');
document.body.appendChild(img);
img.setAttribute('src', myImg.useMap);
img.onload = (stop);}
```
component.html
```
Prediction
Rebalancing
Visualization
```<issue_comment>username_1: In Angular, you don't touch the DOM directly.
The simplest way I can think of :
```
Prediction
Rebalancing
Visualization
![]()
```
In your Typescript, remove all of your previous code and simply declare a variable
```
imageSource: string;
```
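A minimal sketch of how this fits together (assuming the template binds the single image via `[src]="imageSource"`):
```typescript
// component.ts
imageSource: string;

openPredictionImg() {
  // assigning a new value swaps the one bound <img>,
  // so the previous image does not stay on screen
  this.imageSource = '../../assets/prediction.png';
}

openRebalancingImg() {
  this.imageSource = '../../assets/rebalancing.png';
}

openVisualizationImg() {
  this.imageSource = '../../assets/visualization.png';
}
```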
Upvotes: 1 <issue_comment>username_2: Ok here is the solution
component.ts
```
text = '';
openPrediction() {
this.text = '...' }
```
component.html
```
Prediction
![]()
{{ text }}
```
Upvotes: 1 [selected_answer]
|
2018/03/16
| 1,473 | 6,043 |
<issue_start>username_0: ```
public static class ApplicationUtils
{
public static bool IsCurrentUserAManager()
{
var username = WindowsIdentity.GetCurrent().Name;
bool inAdmin;
if (username == "AdminUser") {
inAdmin = true;
} else {
inAdmin = false;
}
return inAdmin;
}
}
```
Above is some code that is used to test if the currently logged-in user is an Admin. I want to unit test this section by passing in a different username and testing whether the result is correct.
I have heard that dependency injection would be the best way to do this. But I have no idea how to dependency inject into a static class and a static method.
Can anyone help me fill out the TestMethod below in order to pass in a username and test the method?
(**Not using enterprise**)
```
[TestMethod]
public void IsCurrentUserAManagerTestIsAdmin()
{
}
```<issue_comment>username_1: When your code is difficult to test, changing the code is a viable option!
In this case, consider having the `IsCurrentUserAManager` receive the username as an input parameter (and rename it to `IsUserAManager` to reflect the change in behavior). It would look something like this:
```
public static class ApplicationUtils
{
public static bool IsUserAManager(string username)
{
bool inAdmin;
if (username == "AdminUser") {
inAdmin = true;
} else {
inAdmin = false;
}
return inAdmin;
}
}
```
This will allow you to send in different values for testing different scenarios. If however the entire class as it is cannot appear in the UT (due to environment constraints on initialization, for example), consider having just the business logic exported to a seperate non static class and write your UT for that.
Upvotes: 2 <issue_comment>username_2: ```
public static class ApplicationUtils
{
// this is not testable, because I dont use DI
public static bool IsCurrentUserAManager() => TestableMethod(WindowsIdentity.GetCurrent().Name);
// This is testable because it contains logics (if-else)
public static bool TestableMethod(string username) => username == "AdminUser";
}
[TestMethod]
public void IsCurrentUserAManagerTestIsAdmin()
{
Assert.IsTrue(ApplicationUtils.TestableMethod("AdminUser"));
Assert.IsFalse(ApplicationUtils.TestableMethod("adminuser"));
}
```
Upvotes: 1 <issue_comment>username_3: Refactor your class a little to take an identity as a parameter.
```
public static class ApplicationUtils
{
public static bool IsUserAManager(IIdentity identity)
{
if (identity == null)
throw new NullReferenceException("identity");
return identity.Name == "AdminUser";
}
}
```
And Your Test Class using [Moq](https://www.nuget.org/packages/moq/)
```
[TestMethod]
public void IsUserAManagerTestIsAdminReturnsFalse()
{
var mockedIdentity = new Moq.Mock<IIdentity>();
mockedIdentity.Setup(x => x.Name).Returns("notanadmin");
var result = ApplicationUtils.IsUserAManager(mockedIdentity.Object);
Assert.IsFalse(result);
}
[TestMethod]
public void IsUserAManagerTestIsAdminReturnsTrue()
{
var mockedIdentity = new Moq.Mock<IIdentity>();
mockedIdentity.Setup(x => x.Name).Returns("AdminUser");
var result = ApplicationUtils.IsUserAManager(mockedIdentity.Object);
Assert.IsTrue(result);
}
```
Upvotes: 4 [selected_answer]<issue_comment>username_4: One should try to avoid coupling code to static classes as they are difficult to test.
That said, with your current code as is, it can be refactored to allow certain separations of concerns and a more fluent API via extension methods.
```
public static class ApplicationUtils {
public static Func<IIdentity> userFactory = () => WindowsIdentity.GetCurrent();
public static IIdentity CurrentUser { get { return userFactory(); } }
public static bool IsManager(this IIdentity identity) {
return identity != null && string.Compare(identity.Name, "AdminUser", true) == 0;
}
public static bool IsAuthenticated(this IIdentity identity) {
return identity != null && identity.IsAuthenticated;
}
}
```
The following test is used as an example to demonstrate how the above is used.
*Moq was used as mocking framework*
```
[TestMethod]
public void IsManager_Should_Return_True_For_AdminUser() {
//Arrange
var name = "AdminUser";
var identity = Mock.Of<IIdentity>(_ => _.Name == name);
ApplicationUtils.userFactory = () => identity;
//Act
var actual = ApplicationUtils.CurrentUser.IsManager();
//Assert
Assert.IsTrue(actual);
}
```
That done I would now like to suggest you refactor your code to make it more SOLID.
Abstract the functionality of getting the current user out into a service provider.
```
public interface IIdentityProvider {
IIdentity CurrentUser { get; }
}
```
Pretty simple with an even simpler implementation.
```
public class DefaultIdentityProvider : IIdentityProvider {
public IIdentity CurrentUser {
get { return WindowsIdentity.GetCurrent(); }
}
}
```
If using DI you can now inject the provider into dependent classes that have need to access the current user.
This allows the code to more flexible and maintainable as mocks/stubs of the provider and user can be used for isolated unit tests. The extension methods remain the same as they have very simple concerns.
Here is a simple example of a test for the extension method from earlier.
```
[TestMethod]
public void IsManager_Should_Return_True_For_AdminUser() {
//Arrange
var name = "AdminUser";
var identity = Mock.Of<IIdentity>(_ => _.Name == name);
var provider = Mock.Of<IIdentityProvider>(_ => _.CurrentUser == identity);
//Act
var actual = provider.CurrentUser.IsManager();
//Assert
Assert.IsTrue(actual);
}
```
Purely for demonstrative purposes, the test for the `IsManager` extension method only really needs an `IIdentity` to be exercised.
```
Assert.IsTrue(Mock.Of<IIdentity>(_ => _.Name == "AdminUser").IsManager());
```
Upvotes: 3
|
2018/03/16
| 1,451 | 5,543 |
<issue_start>username_0: I am reading a csv file in R and it has a date column. I am using
```
as.Date(dat$date, format ="%d-%m-%Y")
```
But i am getting dates in
```
0012-02-14
```
with the year 2012 described as 0012. How to deal with this error.
I also tried lubridate package but no results
```
col1 col2 policydate
112345 Renew 02/28/2012
156566 Not Renew 03/25/2010
895414 Renew 10/01/2006
```
Something like this.
|
2018/03/16
| 602 | 1,861 |
<issue_start>username_0: I'm writing some code in R and have around 600 lines of functions right now and want to know if there is an easy way to check, if any of my functions is using global variables (which I DON'T want).
For example it could give me an error if sourcing this code:
```
example_fun<-function(x){
y=x*c
return(y)
}
x=2
c=2
y=example_fun(x)
WARNING: Variable c is accessed from global workspace!
```
**Solution to the problem with the help of @Hugh:**
===================================================
```
install.packages("codetools")
library("codetools")
x = as.character(lsf.str())
which_global=list()
for (i in 1:length(x)){
which_global[[x[i]]] = codetools::findGlobals(get(x[i]), merge = FALSE)$variables
}
```
**Results will look like this:**
--------------------------------
```
> which_global
$arrange_vars
character(0)
$cal_flood_curve
[1] "..count.." "FI" "FI_new"
$create_Flood_CuRve
[1] "y"
$dens_GEV
character(0)
...
```<issue_comment>username_1: What about emptying your global environment and running the function? If an object from the global environment were to be used in the function, you would get an error, e.g.
```
V <- 100
my.fct <- function(x){return(x*V)}
> my.fct(1)
[1] 100
#### clearing global environment & re-running my.fct <- function... ####
> my.fct(1)
Error in my.fct(1) : object 'V' not found
```
Upvotes: -1 <issue_comment>username_2: For a given function like `example_function`, you can use package `codetools`:
```
codetools::findGlobals(example_fun, merge = FALSE)$variables
#> [1] "c"
```
To collect all functions see [Is there a way to get a vector with the name of all functions that one could use in R?](https://stackoverflow.com/questions/4267744/is-there-a-way-to-get-a-vector-with-the-name-of-all-functions-that-one-could-use)
Upvotes: 3 [selected_answer]
|
2018/03/16
| 418 | 1,384 |
<issue_start>username_0: In Effective Java 3rd edition, on page 50, author has talked about total time an object lasted from its creation to the time it was garbage collected.
>
> On my machine, the time to create a simple AutoCloseable object, to close
> it using try-with-resources, and to have the garbage collector reclaim it is about
> 12 ns. Using a finalizer instead increases the time to 550 ns.
>
>
>
How can we calculate such time? Is there some reliable mechanism for calculating this time?
|
2018/03/16
| 877 | 2,777 |
<issue_start>username_0: I have a php function to get dates for a morris pie chart and it was working fine.
But now that I have more data (date for the last year and the first 3 months of this year). It's now displaying duplicate months. In this case february of last year and this years data is now showing on the same pie chart.
I'd like to write a some mysql code in php that will only display the last 12 months. I have the following code:
```
function writesql($rec) {
$year = date('Y') -1;
$month = date('m');
$lastyear = $year - $month; // I know this is a problem.It's subtracting the two variables
$sql = "";
$sql = $sql . " SELECT";
$sql = $sql . " YEAR(`value`)as 'Year',";
$sql = $sql . " MONTH(`value`)as 'Month',";
$sql = $sql . " `value` , ";
$sql = $sql . " COUNT(`value`) as 'Calls' ,";
$sql = $sql . " ROUND(SUM( `value` ),2) as 'Value'";
$sql = $sql . " FROM `table`";
$sql = $sql . " GROUP BY";
$sql = $sql . " YEAR(`value`),";
$sql = $sql . " MONTH(`value`)" ;
$sql = $sql . " ORDER BY";
$sql = $sql . " YEAR(`value`),";
$sql = $sql . " MONTH(`value`)";
$sql = $sql." WHERE (`value`)='".$lastyear."'";// I also know this is wrong too but im lost as to how to fix it.
return $sql;
}
```
Then the rest follows, I only want to get the data for the last 12 months how should I be executing this.<issue_comment>username_1: You are close. Make these changes
```
$year = date('Y') -1; // This will give last year
$sql." WHERE YEAR(`value`)='".$year."'"
```
**Updated Query**
```
$sql = "
SELECT YEAR(`value`) as 'Year', MONTH(`value`) as 'Month', `value`,
COUNT(`value`) as 'Calls', ROUND(SUM( `value` ),2) as 'Value'
FROM `table`
WHERE YEAR(`value`) = '$year'
GROUP BY YEAR(`value`), MONTH(`value`)
ORDER BY YEAR(`value`), MONTH(`value`)";
```
Upvotes: 0 <issue_comment>username_2: You could do `WHERE value >= DATE_SUB(CURDATE(),INTERVAL 1 YEAR);`
This would show 12 months up to the current date.
Upvotes: 2 <issue_comment>username_3: First off, I find this easier to read:
```
$sql = "
SELECT YEAR(value) Year
, MONTH(value) Month
, value
, COUNT(value) Calls
, ROUND(SUM(value),2) Value
FROM `table`
GROUP
BY YEAR(value)
, MONTH(value)
ORDER
BY YEAR(value)
, MONTH(value)
WHERE (value) = '".$lastyear."'
";
```
But this query is syntactically incorrect. So here's a syntactically correct version:
```
$sql = "
SELECT YEAR(`value`) Year
, MONTH(`value`) Month
, COUNT(value) Calls
, ROUND(SUM(value),2) Total_Value
FROM `table`
WHERE value = '".$lastyear."'
GROUP
BY YEAR(value)
, MONTH(value)
ORDER
BY YEAR(value)
, MONTH(value);
";
```
Now see about prepared and bound queries
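For instance, a minimal sketch with PDO (assuming a `$pdo` connection; the cutoff uses the `DATE_SUB` approach from the other answer):
```php
$stmt = $pdo->prepare("
    SELECT YEAR(`value`) AS Year, MONTH(`value`) AS Month,
           COUNT(`value`) AS Calls, ROUND(SUM(`value`), 2) AS Total_Value
    FROM `table`
    WHERE `value` >= DATE_SUB(CURDATE(), INTERVAL 1 YEAR)
    GROUP BY YEAR(`value`), MONTH(`value`)
    ORDER BY YEAR(`value`), MONTH(`value`)
");
$stmt->execute();
$rows = $stmt->fetchAll(PDO::FETCH_ASSOC);
```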
Upvotes: 0
|
2018/03/16
| 632 | 2,369 |
<issue_start>username_0: I submitted a PWA to the Microsoft Store and got the following notes on my submission:
>
> 10.8.5
> Your app or app metadata includes links that promote installation or
> purchase of software outside the Store.
> Your app may promote or distribute software only through the Microsoft Store.
>
>
>
The reason for this is that my web app has a products page with links to the various platforms the app is available on. This is so that users visiting my web app in a browser get the ability to install it as a "native" app on their platform instead.
How can I detect if my web app is running as PWA through the Microsoft Store, so that I can render a trimmed version of my app without the products page?
My first idea would be to check the `navigator.userAgent`, but this seems ambiguous, since the user agent will be Microsoft Edge whether the app is running "natively" or is visited manually in the browser.
I'd prefer solutions for distinguishing these use cases in JavaScript, but I'm also open for completely different approaches.<issue_comment>username_1: I realized that the user agent wasn't so ambiguous after all.
*Microsoft Edge* does indicate when it is running as an app host by adding `MSAppHost/` to its user agent.
Example:
--------
On my machine, the user agent of my hosted PWA lists "**MSAppHost/3.0**":
```
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; MSAppHost/3.0) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/52.0.2743.116 Safari/537.36 Edge/15.15063
```
Solution:
---------
Testing `navigator.userAgent` for the string **`MSAppHost/`** discloses if the web app is running as an **hosted app**.
I'm using this check now for my server-side and client-side rendering to strip any content linking to external stores.
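For instance, a minimal check could look like this (the helper name is illustrative):
```js
// true when running as a hosted Windows Store app
// (the user agent then contains "MSAppHost/")
function isHostedStoreApp() {
  return /MSAppHost\//.test(navigator.userAgent);
}
```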
Upvotes: 1 <issue_comment>username_2: The correct official way is to check for `window.Windows`. The whole WinRT API surface is injected when running as a Store app. So, you can (and should) do the following rather than user-agent sniffing:
```
if (window.Windows) {
// running as a Windows app
}
else {
// running in browser
}
```
Upvotes: 2 <issue_comment>username_3: You can check `document.referrer`. If it equals "app-info://platform/microsoft-store", your web app was installed from the Microsoft Store. This feature was first introduced in Edge version 91.
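A minimal sketch of that check:
```js
// Edge 91+: the referrer identifies installs from the Microsoft Store
if (document.referrer === 'app-info://platform/microsoft-store') {
  // installed from the Microsoft Store
}
```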
Upvotes: 1
|
2018/03/16
| 651 | 2,236 |
<issue_start>username_0: I have a dataframe which is of the following structure:
```
A B
Location1 1
Location2 2
1 3
2 4
```
In the above example column A is the index. I am attempting to produce a scatter plot using the index and column B. This data frame is made by resampling and averaging another dataframe like so:
```
df = df.groupby("A").mean()
```
Now obviously this sets the index equal to column A and I can plot it using the following which is adapted from here. [Use index in pandas to plot data](https://stackoverflow.com/questions/20084487/use-index-in-pandas-to-plot-data)
```
df.reset_index().plot(x = "A",y = "B",kind="scatter", figsize=(10,10))
```
Now when I run this it returns the following:
```
ValueError: scatter requires x column to be numeric
```
As the index column is intended to be a column of strings for which I can plot a scatter plot, how can I go about fixing this?
|
2018/03/16
| 480 | 2,060 |
<issue_start>username_0: If I have a project in my VirtualBox Ubuntu, what is the right way to upload it to GitHub?
Of course I need frequent source revisions, so after editing, should I compile (qmake, make) or not?
Is it enough to just revise the sources and then upload them, or to revise directly on GitHub?
Someone said that I should not compile the source before pushing to git; he said I should push the sources from BEFORE compiling.
What is the reason?
I saw lots of guides, but those guides don't say anything about compiling the local repository's sources.<issue_comment>username_1: Here is a description how to upload to github:
<https://help.github.com/articles/adding-an-existing-project-to-github-using-the-command-line/>
The other questions you need to answer for yourself.
Upvotes: 0 <issue_comment>username_2: As a general practice you want your git repository to have your sources, and any scripts needed to compile and configure your project. This way someone else on your team can pull the repository and be able to compile and run the project on their own machine. You might also want an official build machine that does this.
You also want to compile and test your changes before you commit and push to github, since you want to confirm that your changes work.
Once you compile the code yourself, you will now have various binaries locally that you don't want to upload (For example if your project is Java, you don't want to include the jars in your repository). You can use a .gitignore file to exclude those from being uploaded to github. There are examples for many different projects here: <https://github.com/github/gitignore>
If you don't have a .gitignore file, then all of your local changes will be included when you commit and push to github.
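For a qmake/make C++ project like the one described, a minimal .gitignore might look like this (illustrative; adjust to your actual build outputs):
```
# build artifacts (illustrative)
*.o
moc_*.cpp
Makefile
build/
```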
So if you have all of your source files, project files, build scripts, and a .gitignore appropriate for your project, then you can commit and push before and after compiling.
Upvotes: 2 [selected_answer]
|
2018/03/16
| 260 | 996 |
<issue_start>username_0: I can build and run the app in the emulator but when I try and generate signed apk the following happens.
```
Error:Execution failed for task ':app:lintVitalRelease'.
> Invalid main APK outputs : BuildOutput{apkInfo={type=MAIN, versionCode=0, filters=[]}, path=C:\Users\datan\StudioProjects\gamex\app\release\app-release.apk, properties=},BuildOutput{apkInfo={type=MAIN, versionCode=0, filters=[]}, path=C:\Users\datan\StudioProjects\gamex\app\release\space-release.apk, properties=}
```<issue_comment>username_1: `config` in `build.gradle`:
```
lintOptions {
checkReleaseBuilds false
abortOnError false
}
```
Upvotes: 1 <issue_comment>username_2: try below one
```
buildTypes {
release {
lintOptions {
disable 'MissingTranslation'
checkReleaseBuilds false
abortOnError false
}
minifyEnabled false
signingConfig signingConfigs.release
}
}
```
Upvotes: 4 [selected_answer]
|
2018/03/16
| 427 | 1,360 |
<issue_start>username_0: Here I want to count like values in a table. For example, in my table `Accomodation` appears two times, so the Accomodation count should come out as 2 and the remaining counts as 1.
>
> getting_fecilities
>
>
>
```
fid eventId fecilityName
1 5 Accomodation
2 5 Breakfast
3 5 Lunch
4 5 Dinner
5 6 Food(VEG)
6 5 Parking
7 5 Accomodation
```
>
> MYSQL
>
>
>
```
SELECT `fecilityName`,COUNT(*) AS count FROM getting_fecilities WHERE eventId=5 group by `fecilityName`
```
>
> MY Result
>
>
>
```
fecilityName count
Accomdation 1
Accomdation 1
Breakfast 1
Dinner 1
Lunch 1
Parking 1
```
>
> My Expected results
>
>
>
```
fecilityName count
Accomdation 2
Breakfast 1
Dinner 1
Lunch 1
Parking 1
```<issue_comment>username_1: Your query is giving the Expected Result.
Have a look at it at the below link:
>
> <http://sqlfiddle.com/#!9/ba6457/4>
>
>
>
Upvotes: 1 <issue_comment>username_2: ```
SELECT `fecilityName`, COUNT(fecilityName) AS count
FROM getting_fecilities WHERE eventId = 5
group by `fecilityName`;
```
Upvotes: 0
|
2018/03/16
| 1,002 | 3,314 |
<issue_start>username_0: I’ve worked with SAS and SQL previously I’m trying to get into R via a course. I’ve been set the following task by my tutor:
“Using the Iris dataset, write an R function that takes as its arguments an Iris species and attribute name and returns the minimum and maximum values of the attribute for that species.”
Which sounded straightforward at first, but I’ve come unstuck trying to make the function. The below is as far as I've gotten
```
#write the function
question_2 <- function(x, y, data){
new_table <- subset(data, Species==x)
themin <-min(new_table$y)
themax <-max(new_table$y)
return(themin)
return(themax)}
#test the function - Species , Attribute, Data
question_2("setosa",Sepal.Width, iris)
```
I assumed I needed quotes around the species when running the function, but I get an error that there were "no non-missing arguments to min/max", which I'm guessing means my attempt at making 'new\_table' has brought back zero observations.
Can anyone see where I'm going wrong?
edit: thanks all for the swift and insightful responses. i'll take that reading on board. thanks again!<issue_comment>username_1: Indeed, your teacher didn't give you the easiest thing to do in R. You were almost right. You can't return twice in a function.
```
question_2 <- function(x, y, data){
new_table <- subset(data, Species==x)
themin <-min(new_table[[y]])
themax <-max(new_table[[y]])
return(list(themin, themax))}
question_2("setosa","Sepal.Width", iris)
```
Upvotes: 2 <issue_comment>username_2: `df$colname` cannot be used with a variable to the right of `$`, because it will search for the column named `"colname"` (`"y"` in your case) rather than the character the variable `colname` (if it even exists) represents.
The syntax `df[["colname"]]` is useful in this case because it allows for character input (which may also be a variable representing a character). This holds for both object types `list` and `data.frame`. In fact, a `data.frame` can be seen as a list of vectors.
---
**Example**
```
df <- data.frame(col1 = 5:7, col2 = letters[1:3])
a <- "col1"
# $ vs [[
df$col1 # works because "col1" is a column of df
df$a # does not work because "a" is not a column of df
df[["col1"]] # works because "col1" is a column of df
df[[a]] # works because "col1" is a column of df
# dataframes can be seen as list of vectors
ls <- list(col1 = 5:7, col2 = letters[1:3])
ls$col1 # works
ls[[a]] # works
```
Upvotes: 1 <issue_comment>username_3: One problem is that `Sepal.Width` seems to be some object in the workspace. Otherwise R would yell at you `Object "Sepal.Width" not found.`. Whatever `Sepal.Width` (the object) is, it is probably not a character string with the value `"Sepal.Width"`. But even if it were, R would not know how to use the `$` operator to get that named column from `new_table`, not without some needlessly advanced programming. @username_1's suggestion of using `[[` is a good one.
You must pass `y` as `"Sepal.Width"`.
Another approach: you can take advantage of `subset` by writing this:
```
question_2 <- function(x, y, data){
newy <- subset(data, subset=Species==x, select=y)
themin <-min(newy)
themax <-max(newy)
return(c(themin, themax))
}
```
```
question_2("setosa","Sepal.Width", iris)
```
Upvotes: 0
|
2018/03/16
| 1,064 | 4,038 |
<issue_start>username_0: I have a sync controller method
```
public IActionResult Action(int num)
{
//operations-1
Task.Run(() => DoWork(num));
//operations-2
int a = num + 123;
return Ok(a);
}
```
and DoWork method
```
private bool DoWork(int num)
{
//operations
return true;
}
```
What I'm trying to do is run the DoWork method in the background when calling that endpoint from Postman, but I want to get the result in Postman and then debug the DoWork method (from a breakpoint in DoWork) - is it possible?
At the moment, the controller action and DoWork() execute simultaneously, but when I reach
```
return Ok(a);
```
the application waits for DoWork instead of returning the value. I have also tried
```
Task.Factory.StartNew(() => DoWork());
ThreadPool.QueueUserWorkItem(o => DoWork());
```
but result is the same.
I want DoWork method to return value but that value is not neccessary by controller action method - it will be used in different place, not connected with that.<issue_comment>username_1: Tasks are [high level threads](https://stackoverflow.com/a/4130204/5397642) that make sure you are not blocking any context.
You either want to use something like [RabbitMQ](https://www.rabbitmq.com/) or [IHostedService](https://www.stevejgordon.co.uk/asp-net-core-2-ihostedservice) from ASP.NET Core 2.0 to do fire and forget task that kick in after a request has completed.
Upvotes: 0 <issue_comment>username_2: If you use Db in project, you can use Hangfire It is easy to use background process manager. <https://www.hangfire.io/>
you can use it very easyly like `BackgroundJob.Enqueue(() => DoWork(num));`
Upvotes: 0 <issue_comment>username_3: Use a background queue sometimes is overkill.
There are a number of sites showing a way to do when you need to access the database context. The problem of Task.Run in a controller, is that the spawned task cannot access the same context as the controller uses as it may (probably will) get disposed before that Task accesses it.
You can get round this by ensuring that the sub task only references Dependencies it knows will stay alive,
either by using a
[singleton service](https://anduin.aiursoft.com/post/2020/10/14/fire-and-forget-in-aspnet-core-with-dependency-alive) or better for database context, using the [IServiceScopeFactory](https://michaelceber.medium.com/net-core-web-api-how-to-correctly-fire-and-forget-database-commands-from-controller-end-point-cad7a879630) .
The crux of this is to create a separate dependency that can handle your database context or repository. This can be injected into your controller as normal.
```
public void Execute(Func<IRepository, Task> databaseWork)
{
// Fire off the task, but don't await the result
Task.Run(async () =>
{
// Exceptions must be caught
try
{
using var scope = _serviceScopeFactory.CreateScope();
var repository = scope.ServiceProvider.GetRequiredService<IRepository>();
await databaseWork(repository);
}
catch (Exception e)
{
Console.WriteLine(e);
}
});
}
```
Then call this from your controller such as
```
// Delegate the slow task to another task on the thread pool
_fireForgetRepositoryHandler.Execute(async repository =>
{
// Will receive its own scoped repository on the executing task
await repository.DoLongRunningTaskAsync();
});
```
Upvotes: 2 <issue_comment>username_4: **Note:** This builds upon username_2's answer.
Hangfire can be used, but no actual database is required because it can work with [memory storage](https://github.com/perrich/Hangfire.MemoryStorage):
```
services.AddHangfire(opt => opt.UseMemoryStorage());
JobStorage.Current = new MemoryStorage();
```
While it has some overhead, Hangfire lets you manage these jobs, as opposed to running things asynchronously yourself and writing custom code for basics such as run time tracking, unhandled exceptions, and DI support.
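Applied to the question's controller, enqueueing could look like this (a sketch; Hangfire must be able to resolve and invoke the method, so `DoWork` is assumed to be made public here):
```
public IActionResult Action(int num)
{
    // Runs DoWork on a Hangfire worker instead of Task.Run
    BackgroundJob.Enqueue(() => DoWork(num));
    return Ok(num + 123);
}
```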
Credit: [Codidact](https://software.codidact.com/posts/285533/)
Upvotes: 1
|
2018/03/16
| 1,197 | 4,604 |
<issue_start>username_0: [](https://i.stack.imgur.com/tCv6i.png)
I am trying to make a view controller that has a scroll view and a text view. I want scrolling to be enabled only when the text content of the text view is long, because it looks ugly if I can scroll my view controller even though the text content in the text view is not too much, like the picture below.
[](https://i.stack.imgur.com/fCwyz.png)
As we can see there is a big gap at the bottom of the view controller. This is the code I use for this VC:
```
import UIKit
class NotificationDetailVC: UIViewController {
@IBOutlet weak var notificationTitleLabel: UILabel!
@IBOutlet weak var notificationImage: UIImageView!
@IBOutlet weak var notificationDateLabel: UILabel!
@IBOutlet weak var notificationContentTextView: UITextView!
var dataNotification : [String:Any]?
override func viewDidLoad() {
super.viewDidLoad()
updateUI()
updateReadStatusInUserDefault()
}
@IBAction func imageDidTapped(_ sender: Any) {
let storyboard = UIStoryboard(name: "Main", bundle: nil)
let popup = storyboard.instantiateViewController(withIdentifier: "imageDisplay") as! ReusableImageDisplayVC
popup.photo = notificationImage.image
self.present(popup, animated: true, completion: nil)
}
func updateUI() {
guard let dataNotification = dataNotification else {return}
notificationTitleLabel.text = dataNotification["Judul"] as? String
notificationContentTextView.text = dataNotification["Keterangan"] as? String
guard let notifDate = dataNotification["TglNotif"] as? String else {return}
notificationDateLabel.text = DateTimeService.changeFormat(of: notifDate, toFormat: "d MMM YYY")
let imagePath = dataNotification["Photo"] as? String
guard let encodedImagePath = imagePath?.addingPercentEncoding(withAllowedCharacters: CharacterSet.urlQueryAllowed) else {return}
if imagePath != nil {
if imagePath!.isEmpty {
notificationImage.removeFromSuperview()
} else {
notificationImage.sd_setImage(with: URL(string: encodedImagePath), placeholderImage: UIImage(named: "3-2 Img"), options: [.continueInBackground, .progressiveDownload])
}
} else {
notificationImage.removeFromSuperview()
}
}
func updateReadStatusInUserDefault() {
guard let dataNotification = dataNotification,
let notifID = dataNotification["notif_id"] as? Int else {return}
guard var readNotificationStatusUserDefault = UserDefaults.standard.object(forKey: "readNotification") as? [String:String] else {return}
readNotificationStatusUserDefault["\(notifID)"] = "1"
UserDefaults.standard.set(readNotificationStatusUserDefault, forKey: "readNotification")
UserDefaults.standard.synchronize()
}
}
```<issue_comment>username_1: Depending upon the `contentSize` of the text view, adjust the scrollable property of the scroll view.
After setting the text on the text view, check as below:
```
let height = self.notificationContentTextView.contentSize.height
if(height > scrollView.frame.size.height){
scrollView.isScrollEnabled = true
}else{
scrollView.isScrollEnabled = false
}
```
Upvotes: -1 <issue_comment>username_2: First you need to find the size of the text in your **textview**.
You can find the text size using:
```
func rectForText(text: String, font: UIFont, maxSize: CGSize) -> CGSize
{
let attrString = NSAttributedString.init(string: text, attributes: [NSAttributedStringKey.font:font])
let rect = attrString.boundingRect(with: maxSize, options: NSStringDrawingOptions.usesLineFragmentOrigin, context: nil)
let size = CGSize(width: rect.size.width, height: rect.size.height)
return size
}
```
After that, set the size of your **textview** according to your text size:
```
let contentSize : CGSize = rectForText(text: str, font:UIFont.boldSystemFont(ofSize: 15.0) , maxSize : CGSize(width: 200 * (SCREEN_WIDTH / 320), height: 20000) )
```
and set the frame of your textview:
```
var frame = self.myTextView.frame
frame.size.height = contentSize.height
self.myTextView.frame = frame
```
and finally set the contentSize of your scrollview:
```
scrollView.contentSize = CGSize(width: self.view.frame.size.width, height: myTextView.frame.size.height + 8)
```
Hope this helps! Happy coding :)
Upvotes: 3 [selected_answer]
|
2018/03/16
| 876 | 3,819 |
<issue_start>username_0: Recently I took over an android project which is built on top of MVP. While simple screens are quite straight forward and easy to read and maintain, the more complex parts of the app are not. Multiple inheritance levels have caused me days of switching between classes, trying to find out how the information flow is actually working.
Here one example of the more problematic hierarchies:
[](https://i.stack.imgur.com/MEFDR.png)
Since we use MVP, naturally there is another presenter class and another view class for each of the classes in the diagram.
So I did some research and found this article: [Composition vs Inheritance in MVP](https://stackoverflow.com/questions/14544504/composition-vs-inheritance-in-mvp)
and it's basically saying that composition should be favoured over inheritance in this situation.
What it isn't saying is how to apply that in Android. I thought about it for a while, but can't come up with a nice pattern. I could do custom views, but how would they use presenters in the end?<issue_comment>username_1: I've been in a similar situation. What I ended up doing was taking functionality out of "Base" into separate classes and then using it via composition. It was also related to using Maps in a similar structure.
You can create a new class `MyAppMapView`, it could extend a `FrameLayout` (or whatever best for your layout). This can contain all of your Map related code (including MapView), you can have custom MapView related functions here like `onStart()`, `onResume()`, `onPause()`, `onStop()`. Also, you can put everything related to maps here like markers, lines etc.
Once you have that in place you can just use `new MyAppMapView()` (or add it in xml using com.example.MyAppMapView) in your `MapViewSimpleFragment` and `MapViewDetailsFragment` and since both of these classes would be ***composing*** your custom `MyAppMapView` class, you can call all map related functions from Fragments like drawing markers, lines, onStart() etc.
Upvotes: 2 <issue_comment>username_2: Inheritance, although quite powerful, is quite easy to misuse, and when the inevitable happens, i.e. a change in requirements, inheritance is highly susceptible to breaking the [**open-closed principle**](https://en.wikipedia.org/wiki/Open/closed_principle) due to its inflexibility. The programmer is bound to modify the existing classes, which in turn breaks the client code.
This is the reason why composition is usually favored over inheritance. It provides more flexibility with changing requirements.
This design principle says exactly that:
* **Encapsulate what varies**.
Identify the aspects of your code that vary and separate them from
what stays the same. This way we can alter them without affecting the rest of the code.
Moving on to your problem, the first thing that came to my mind after I read your question was: Why not use [**Strategy Pattern!**](https://en.wikipedia.org/wiki/Strategy_pattern)
The approach you can take is:
`BaseMapViewFragment` will contain all the code that is common to all derived classes. Have separate interfaces for different kinds of behaviors (the things that vary). You can create concrete behavior classes according to your requirements. Introduce those behavior interfaces as class fields of `BaseMapViewFragment`. Now classes that extend `BaseMapViewFragment` will initialize the required behaviors with concrete behavior classes.
Whatever I said in the above paragraph might be confusing (my english isn't that good either :D), but I just explained the working of the Strategy pattern, nothing more.
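A minimal sketch of that idea in plain Java (all type names below are illustrative, not from the question's codebase):
```
// The aspect that varies, extracted behind an interface.
interface MapDecorationBehavior {
    void decorate(MapHandle map); // e.g. draw markers, lines, overlays
}

// Placeholder for whatever map object you hand to the behavior.
interface MapHandle { }

class MarkersOnly implements MapDecorationBehavior {
    @Override public void decorate(MapHandle map) { /* draw markers */ }
}

class MarkersAndRoute implements MapDecorationBehavior {
    @Override public void decorate(MapHandle map) { /* markers + route line */ }
}

abstract class BaseMapViewFragment /* extends Fragment */ {
    protected MapDecorationBehavior decoration; // initialized by subclasses

    protected void onMapReady(MapHandle map) {
        decoration.decorate(map); // the common flow stays in the base class
    }
}

class MapViewSimpleFragment extends BaseMapViewFragment {
    MapViewSimpleFragment() { decoration = new MarkersOnly(); }
}

class MapViewDetailsFragment extends BaseMapViewFragment {
    MapViewDetailsFragment() { decoration = new MarkersAndRoute(); }
}
```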
There is another design principle at play here:
* **Program to an interface, not an implementation**. The Strategy pattern uses it to implement the code that varies.
Upvotes: 3
|
2018/03/16
| 686 | 2,429 |
<issue_start>username_0: So I'm getting this error
```
~/projects/personal-projects/react/myapp ⌚ 18:12:24
$ react-native run-android --variant=release
Scanning folders for symlinks in /Users/user/projects/personal-projects/react/myapp/node_modules (22ms)
JS server already running.
Building and installing the app on the device (cd android && ./gradlew installRelease)...
FAILURE: Build failed with an exception.
* What went wrong:
A problem occurred configuring project ':app'.
> SDK location not found. Define location with sdk.dir in the local.properties file or with an ANDROID_HOME environment variable.
* Try:
Run with --stacktrace option to get the stack trace. Run with --info or --debug option to get more log output.
BUILD FAILED
Total time: 14.706 secs
Could not install the app on the device, read the error above for details.
Make sure you have an Android emulator running or a device connected and have
set up your Android development environment:
https://facebook.github.io/react-native/docs/getting-started.html
```
I already have my android sdk installed and I also have java installed.
this is what is in my `.bash_profile`:
```
export ANDROID_HOME=$HOME/Library/Android/sdk
export PATH=$PATH:$ANDROID_HOME/tools
export PATH=$PATH:$ANDROID_HOME/platform-tools
export JAVA_HOME=$Home/Library/Java/JavaVirtualMachines/jdk1.8.0_101.jdk/Contents/Home
```
I've tried adding a local.properties file with sdk.dir pointing to my Android/sdk folder, but this error still persists. How do I fix this?<issue_comment>username_1: Please create a `local.properties` file in the `android` folder, then add the following line to the file (replace the path with your own SDK location):
```
sdk.dir=/Users/apple/Library/Android/sdk
```
Upvotes: 2 [selected_answer]<issue_comment>username_2: You need to add the ANDROID_HOME and JAVA_HOME variables to your environment variables, and add tools and platform-tools to the system path, to run the Android APK.
Setting the [ANDROID_HOME](https://stackoverflow.com/a/36834833/3338287) variable in environment variables.
Setting the [JAVA_HOME](https://stackoverflow.com/a/6521412/3338287) variable in environment variables.
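For reference, a typical macOS setup in `~/.bash_profile` could look like this (the paths are examples; adjust them to your installation):
```
export ANDROID_HOME=$HOME/Library/Android/sdk
export PATH=$PATH:$ANDROID_HOME/tools:$ANDROID_HOME/platform-tools
export JAVA_HOME=$(/usr/libexec/java_home)
```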
Upvotes: 1 <issue_comment>username_3: So the solution to my problem was:
1) Add my ANDROID_HOME and JAVA_HOME paths in my .bash_profile
* Solution by @rana_sadam
2) Create `local.properties` inside the android folder and add `sdk.dir=<path to your sdk>`.
* Solution by @nisarg Thakkar
Upvotes: 0
|
2018/03/16
| 600 | 1,999 |
<issue_start>username_0: I have a "users" table as below
```
User_Id User_Name Case_Id Create_Date Close_date
--------------------------------------------------------
1 abc 65473 2018-03-14 2018-03-15
2 xyz 43526 2018-03-14 NULL
3 gst 49088 2018-03-11 NULL
```
I want to display the output as below; users who have no cases open on the current day (2018-03-16) should also be displayed:
```
User_ID User_Name Case_Count
1 abc 0
2 xyz 1
3 gst 1
```
I have written below query to achieve this
```
select user_name,isnull(count(CASE_ID),'0') as case_id from users where CLOSE_DATE is NULL group by user_name
```
But I am not getting the output I want; instead I am getting this:
```
User_ID User_Name Case_Count
2 xyz 1
3 gst 1
```
Can someone please help me with this, as I am new to SQL Server.<issue_comment>username_1: Since you have `where CLOSE_DATE is NULL`, you won't get the first row, because it has a value for the close date.
Change your query to the following to make it work:
```
select user_name, count(isnull(CASE_ID, '0')) as case_id from users group by user_name
```
Upvotes: 0 <issue_comment>username_2: Sounds like you want a conditional SUM of the number of cases that are still open?
In which case, you only want to include counts of cases where the `CLOSE_DATE IS NULL`, but don't want to ignore rows where the `CLOSE_DATE IS NOT NULL`, just show their total as 0:
```
SELECT
user_name,
SUM(CASE WHEN CLOSE_DATE IS NULL THEN 1 ELSE 0 END) AS CASE_COUNT
FROM users
GROUP BY user_name;
```
**Edit:**
You appear to have user_id in your result set but not in your source query, so you probably want to include that...
```
SELECT
user_id,
user_name,
SUM(CASE WHEN CLOSE_DATE IS NULL THEN 1 ELSE 0 END) AS CASE_COUNT
FROM users
GROUP BY user_id, user_name;
```
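With the sample data from the question, this returns:
```
user_id  user_name  CASE_COUNT
1        abc        0
2        xyz        1
3        gst        1
```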
Upvotes: 2 [selected_answer]
|
2018/03/16
| 739 | 3,257 |
<issue_start>username_0: I'm using the GCP Python API.
The GOOGLE_APPLICATION_CREDENTIALS environment variable wants the path to a GCP service account JSON key.
Is there another variable that can accept the contents of that file instead of the path? This would be convenient.<issue_comment>username_1: I recommend not relying on `GOOGLE_APPLICATION_CREDENTIALS`, but rather using the client libraries to read credentials from the runtime environment. This greatly simplifies key distribution and management, as well as security.
When you create App Engine/GKE/GCE instances, you control which service account is installed to the respective instance's metadata server (it's an option to the `create` commands). The client libraries will by default look up the credentials there. See [this page](https://cloud.google.com/compute/docs/access/create-enable-service-accounts-for-instances) and [this page](https://cloud.google.com/compute/docs/storing-retrieving-metadata) for relevant documentation regarding the metadata server. Just remember to specify the key when you create your environments, it will by default use the compute service's service account key.
For your localhost environment, you manage the default application credentials with `gcloud auth application-default login`. The proper way to use service accounts on localhost is not to download the key, but rather to grant your user "act as" privileges for the service account. This way your users always only authenticate as themselves, and the service account keys don't have to be distributed to everyone. If someone leaves your company, the only secret they've had access to is their own user.
See [this page](https://cloud.google.com/docs/authentication/production#auth-cloud-implicit-python) in the documentation for authenticating with Python.
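For example, with the google-auth library you can read the Application Default Credentials like this (a minimal sketch):
```
import google.auth

# Resolves credentials from the environment: the metadata server on
# GCE/GKE/App Engine, or gcloud application-default login locally.
credentials, project_id = google.auth.default()
```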
Upvotes: 1 <issue_comment>username_2: There is no other variable that can be used instead of GOOGLE_APPLICATION_CREDENTIALS. Most Google client code uses the GOOGLE_APPLICATION_CREDENTIALS environment variable to designate the path to the credentials JSON file to open. And [this setup](https://cloud.google.com/docs/authentication/production#providing_credentials_to_your_application) is used to determine the appropriate service accounts to be used when your application code is deployed to the Google Cloud Platform.
As suggested by red888, as an alternative to the GCP Python API you can use the [Python Client Library](https://cloud.google.com/compute/docs/tutorials/python-guide).
Upvotes: 1 <issue_comment>username_3: According to the [google-oauth docs](https://google-auth.readthedocs.io/en/master/reference/google.oauth2.service_account.html) for the service account module, you can create the credentials with one of the helper constructors for a file or also for already loaded json data.
To create credentials using a Google service account private key JSON file:
```
credentials = service_account.Credentials.from_service_account_file('service-account.json')
```
Or if you already have the content of the service account file loaded from somewhere:
```
service_account_info = json.load(open('service_account.json'))
credentials = service_account.Credentials.from_service_account_info(
service_account_info)
```
Upvotes: 2
|
2018/03/16
| 626 | 2,189 |
<issue_start>username_0: I need to fetch the data from an API and display it in a list. There is no error when I run the code, but the data is also not displayed.
```
import React, { Component } from 'react';
import {Container, List,Text, StyleProvider} from 'native-base';
import getTheme from './native-base-theme/components';
import material from './native-base-theme/variables/material';
export default class App extends Component {
state = {
data: []
};
componentDidMount() {
this.fetchData();
}
fetchData = async () => {
const response = await fetch("https://randomuser.me/api?results=10");
const json = await response.json();
this.setState({ data: json.results });
};
render() {
console.log(this.state.data);
    return (
      <List
        dataArray={this.state.data}
        keyExtractor={(x, i) => i}
        renderItem={({ item }) => (
          <Text>{`${item.name.first} ${item.name.last}`}</Text>
        )}
      />
    );
  }
}
```
The list works fine without the API data.<issue_comment>username_1: You need to set a loading state while the data is loading, and display the data once it has loaded.
Something like this:
```
import React, { Component } from "react";
import { Container, List, Text, StyleProvider } from "native-base";
import getTheme from "./native-base-theme/components";
import material from "./native-base-theme/variables/material";
export default class App extends Component {
state = {
data: []
};
componentDidMount() {
this.fetchData();
}
fetchData = async () => {
this.setState({ loading: true });
const response = await fetch("https://randomuser.me/api?results=10");
const json = await response.json();
this.setState({ data: json.results, loading: false });
};
render() {
    return (
      <Container>
        {this.state.loading ? (
          <Text>Loading</Text>
        ) : (
          <List
            dataArray={this.state.data}
            keyExtractor={(x, i) => i}
            renderItem={({ item }) => (
              <Text>{`${item.name.first} ${item.name.last}`}</Text>
            )}
          />
        )}
      </Container>
    );
}
}
```
Upvotes: 0 <issue_comment>username_2: ```
<List
  dataArray={this.state.data}
  keyExtractor={(x, i) => i}
  key={this.state.data} /// Please give key here
  renderItem={({ item }) => (
    <Text>{`${item.name.first} ${item.name.last}`}</Text>
  )}
/>
```
Please give a key to your code.
Upvotes: 0 <issue_comment>username_3: You must use List component like this : [native-base dynamic list](https://docs.nativebase.io/Components.html#dynamic-list-headref)
Upvotes: 1
|
2018/03/16
| 956 | 3,681 |
<issue_start>username_0: In package com.example.project.packageA I have a class which extends JobIntentService, defined as follows:
```
public class MyService extends JobIntentService {
static final int JOB_ID = 1000;
static final String TAG =MyService.class.getSimpleName();
public static void enqueueWork(Context context, Intent work) {
enqueueWork(context, MyService.class, JOB_ID, work);
}
@Override
public int onStartCommand(@Nullable Intent intent, int flags, int startId) {
return super.onStartCommand(intent, flags, startId);
}
@Override
public void onCreate() {
super.onCreate();
}
@Override
public void onDestroy() {
super.onDestroy();
}
/**
* We have received work to do. The system or framework is already
* holding a wake lock for us at this point, so we can just go.
*/
@Override
protected void onHandleWork(Intent intent) {
Log.i(TAG, "Started onHandleWork");
String value = intent.getExtras().getString("Key");
Log.i(TAG, value);
}
}
```
In another package, com.example.project.packageB, I want to call this service to start in the background, so I did it as:
```
Intent it = new Intent();
it.setComponent(new ComponentName("com.example.project", "com.example.project.packageA.MyService"));
it.putExtra("Key", "Value");
MyService.enqueueWork(context, it);
Log.d(TAG, "Call successful");
```
I also included in the manifest file the following permission:
```
```
However, when I run my program it doesn't look like the service is started. I can see the log message "Call successful", but I can't see the log messages from the onHandleWork function. Am I starting the service the wrong way?<issue_comment>username_1: You don't need to use a component name within your intent. Try removing this line:
```
it.setComponent(new ComponentName("com.example.project", "com.example.project.packageA.MyService"));
```
Also your code should be like this:
```
Intent it = new Intent();
it.putExtra("Key", "Value");
MyService.enqueueWork(context, it);
Log.d(TAG, "Call successful");
```
Upvotes: 0 <issue_comment>username_2: Consider starting it like this:
```
Intent it = new Intent(context, MyService.class);
it.putExtra("key", "value");
MyService.enqueueWork(context, it);
```
Upvotes: 0 <issue_comment>username_3: In later versions of Android, Google introduced the Doze feature to optimize the device's battery life (it was introduced in Android 6.0). You need to add this to your intent to ignore battery optimizations:
```
// disabling battery optimizations
Intent intent = new Intent(this, SeriesService.class);
String packageName = getPackageName();
PowerManager pm = (PowerManager) getSystemService(Context.POWER_SERVICE);
if (pm.isIgnoringBatteryOptimizations(packageName)) {
    intent.setAction(Settings.ACTION_IGNORE_BATTERY_OPTIMIZATION_SETTINGS);
} else {
    intent.setAction(Settings.ACTION_REQUEST_IGNORE_BATTERY_OPTIMIZATIONS);
    intent.setData(Uri.parse("package:" + packageName));
}
startService(intent);
```
I tried this on my OnePlus 5 (running Android Pie) and it works.
**EDIT**
Just in case, I'm adding how I'm calling my service after I've already started it:
```
Context con = getView().getContext();//activity context
Intent intent = new Intent(con,SeriesService.class);
intent.setAction(SOME_ACTION);
intent.putExtra(KEY,data);
SeriesService.enqueueWork(con,intent);
```
Upvotes: 1 <issue_comment>username_4: If you override the method onCreate(), the method onHandleWork() is never called. Just erase the onCreate() method.
Upvotes: 0
|
2018/03/16
| 883 | 2,816 |
<issue_start>username_0: I am wondering if it's possible to initialize a `std::array` of objects with an implicitly deleted default constructor, without knowing a priori the size of the array because it's a template argument and so having lost the possibility of using an initializer list. Code follows, it breaks with a "call to implicitly-deleted default constructor of `std::array`"
```
struct A {
A (int b, int c) : mb(b), mc(c) { }
int mb;
int mc;
};
template <std::size_t N>
struct B {
    B (int b, int c) :
        // <- how to initialize mAs here?
    { }
    std::array<A, N> mAs;
};
B<3> inst(1,1);
```
edit: I'd like to initialize all the `A`'s of `mAs` to `A{1,1}`<issue_comment>username_1: You may use delegating constructors and pack expansion
```
struct A {
A(int b, int c) : b(b), c(c) { }
A(const A&) = delete;
A(A&&) = delete;
int b;
int c;
};
template <std::size_t N>
struct B {
    B (int b, int c) : B(b, c, std::make_index_sequence<N>{}) {}

    template <std::size_t... Is>
    B (int b, int c, std::index_sequence<Is...>) :
        arr{(Is, A{b, c})...}
    {}

    std::array<A, N> arr;
};
```
[Live](https://godbolt.org/g/4dz3sp)
Note if the move and copy constructors are deleted, this will only work after C++17.
Upvotes: 3 <issue_comment>username_2: For both C++11 and C++14 (i.e.: pre-C++17) what you want can be achieved by means of template metaprogramming.
You could declare the following helper class template, `array_maker<>`, which has a `static` member function template, `make_array`, that calls itself recursively:
```
template <std::size_t Idx>
struct array_maker {
    template <typename T, typename... Ts>
    static std::array<T, Idx + sizeof...(Ts)> make_array(const T& v, Ts... tail) {
        return array_maker<Idx - 1>::make_array(v, v, tail...);
    }
};
```
Then, specialize this class template for the case `Idx` equal to `1`, i.e.: the *base case* of the recursion:
```
template <>
struct array_maker<1> {
    template <typename T, typename... Ts>
    static std::array<T, 1 + sizeof...(Ts)> make_array(const T& v, Ts... tail) {
        return std::array<T, 1 + sizeof...(Ts)>{v, tail...};
    }
};
```
Finally, it can be used in the constructor of your template this way:
```
template <std::size_t N>
struct B {
    B (int b, int c) : mAs{array_maker<N>::make_array(A{b, c})}
    {}
    std::array<A, N> mAs;
};
```
Upvotes: 3 [selected_answer]<issue_comment>username_3: here's a solution I came up with (requires c++17)
```
template <std::size_t index_t, typename T, typename... Ts>
constexpr auto make_array(T t, Ts... ts)
{
    if constexpr (index_t <= 1) {
        return std::array{t, ts...};
    } else {
        return make_array<index_t - 1>(t, t, ts...);
    }
}
```
This is a modification of one of the previous solutions that had an array-maker struct. This does the same, just in a more concise form.
It takes in a single item and keeps doubling down on that item until it reaches a depth of 1, then returns an array from the unfolding. I didn't realize that a fold expression could be used even though no arguments are passed through it, although I know this implicitly from things like printf.
Upvotes: 1
|
2018/03/16
| 772 | 2,505 |
<issue_start>username_0: I was wondering if this is possible:
I have a PS form with several buttons, and I want to know whether you can bind the buttons to other PS files.
Example:
The main form has 3 buttons and one of the buttons is called "Create Ad User".
When I click on the button, it goes to Ad-Form.Ps, gets the content, and generates the form.
|
2018/03/16
| 496 | 1,986 |
<issue_start>username_0: There are around 5 projects under Hyperledger. I can't find any material that talks about the distinctive features of each framework, how the frameworks differ from each other, or when to use each. It would be of great help if someone could provide that information.<issue_comment>username_1: Hyperledger is a distributed ledger technology project of the Linux Foundation with some high-profile partners (IBM and Intel among others). The 5 projects under the Hyperledger banner are: Fabric, Sawtooth, Indy, Burrow, and Iroha. [Here's a good article explaining the differences](https://www.sdxcentral.com/articles/news/whats-the-difference-between-the-5-hyperledger-blockchain-projects/2017/09/)
In a nutshell, Fabric is the actual blockchain component that allows for 'plug-ins' of different types of consensus mechanisms. Consensus mechanisms are at the heart of blockchain, and Sawtooth uses PoET, proof of elapsed time, in an effort to speed up the process while making it less resource intensive (e.g., less electricity than BTC's proof of work (PoW)). Indy is an attempt to tackle the problem of identity management, Burrow is a smart contract implementation, and Iroha folds in mobility components. [To learn more consider this edx course from the Linux foundation](https://www.edx.org/course/blockchain-business-introduction-linuxfoundationx-lfs171x)
It will answer many of your questions.
Upvotes: 3 [selected_answer]<issue_comment>username_2: As of 2019 there are more frameworks/tools under the Hyperledger umbrella.
**Update Aug 29th 2019: Hyperledger Composer has been deprecated**
Hyperledger Composer has been deprecated, and its maintainers suggested everyone use Fabric instead.
Useful links:
<https://www.hyperledger.org/projects>
<https://101blockchains.com/hyperledger-blockchain/>
The image below is taken from 101blockchains.com:
[](https://i.stack.imgur.com/Up9W2.png)
Upvotes: 0
|
2018/03/16
| 1,031 | 3,293 |
<issue_start>username_0: A small dataframe with a two-level multiindex and one column. The second level (level 1) of the index will sort in alphabetical order, putting 'Four' before 'Three'.
```
import pandas as pd
df = pd.DataFrame({'A':[1,1,2,2],
'B':['One','Two','Three', 'Four'],
'X':[1,2,3,4]},
index=range(4)).set_index(['A','B']).sort_index()
df
X
A B
1 One 1
Two 2
2 Four 4
Three 3
```
Clearly the second level of the index (B) is in alphabetical order so this can be replaced with a categorical index to force the correct ordering.
```
df.index.set_levels(pd.CategoricalIndex(df.index.levels[1],
categories=['One','Two','Three', 'Four'], ordered=True),
level=1, inplace=True)
```
With this done, inspecting the index shows that level 1 is indeed a categorical index. But sorting the index does not put the rows in the desired order.
```
df.sort_index()
X
A B
1 One 1
Two 2
2 Four 4
Three 3
```
Note: If the dataframe has a simple index of only 1 level, this works as expected.<issue_comment>username_1: I managed to get this by setting the index after the dataframe has been created - not sure if this is the best answer but it's an answer:
```
df = pd.DataFrame({'A':[1,1,2,2],
'B':['One','Two','Three', 'Four'],
'X':[1,2,3,4]})
df = df.set_index(['A', pd.CategoricalIndex(df['B'], categories=['One','Two','Three', 'Four'], ordered=True)])
del df['B']
```
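Sorting the index then respects the categorical order:
```
df.sort_index()
#          X
# A B
# 1 One    1
#   Two    2
# 2 Three  3
#   Four   4
```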
Upvotes: 3 [selected_answer]<issue_comment>username_2: If you check the index (level=1) after defining the dataframe and using `set_levels()`, you will see that its values are sorted lexicographically.
```
print(df.index.levels[1])
```
output:
```
CategoricalIndex(['Four', 'One', 'Three', 'Two'], categories=['One', 'Two', 'Three', 'Four'], ordered=True, dtype='category', name='B')
```
You can see that `['Four', 'One', 'Three', 'Two']` is lexicographically ordered.
Categorical index sorting fails when the level's values are stored in lexicographic order like this. Several tricks can be used to solve this problem in these cases.
1. You can first reset the index using `reset_index()`. Then you can make a CategoricalIndex and use it to categorize the column.
```
order = ['One','Two','Three', 'Four']
df.reset_index(inplace=True)
df['B'] = pd.CategoricalIndex(df['B'], order, ordered=True)
df = df.sort_values(['A','B']).set_index(['A','B'])
print(df.index.levels[1])
df
output:
CategoricalIndex(['One', 'Two', 'Three', 'Four'], categories=['One', 'Two', 'Three', 'Four'], ordered=True, dtype='category', name='B')
X
A B
1 One 1
Two 2
2 Three 3
Four 4
```
You can see that `['One', 'Two', 'Three', 'Four']` is now in the desired categorical order.
2. You can first create a `CategoricalIndex` and assign it to level 1 of the MultiIndex. Afterward, you need to eliminate the lexicographical sort on the one-level index. You can use two `swaplevel()` calls around `sort_index()` here as a trick: `swaplevel()` -> `sort_index()` -> `swaplevel()`.
```
df.index = df.index.set_levels(pd.CategoricalIndex(df.index.levels[1], order, ordered=True), level=1)
df = df.swaplevel(1,0).sort_index(level=0).swaplevel(1,0)
df
output:
X
A B
1 One 1
Two 2
2 Three 3
Four 4
```
3. Also, you can set the types first and then set the index.
Upvotes: 0
|
2018/03/16
| 1,867 | 6,276 |
<issue_start>username_0: On excel I consult a table from the MySQL server and then modify the format of how the data is shown. I'm currently changing the SQL so the data comes already the way I want and one part is like this:
There are 25 columns named operator1-5lot1-5 (1-5 as in 1 to 5), like this:
`operator1lot1`, `operator2lot1`, `operator3lot1`, `operator4lot1`, `operator5lot1`, `operator1lot2`, `operator2lot2`, `operator3lot2`, `operator4lot2`, `operator5lot2`, `operator1lot3`, `operator2lot3`, `operator3lot3`, `operator4lot3`, `operator5lot3`, `operator1lot4`, `operator2lot4`, `operator3lot4`, `operator4lot4`, `operator5lot4`, `operator1lot5`, `operator2lot5`, `operator3lot5`, `operator4lot5`, `operator5lot5`.
`operator1lot1` always has a value; then each next operator can have a value only if the ones before it also do (if `operator4lot1` has a value, so do operators 1 to 3 of lot 1), and the same applies to `operator1lotX` (if `operator1lot3` has a value, so does operator 1 of lots 1 and 2, but that doesn't mean that operators 2 to 5 of lots 1 and 2 have a value).
(not having a value means it's `NULL`)
Currently on Excel I `SELECT` all 25 columns, then join the values to the left side if there's space (if `operator2lot3` is `NULL` the value of `operator3lot3` is moved there, to the left), then I delete the columns that are empty (that have all their values `NULL`) from the right side of the table to the left, and finally I rename the columns from left to right to `1` to `x` (2 <= x <= 25).
Here's an example to explain it better. The database table looks like this:
[](https://i.stack.imgur.com/iVpcA.png)
And on Excel it formats to this:
[](https://i.stack.imgur.com/9ODJk.png)
My question is whether it is possible to format the data like that directly in the SQL. I don't know many SQL commands, so the closest things I can find that don't exactly do what I want are [CONCAT_WS](https://learn.microsoft.com/en-us/sql/t-sql/functions/concat-ws-transact-sql) and [AS](https://www.w3schools.com/sql/sql_alias.asp).<issue_comment>username_1: As I've stated before, this is probably easier to do in the script generating the actual Excel file.
Anyway, I do have a solution; it does not look pretty and I'm not sure this is the ideal approach, but here goes.
```
SELECT
CASE WHEN (char_length(`combinedCols`) - char_length(replace(`combinedCols`, ',', '')) + 1) > 0
THEN substring_index(`combinedCols`,',',1 )
ELSE NULL
END AS col1,
CASE WHEN (char_length(`combinedCols`) - char_length(replace(`combinedCols`, ',', '')) + 1) > 1
THEN substring_index(substring_index(`combinedCols`,',',2 ),',',-1)
ELSE NULL
END AS col2,
CASE WHEN (char_length(`combinedCols`) - char_length(replace(`combinedCols`, ',', '')) + 1) > 2
THEN substring_index(substring_index(`combinedCols`,',',3 ),',',-1)
ELSE NULL
END AS col3,
CASE WHEN (char_length(`combinedCols`) - char_length(replace(`combinedCols`, ',', '')) + 1) > 3
THEN substring_index(substring_index(`combinedCols`,',',4 ),',',-1)
ELSE NULL
END AS col4,
CASE WHEN (char_length(`combinedCols`) - char_length(replace(`combinedCols`, ',', '')) + 1) > 4
THEN substring_index(substring_index(`combinedCols`,',',5 ),',',-1)
ELSE NULL
END AS col5,
CASE WHEN (char_length(`combinedCols`) - char_length(replace(`combinedCols`, ',', '')) + 1) > 5
THEN substring_index(substring_index(`combinedCols`,',',6 ),',',-1)
ELSE NULL
END AS col6,
CASE WHEN (char_length(`combinedCols`) - char_length(replace(`combinedCols`, ',', '')) + 1) > 6
THEN substring_index(substring_index(`combinedCols`,',',7 ),',',-1)
ELSE NULL
END AS col7,
CASE WHEN (char_length(`combinedCols`) - char_length(replace(`combinedCols`, ',', '')) + 1) > 7
THEN substring_index(substring_index(`combinedCols`,',',8 ),',',-1)
ELSE NULL
END AS col8,
CASE WHEN (char_length(`combinedCols`) - char_length(replace(`combinedCols`, ',', '')) + 1) > 8
THEN substring_index(substring_index(`combinedCols`,',',9 ),',',-1)
ELSE NULL
END AS col9
FROM
(SELECT CONCAT_WS(',',`col1`,`col2`,`col3`,`col4`,`col5`,`col6`,`col7`,`col8`,`col9`) AS `combinedCols` FROM `tableName`) AS `tableNameAlias`;
```
For my example I've only put in 9 columns, but extending this to 25 or more shouldn't be too hard once you understand what I've done.
### Step 1
```
SELECT CONCAT_WS(',',`col1`,`col2`,`col3`,`col4`,`col5`,`col6`,`col7`,`col8`,`col9`) AS `combinedCols` FROM `tableName`
```
First we need to combine all columns; for that we can use `CONCAT_WS`. The first parameter of this function is the glue we're going to use, in this case `,`.
### Step 2
```
substring_index(`combinedCols`,',',1 )
```
Now we need to somehow explode these values on the `,`. Using `SUBSTRING_INDEX` we can split the value on a specified character at the specified occurrence.
### Step 3
```
substring_index(substring_index(`combinedCols`,',',2 ),',',-1)
```
After passing the first occurrence, we need to fetch the last part of (in this case) the second occurrence, so we execute the same function again, now with a negative value of `-1`.
### Step 4
```
CASE WHEN (char_length(`combinedCols`) - char_length(replace(`combinedCols`, ',', '')) + 1) > 1
```
The last part checks whether there are actually enough values. It counts the number of `,` used and, if there are enough, extracts the value; otherwise it simply returns `NULL`.
Well then hope this solves your problem.
Upvotes: 2 [selected_answer]<issue_comment>username_2: By way of example, and considering only the first 3 columns, and first 3 rows of data, a normalized schema might consist of 4 or 5 columns, as follows:
```
id operator lot entry value
1 1 1 1 117
2 1 1 2 117
3 1 1 3 129
4 2 1 1 319
5 2 1 2 319
6 3 1 2 530
```
The surrogate primary key id is not strictly necessary, because a satisfactory PK exists in the next 3 columns.
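A minimal sketch of such a table in MySQL (column names follow the example above; the types are assumptions):
```
CREATE TABLE operator_lot (
  id       INT AUTO_INCREMENT PRIMARY KEY,  -- optional surrogate key
  operator INT NOT NULL,
  lot      INT NOT NULL,
  entry    INT NOT NULL,
  value    INT NOT NULL,
  UNIQUE (operator, lot, entry)
);
```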
Upvotes: 0
|
2018/03/16
| 1,458 | 6,149 |
<issue_start>username_0: I'm currently using **git** to deploy my applications in my different hosts. My codebase being mounted into running **Docker images**:
**docker-compose.yml**
```yaml
services:
app:
image: my.registry.net:5000/my/app
volumes:
- .:/app
# [...]
```
So my deployments are like that:
```
$ git pull
$ docker-compose build && docker-compose up -d
$ docker-compose exec app some_dependencies_installation_command
$ docker-compose exec app some_cache_clearing_command
```
That seems **wrong** to me for many reasons that you can imagine. Though if something goes wrong I can still **revert** to my previous git release tag.
I'd like to simplify my deployments by baking my codebase into the image, so I **don't use git anymore** for deployments but rather rely on **image pull**. Don't hesitate to tell me if you advise against that, and why.
I'm tempted to simply add my codebase to the end of my image and mounting just the required stuff in the host:
**Dockerfile**
```sh
# [...]
ADD . /app
```
**docker-compose.prod.yml**
```yaml
services:
app:
image: my.registry.net:5000/my/app
volumes:
- ./config.yml:/app/config.yml
- ./logs:/app/logs
- ./cache:/app/cache
- ./sessions:/app/sessions
# [...]
```
So once built, I should only have to deploy my images with:
```
$ docker-compose pull && docker-compose up -d
```
Would it be a good way to go? How can I version my image so I can **revert** it if something goes wrong (since the `Dockerfile` actually **never changes**)? Also, isn't relying on one layer of the Dockerfile going to **pull the entire codebase** each time instead of only the diff (like with git)?<issue_comment>username_1: docker-compose is one orchestration tool for Docker, but there are others, like Kubernetes or Docker Swarm. You should check them out for yourself, considering where you can use these tools.
To version your images you can tag them (with `-t`). That way you will be able to roll out a specific version, e.g. `my.registry.net:5000/my/app:1.1.0`.
The build can be done locally on your machine or automated in a CI/CD pipeline.
Docker does not pull the codebase, only layers with files. If you compile your program, only the binary and some assets should be added. Layers are cached and only pulled when needed.
If you want to minimize the images in your registry, you can use multi-stage builds in Docker. This uses multiple stages to build the final container, which only serves your app and therefore needs fewer files. E.g. you can compile your binary in one stage with javac, compile the assets in another stage with nodejs and webpack, and copy the resulting files to the last container, which only has the Java runtime and the static, generated assets. The official docs are here: <https://docs.docker.com/develop/develop-images/multistage-build/>
Upvotes: 1 <issue_comment>username_2: You're right, your approach has a number of issues, including:
* [Your containers are not ephemeral](https://docs.docker.com/develop/develop-images/dockerfile_best-practices/#general-guidelines-and-recommendations)
* Each instance of your application has to wait for the build to complete before starting
* What happens if your build fails? Excluding manual testing, you only learn about failures at "deploy time"
* The build process will produce the same artifact on each run (provided you did not change the source). Why waste time repeating this process?
Instead of baking the source code of your app inside the container and rebuilding your app on each deploy, you could just build your app (maybe inside another "build" container if you want to make the build environment portable too, i.e. a container used just for building) and add the artifacts that your build produced in another image that will be deployed in your production environment (after testing and QA, because you do have multiple environments, right? :).
Docker image tagging could work this way: build an image on each commit (maybe under the latest image tag) and on each git tag (under the corresponding image tag). Example: for each commit on your master branch you "update" `your-image:latest` and when you tag the 1.0.0 version you produce the `your-image:1.0.0`.
If you have a CI service of sorts you may also automate this procedure.
Note that using the latest tag in production may lead to unstable environments because you don't know which version you are deploying. Use another more-specific tag.
UPDATE: as others have been pointing out, putting your sources inside the container is a strategy that you can apply to the development phase. But it's not your case, because I am assuming you are talking about (pre-)production deployments.
Upvotes: 1 <issue_comment>username_3: Binding source code onto the container is usually done to have a quick feedback loop while developing, by avoiding rebuilding the image whenever the code changes. This technique is mostly useful for local development purposes. For production environments this technique should not be used.
Docker images are designed to be self-contained. All the dependencies that an image needs should be packaged inside the image. Relying on bind mounts to provide the source code or dependencies is not
recommended as the resulting Docker image is not portable. Whenever you switch to a new machine, you need to transfer that source code and dependencies to that new machine.
Thus adding the source code into the image is the recommended approach. You should also benefit from the standard Docker workflow, where you build the image giving it a specific tag (usually the version), and push
it into a Docker repository. From there on, you only pull the image and start it on any machine that you have.
As for the versioning: When building the image you tag it with the source code version(or git tag or commit id).
```
docker build -t <image-name>:<tag> .
```
And you can also pull a specific version using `docker pull <image-name>:<tag>`. Using this technique, you can always revert back to any version of the image.
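For example, using the registry from the question (the version tag here is illustrative):
```
docker build -t my.registry.net:5000/my/app:1.2.0 .
docker push my.registry.net:5000/my/app:1.2.0
# on the target host:
docker pull my.registry.net:5000/my/app:1.2.0
```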
Upvotes: 3 [selected_answer]
|
2018/03/16
| 1,278 | 5,503 |
<issue_start>username_0: I am going to make a RESTful server with CodeIgniter.
I downloaded Rest_Controller.php and Format.php from GitHub and placed them in the libraries folder.
However, no request (PUT, POST, DELETE, etc.) works properly.
I used the Postman tool. The output is always
```
{"":false}
```
Help me.
[enter image description here](https://i.stack.imgur.com/ygiS2.png)
```
<?php
defined('BASEPATH') OR exit('No direct script access allowed');
require(APPPATH.'/libraries/Rest_controller.php');
class Api extends REST_Controller{
public function __construct()
{
parent::__construct();
$this->load->database();
$this->load->model('book_model');
}
function put_info()
{
if($_SERVER['REQUEST_METHOD'] == 'PUT'){
echo "this is put request\n";
var_dump($this->input->input_stream());
}
else if($_SERVER['REQUEST_METHOD'] == 'POST')
echo "this is post request\n";
else if($_SERVER['REQUEST_METHOD'] == 'DELETE')
echo "this is delete request\n";
return;
}
}
?>
```
|
2018/03/16
| 1,026 | 4,606 |
<issue_start>username_0: I want to validate that a user's phone number exists in Firebase before phone authentication. If the user exists, only then should OTP verification start; otherwise show an error. Kindly help me out please
|
2018/03/16
| 1,643 | 6,007 |
<issue_start>username_0: Introduction to the program
---------------------------
I'm working on a program which stores data like the following:
**- Facts:** A combination of a list of properties (or just one) and a true/false value (called the "truth" value). For example:
>
> parent('John' , '<NAME>') , true
>
>
>
where the property of `John` being a parent of `<NAME>` is true. The code I use for storing a fact:
```
public class Fact implements Serializable{
private boolean truth;
private ArrayList<Property> properties;
public Fact(){
truth = false;
properties = new ArrayList<>();
}
//more constructors, getters, setters and other functions which are not important for this question...
```
**- Rules:** Used to obtain information from facts. They work by having a fact (which I call the derivative fact) imply another fact (which I call the impliant fact):
>
> parent('x' , 'y') => child('y', 'x')
>
>
>
This means that by applying the rule to the fact above we can determine that `<NAME>` is a child of `John`.
**Properties of rules:**
------------------------
**negativity:**
Rules can be either "positive" (if the derivative is true, the impliant is true):
>
> parent('x' , 'y') => child('y', 'x')
>
>
>
or "negative" (if the derivative is true, the impliant is false):
>
> parent('x' , 'y') => not child('y', 'x')
>
>
>
**reversitivity:**
they can also be "reversive" (if the derivative is false, the impliant is true/false):
>
> not parent('x' , 'y') => child('y', 'x')
>
>
>
or (if also negative):
>
> not parent('x' , 'y') => not child('y', 'x')
>
>
>
or "non-reversive" (if the derivative is false, the impliant is unknown):
>
> not parent('x' , 'y') => (unknown)
>
>
>
The code I use to store rules looks like this:
```
public class Rule implements Serializable{
private Property derivative;
private Property impliant;
private boolean negative;
private boolean reversive;
public Rule(Property derivative, Property impliant) throws InvalidPropertyException{
if(!this.validRuleProperty(derivative) || !this.validRuleProperty(impliant))
throw new InvalidPropertyException("One or more properties are invalid");
this.derivative = derivative;
this.impliant = impliant;
negative = false;
reversive = false;
}
//again more constructors, getters, setters and other functions which are not important for this question...
```
**InformationSets**
-------------------
Facts and rules are stored in `InformationSets`, which bundle these facts and rules together so the user can "question" the information inside. This can be done in 2 different ways:
**1. Fact Check:** Check if a fact is true. This will, given a fact, return either true or false based on the truth value of that fact. For example the user can ask:
>
> parent('John' , '<NAME>')?
>
>
>
and the program will return `true` while:
>
> parent('<NAME>' , '<NAME>')?
>
>
>
will return `false`.
The question mark indicates the user wants to ask a 'fact check' question.
**2. Impliant Check:** This will, for a given fact (called the question-fact), return a list of facts, one for each rule which can be applied to the question-fact. For example:
>
> parent('John' , '<NAME>')=>
>
>
>
this will return:
>
> child('<NAME>', 'John')
>
>
>
The arrow indicates the user wants to ask an 'impliant check' question.
Questions are coded like this:
```
public class Question {
Fact questionFact;
String operator;
public Question(Fact questionFact, String operator) throws InvalidOperatorException{
if(!validOperator(operator))
throw new InvalidOperatorException("The operator is invalid");
this.questionFact = questionFact;
this.operator = operator;
}
```
Inside the `Operator` field I store `?` or `=>` according to which kind of question the user wants to ask.
**The Problem**
---------------
I'm currently at the point where I need to write the method that will ask questions to a given informationSet. So I would need a method that, given a question and an informationSet, returns either `true`, `false`, or a list of facts. As far as I know this is not possible in Java.
I tried to make a separate method for each question type, which kind of works, but this also causes a problem, as I would like to be able to write something like:
`informationSet.ask(question);`
and have the `ask()` method deal with both types of question so I don't have to check which kind of question the user is asking on inside the user-interface code but rather have the back end of the program work it out.
What could be a good way to do this?<issue_comment>username_1: So, the most obvious solution would be to implement your custom response class.
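For example, a minimal sketch of such a class (the name `AskResult` and its factory methods are invented here for illustration) could hold both possible outcomes:
```
import java.util.List;

public class AskResult {
    private final Boolean truth;      // non-null for fact-check answers
    private final List<Fact> facts;   // non-null for impliant-check answers

    private AskResult(Boolean truth, List<Fact> facts) {
        this.truth = truth;
        this.facts = facts;
    }

    public static AskResult ofTruth(boolean truth) {
        return new AskResult(truth, null);
    }

    public static AskResult ofFacts(List<Fact> facts) {
        return new AskResult(null, facts);
    }

    public boolean isFactCheck() {
        return truth != null;
    }

    public Boolean getTruth() {
        return truth;
    }

    public List<Fact> getFacts() {
        return facts;
    }
}
```
`ask()` would then always return an `AskResult`, and the caller checks `isFactCheck()` to know which accessor to read.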
---
Another idea would be to consider using [Either](https://static.javadoc.io/io.vavr/vavr/0.9.2/io/vavr/control/Either.html) from [Vavr](http://vavr.io) which is a solution designed to hold either (pun intended!) values of type A or B exclusively.
So, it could look like:
```
Either<Boolean, List<Fact>> result = foo();
```
Then, you can leverage FP Optional/Stream API-like for manipulating the result:
If you want to access the left side:
```
result.left()
.map(...)
.filter(...)
// ...
```
If you want to access the right side:
```
result.right()
.map(...)
.filter(...)
// ...
```
Upvotes: 3 [selected_answer]<issue_comment>username_2: In addition to *Grzegorz*'s proposition, you should keep in mind that Java is strongly typed. This doesn't mean you can't return an **Either** object, which by definition can hold different classes (types).
But in Java, I think it is better to implement more specific methods, at a narrower granularity (this has many benefits in object-oriented design, like code sharing and scalability).
So I suggest exposing two methods: one returning a boolean, and another returning the list result. The caller then performs a single condition test to decide which one to call, as sketched below.
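A hedged sketch of that idea (the placeholder bodies and the `getOperator()` accessor are assumptions for illustration, not code from the question):
```
// Two narrow methods on the information set:
public boolean askFactCheck(Question question) {
    // look up the question fact and return its stored truth value
    return false; // placeholder
}

public List<Fact> askImpliantCheck(Question question) {
    // apply each applicable rule and collect the implied facts
    return new ArrayList<>(); // placeholder
}

// The caller branches once on the operator ("?" vs "=>"):
if ("?".equals(question.getOperator())) {
    boolean truth = informationSet.askFactCheck(question);
} else {
    List<Fact> implied = informationSet.askImpliantCheck(question);
}
```
This keeps a precise return type at every call site.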
Upvotes: 0
|
2018/03/16
| 526 | 2,136 |
<issue_start>username_0: I want to open master details menu from another content page button click event?
so how to do this in Xamarin.Forms?<issue_comment>username_1: You can open the menu page by setting `MasterDetailPage.IsPresented` to true.
A simple way to trigger this is `MessagingCenter`: call `MessagingCenter.Subscribe` where you hold the `MasterDetailPage`, and `MessagingCenter.Send` in your VM.
Example
```
public class MenuPage : ContentPage
{
public MenuPage()
{
Title = "Test";
Content = new StackLayout
{
Children = {
new Label { Text = "Welcome to Xamarin.Forms!" }
}
};
}
}
public class DetailPage : ContentPage
{
public DetailPage()
{
var button = new Button()
{
Text = "Menu"
};
button.Clicked += Button_Clicked;
Content = new StackLayout
{
Children = {
button
}
};
}
private void Button_Clicked(object sender, EventArgs e)
{
MessagingCenter.Send(EventArgs.Empty, "OpenMenu");
}
}
public partial class App : Application
{
public App()
{
InitializeComponent();
var masterDetailPage = new MasterDetailPage();
masterDetailPage.Master = new MenuPage();
masterDetailPage.Detail = new NavigationPage( new DetailPage());
MainPage = masterDetailPage;
MessagingCenter.Subscribe<EventArgs>(this, "OpenMenu", args =>
{
masterDetailPage.IsPresented = true;
});
}
}
```
Upvotes: 2 <issue_comment>username_2: On the page from which you want to open the drawer menu, you have to call `MessagingCenter.Send` in a click event or wherever you want. In my case, I set this in an image tapped event.
```
public void ToggleDrawer(object sender, System.EventArgs e)
{
MessagingCenter.Send(EventArgs.Empty, "OpenMenu");
}
```
Now, on the MasterDetailPage (MainPage.xaml.cs in my case) you have to put a `MessagingCenter.Subscribe` call on that page, above `InitializeComponent()`.
```
MessagingCenter.Subscribe<EventArgs>(this, "OpenMenu", args =>
{
IsPresented = !IsPresented;
});
```
That works to me.
Upvotes: 2
|
2018/03/16
| 559 | 1,873 |
<issue_start>username_0: I have a file which contains a list of names stored in a simple text file. Each row contains one name. Now I need to programmatically append a new name to this file based on a user's input.
For the input itself I use DataBricks widgets - this is working just fine and I have the new name stored in a string object.
Now I need to append this name to my file.
the file is mounted in the DataBricks File System (DBFS) under /mnt/blob/myNames.txt
when trying to read the file like this:
```
f = open("/mnt/blob/myNames.txt", "r")
print f
```
it returns an error "No such file or directory"
So I tried to wrap my new name into a DataFrame and append it to the existing file, but this also did not work, as `dataframe.write.save` is designed to write into folders.
What would be the simplest Python code that I could use to append this new name to my file?<issue_comment>username_1: You can write and read files from DBFS with **dbutils**. Use the **dbutils.fs.help()** command in Databricks to access the help menu for DBFS.
You would therefore append your name to your file with the following command:
```
dbutils.fs.put("/mnt/blob/myNames.txt", new_name)
```
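One caveat worth noting: `dbutils.fs.put` writes the complete contents of the file rather than appending, and it refuses to replace an existing file unless the overwrite flag is passed:
```
# the third argument is overwrite=True; the whole file is rewritten with new_name
dbutils.fs.put("/mnt/blob/myNames.txt", new_name, True)
```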
You are getting the "No such file or directory" error because the DBFS path is not being found. Use **dbfs:/** to access a DBFS path. This is how you should have read the file:
```
f = open("/dbfs/mnt/blob/myNames.txt", "r")
```
Upvotes: 6 [selected_answer]<issue_comment>username_2: You can open the file in append mode using 'a'
```
with open("/dbfs/mnt/sample.txt", "a") as f:
f.write("append values")
```
Now you can view the contents using read mode 'r'
```
with open("/dbfs/mnt/sample.txt", "r") as f_read:
for line in f_read:
print(line)
```
Solution: [Here](https://unmeshasreeveni.blogspot.com/2020/10/how-to-append-content-to-dbfs-file.html)
Upvotes: 4
|
2018/03/16
| 1,265 | 3,970 |
<issue_start>username_0: I have 3 tables:
* `NETWORK_OPERATOR`s;
* `NETWORK_CELL`s: each of them belongs to one `NETWORK_OPERATOR`;
* `IRI`s: each of them can have **either**:
1. a *Network Operator* **or**
2. a *Network Cell*
but one of 1) and 2) is mandatory.
In case of 1) the `netOpId` must exists in `NETWORK_OPERATOR` table;
In case of 2) the `cellId`+`netOpId` must exist in `CELL` table;
Here is a sample DDL code:
```
CREATE TABLE "NETWORK_OPERATOR" (
"NETOPID" INTEGER NOT NULL,
"NAME" VARCHAR2(20),
CONSTRAINT "NETWORK_OPERATOR_PK" PRIMARY KEY ("NETOPID")
);
CREATE TABLE "NETWORK_CELL" (
"CELLID" INTEGER NOT NULL,
"NETOPID" INTEGER NOT NULL,
"NAME" VARCHAR2(20),
CONSTRAINT "NETWORK_CELL_PK" PRIMARY KEY ("CELLID"),
CONSTRAINT "CELL_NETOPS_FK" FOREIGN KEY ("NETOPID") REFERENCES "NETWORK_OPERATOR" ("NETOPID")
);
CREATE TABLE "IRI" (
"IRIID" INTEGER NOT NULL,
"NETOPID" INTEGER,
"CELLID" INTEGER,
"NAME" VARCHAR2(20),
CONSTRAINT "IRI_PK" PRIMARY KEY ("IRIID"),
CONSTRAINT "IRI_NETOPS_FK" FOREIGN KEY ("NETOPID") REFERENCES "NETWORK_OPERATOR" ("NETOPID")
);
```
**In other words**,
a `NETWORK_CELL` is itself always bound to a `NETWORK_OPERATOR`, so that **IF** a `IRI` has a `netOpId` it should be enforced to be an existing `netOpId`, **ELSE IF** a `IRI` has a `cellId`+`netOpId` it should be enforced to be an existing `cellId`+`netOpId`
I see 2 options:
**Option 1:**
Make only `IRI.NETOPID` non-nullable and add a composite FK
```
CREATE TABLE "IRI" (
...
"NETOPID" INTEGER NOT NULL,
"CELLID" INTEGER,
...
CONSTRAINT "IRI_CELL_FK" FOREIGN KEY ("CELLID", "NETOPID") REFERENCES "NETWORK_CELL" ("CELLID", "NETOPID")
)
```
(of course there will be a Unique key on `"NETWORK_CELL" ("CELLID", "NETOPID")`)
In other words, an IRI will have a mandatory FK relationship with a Network Operator, and an optional FK relationship with a Network Cell.
The "suspect" thing is that this "optional" FK is composed by a mandatory field and an optional one, on IRI side.
*Oracle RDBMS accepts this (I just tried), but is it a good practice?*
**Option 2:**
Same FK as in option 1, but leave `IRI.NETOPID` nullable and add a custom constraint that enforces **either** `netOpId` **or** `netOpId`+`cellId`
*I feel this solution is more portable, but maybe I'm wrong.*
**The question**
Are there better options?
What's the best practice to deal with this situation and why?
I'm thinking about portability to other RDBMS, too...
Thank you<issue_comment>username_1: According to my understanding, one solution could be this:
```
CREATE TABLE IRI (
IRIID INTEGER NOT NULL,
NETOPID INTEGER,
CELLID INTEGER,
NAME VARCHAR2(20),
CONSTRAINT IRI_PK PRIMARY KEY (IRIID),
CONSTRAINT IRI_NETOPS_FK FOREIGN KEY (NETOPID) REFERENCES NETWORK_OPERATOR (NETOPID),
CONSTRAINT IRI_CELLS_FK FOREIGN KEY (CELLID) REFERENCES NETWORK_CELL (CELLID),
CONSTRAINT IRI_CELL_OR_NETOP CHECK ( NVL(NETOPID, CELLID) IS NOT NULL )
)
```
If you'd like to enforce that only one of the values is set, you can use
```
CHECK ( NVL(NETOPID, CELLID) IS NOT NULL AND NOT (NETOPID IS NOT NULL AND CELLID IS NOT NULL) )
```
or
```
CHECK ( NVL(NETOPID, CELLID) IS NOT NULL AND NETOPID||CELLID IN (NETOPID, CELLID) )
```
or
```
CHECK ( (NETOPID IS NULL AND CELLID IS NOT NULL) OR (NETOPID IS NOT NULL AND CELLID IS NULL) )
```
Upvotes: 0 <issue_comment>username_2: Your option 1 is OK. Under the default FK (foreign key) matching mode, MATCH SIMPLE (usually the only one implemented), a FK subrow value containing any NULLs satisfies its constraint. So you can have IRI FKs (netid) & (netid, cellid)--plus netid NOT NULL. (You seem to have forgotten the NOT NULL in your first IRI though not the second.)
Then the only cases for the column pair are (non-null, null) & (non-null, non-null). A netid must exist; a non-null cellid must exist with that netid & a NULL cellid is OK.
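To illustrate with the tables from the question (assuming operator 1 and cell 1 already exist), both of these rows satisfy the option 1 constraints:
```
-- netOpId only: the composite FK is ignored because CELLID is NULL
INSERT INTO IRI (IRIID, NETOPID, CELLID, NAME) VALUES (1, 1, NULL, 'op only');
-- netOpId + cellId: the pair must exist in NETWORK_CELL
INSERT INTO IRI (IRIID, NETOPID, CELLID, NAME) VALUES (2, 1, 1, 'op and cell');
```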
Upvotes: 3 [selected_answer]
|
2018/03/16
| 733 | 2,324 |
<issue_start>username_0: ```
String test1 = "test";
String test2 = "test";
System.out.println(test1 == test2); // true
```
test1 and test2 point to the same object, so the outcome is true.
```
String test1 = new String("test");
String test2 = new String("test");
System.out.println(test1 == test2); // false
```
test1 and test2 point to different objects, so the outcome is false.
So the question is: what is the difference between
```
int[] test = {1,2,3}; // literal
int[] test = new int[] {1,2,3}; // non-literal
```
I am confused by this, since
```
int[] test1 = new int[]{1,2,3};
int[] test2 = new int[]{1,2,3};
System.out.println(test1 == test2); // false
```
and
```
int[] test1 = {1,2,3};
int[] test2 = {1,2,3};
System.out.println(test1 == test2); // also prints false
```
I expected that the latter case's outcome would be true, for the same reason as in the String example above.
Are test1 and test2 pointing at different array objects?<issue_comment>username_1: Java `int[]` arrays are **not** ever [intern'd](https://en.wikipedia.org/wiki/String_interning). Only `String` (and the wrapper types, for a limited range of values) are. **tl;dr** Don't compare object equality with `==`; for reference types that only compares references. Array equality can be determined with [`Arrays.equals(int[], int[])`](https://docs.oracle.com/javase/8/docs/api/java/util/Arrays.html#equals-int:A-int:A-) and `String` equality with `String.equals`. Arrays don't override `Object#equals(Object)`.
```
int[] test1 = { 1, 2, 3 };
int[] test2 = { 1, 2, 3 };
System.out.println(Arrays.equals(test1, test2)); // <-- true
System.out.println(test1.equals(test2)); // <-- false
```
As for why they differ - Java `String` is immutable (as are the primitive types). Here we can change a value in one of the arrays. We would be surprised (as users) if the other also changed.
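A quick illustration of that point:
```
int[] a = {1, 2, 3};
int[] b = {1, 2, 3};
a[0] = 99;
System.out.println(a[0]); // 99
System.out.println(b[0]); // still 1, since they are two separate array objects
```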
Upvotes: 2 <issue_comment>username_2: For declaring an array, both ways are possible:
```
int[] a = new int[] {1,2,3,4,5};
int[] b = {7,8,9,10};
```
But after the declaration, only the first way can be used to assign a new array to an existing variable of the same type; the second way cannot:
```
a = new int[] {1,1,1,1}; // ok
b = {2,2,2,2,2}; // error: illegal start of expression
```
Upvotes: 0
|
2018/03/16
| 382 | 1,181 |
<issue_start>username_0: Here is JSON:
```
{
"first":0,
"rows":100,
"data":[
{
"id":326,
"tag":"QATA9",
"workNo":"qat12345"
}
],
"totalRecords":1
}
```
And my code is :
```
JsonPath jsonPathEvaluator = response.jsonPath();
wID = jsonPathEvaluator.get("data.id");
System.out.println("id is "+ wID);
String responseBody = response.getBody().asString();
int statusCode = response.getStatusCode();
```
In output it shows
[326]
But I need only the value 326<issue_comment>username_1: The `[]` delimits an array, so the library is treating it as an array. Just pick the first element, and you should be fine.
Try this:
```
JsonPath jsonPathEvaluator = response.jsonPath();
wID = jsonPathEvaluator.get("data.id")[0];
System.out.println("id is "+ wID);
```
Then again, you should also keep in mind that the fact that an array was used in the first place may indicate that you can have more than one element; in that case, you should simply loop through the array.
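In that case, a sketch of the loop could use REST Assured's `getList`, which returns the whole column:
```
List<Integer> ids = jsonPathEvaluator.getList("data.id");
for (Integer id : ids) {
    System.out.println("id is " + id);
}
```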
Upvotes: 3 [selected_answer]<issue_comment>username_2: try this
```
JsonPath jsonPathEvaluator = response.jsonPath();
wID = jsonPathEvaluator.get("data[0].id");
System.out.println("id is "+ wID);
```
Upvotes: 1
|
2018/03/16
| 797 | 2,622 |
<issue_start>username_0: My first request returns a JSON response body which looks like this:
```
{"data":{"userId":"USR-0000000000000001","accessToken":"<KEY>","refreshToken":"<KEY>","expiresIn":"2018-03-16 20:14:00","tokenType":"bearer"}
```
I extract the value of the "accessToken" attribute using the
"Regular Expression Extractor":
[regular expression](https://i.stack.imgur.com/PblWO.png)
Then I pass the "Authorization" variable to the header, this step i need to put accesstoken to get information on my profile:
[enter image description here](https://i.stack.imgur.com/uW2Ef.png)
But then it shows me this response in the results tree
[enter image description here](https://i.stack.imgur.com/HuWhM.png)
and the response shows
```
"{"error":{"errorCode":1001,"errorMessage":"Authentication failed"}}"
```
What I want is for the next request to successfully read the access token. Can somebody help if there's something wrong with this?
Thanks<issue_comment>username_1: The JSON Extractor can also be used to extract the JSON response value. The images below show how to extract the value from the JSON, store it in a variable, and use it for the header request.
[](https://i.stack.imgur.com/uiowl.png)
[](https://i.stack.imgur.com/wUUqY.png)
Upvotes: 2 <issue_comment>username_2: You should put the name of the created variable in the Regular Expression Extractor.
In the Regular Expression Extractor, change the name from `Authorization` to `accessToken`, because that is the name of the variable you created and are using.
Upvotes: 1 [selected_answer]<issue_comment>username_3: 1. You need to change the header value to `Bearer ${Authorization}`; as per [RFC 6750](https://www.rfc-editor.org/rfc/rfc6750#section-6.1.1) it should start with `Bearer`, and header values MAY be case-sensitive
2. It makes more sense to go for the [JSON Extractor](http://jmeter.apache.org/usermanual/component_reference.html#JSON_Extractor) instead of the Regular Expression Extractor when it comes to [JSON](https://en.wikipedia.org/wiki/JSON) data. You should be able to extract the token value using a simple [JSON Path](https://github.com/json-path/JsonPath) query like:
[](https://i.stack.imgur.com/bt9SD.png)
More information: [API Testing With JMeter and the JSON Extractor](https://www.blazemeter.com/blog/api-testing-with-jmeter-and-the-json-extractor)
Upvotes: 0
|
2018/03/16
| 798 | 2,652 |
<issue_start>username_0: I have a simple HTML file, from which I want to load a .js file.
I have these files (files are in the same folder):
start.js
```
var http = require('http');
var fs = require('fs');
http.createServer(function (req, response) {
fs.readFile('index.html', 'utf-8', function (err, data) {
response.writeHead(200, { 'Content-Type': 'text/html' });
response.write(data);
response.end();
});
}).listen(1337, '127.0.0.1');
console.log('Server running at http://127.0.0.1:1337/');
```
index.html
```
main project demo
```
and SetData.js
```
console.log("Its me");
```
I'm using Node.js, so I start my project with `node start.js`
In index.html I want to include the local SetData.js file with
```
```
But nothing shows on the web page, only this error
[](https://i.stack.imgur.com/UzAPr.png)
I already tried calling the .js file from another folder, and calling it from the body section. Always the same error.
How can I load a local .js file from HTML?
|
2018/03/16
| 2,682 | 8,114 |
<issue_start>username_0: I have prepared an [SQL Fiddle](http://sqlfiddle.com/#!17/8f294/1) for my question -
In a 2-player word game I store players and their games in the 2 tables:
```
CREATE TABLE players (
uid SERIAL PRIMARY KEY,
name text NOT NULL
);
CREATE TABLE games (
gid SERIAL PRIMARY KEY,
player1 integer NOT NULL REFERENCES players ON DELETE CASCADE,
player2 integer NOT NULL REFERENCES players ON DELETE CASCADE
);
```
And the letter tiles placing moves and resulting words and scores are stored in another 2 tables:
```
CREATE TABLE moves (
mid BIGSERIAL PRIMARY KEY,
uid integer NOT NULL REFERENCES players ON DELETE CASCADE,
gid integer NOT NULL REFERENCES games ON DELETE CASCADE,
played timestamptz NOT NULL,
tiles jsonb NOT NULL
);
CREATE TABLE scores (
mid bigint NOT NULL REFERENCES moves ON DELETE CASCADE,
uid integer NOT NULL REFERENCES players ON DELETE CASCADE,
gid integer NOT NULL REFERENCES games ON DELETE CASCADE,
word text NOT NULL CHECK(word ~ '^[A-Z]{2,}$'),
score integer NOT NULL CHECK(score >= 0)
);
```
Here I fill the above tables with test data containing a game and 2 players (Alice and Bob):
```
INSERT INTO players (name) VALUES ('Alice'), ('Bob');
INSERT INTO games (player1, player2) VALUES (1, 2);
```
Their alternating moves are below; sometimes a single move can produce 2 words:
```
INSERT INTO moves (uid, gid, played, tiles) VALUES
(1, 1, now() + interval '1 min', '[{"col": 7, "row": 12, "value": 3, "letter": "A"}, {"col": 8, "row": 12, "value": 10, "letter": "B"}, {"col": 9, "row": 12, "value": 1, "letter": "C"}, {"col": 10, "row": 12, "value": 2, "letter": "D"}]
'::jsonb),
(2, 1, now() + interval '2 min', '[{"col": 7, "row": 12, "value": 3, "letter": "X"}, {"col": 8, "row": 12, "value": 10, "letter": "Y"}, {"col": 9, "row": 12, "value": 1, "letter": "Z"}]
'::jsonb),
(1, 1, now() + interval '3 min', '[{"col": 7, "row": 12, "value": 3, "letter": "K"}, {"col": 8, "row": 12, "value": 10, "letter": "L"}, {"col": 9, "row": 12, "value": 1, "letter": "M"}, {"col": 10, "row": 12, "value": 2, "letter": "N"}]
'::jsonb),
(2, 1, now() + interval '4 min', '[]'::jsonb),
(1, 1, now() + interval '5 min', '[{"col": 7, "row": 12, "value": 3, "letter": "A"}, {"col": 8, "row": 12, "value": 10, "letter": "B"}, {"col": 9, "row": 12, "value": 1, "letter": "C"}, {"col": 10, "row": 12, "value": 2, "letter": "D"}]
'::jsonb),
(2, 1, now() + interval '6 min', '[{"col": 7, "row": 12, "value": 3, "letter": "P"}, {"col": 8, "row": 12, "value": 10, "letter": "Q"}]
'::jsonb);
INSERT INTO scores (mid, uid, gid, word, score) VALUES
(1, 1, 1, 'ABCD', 40),
(2, 2, 1, 'XYZ', 30),
(2, 2, 1, 'XAB', 30),
(3, 1, 1, 'KLMN', 40),
(3, 1, 1, 'KYZ', 30),
(5, 1, 1, 'ABCD', 40),
(6, 2, 1, 'PQ', 20),
(6, 2, 1, 'PABCD', 50);
```
As you can see above, the `tiles` column is always a JSON list of objects.
But I only need to retrieve a single property of the objects: `letter`.
So here is my SQL-code (to be used in PHP script displaying player moves in a certain game):
```
SELECT
STRING_AGG(x->>'letter', ''),
STRING_AGG(y, ', ')
FROM (
SELECT
JSONB_ARRAY_ELEMENTS(m.tiles) AS x,
FORMAT('%s (%s)', s.word, s.score) AS y
FROM moves m
LEFT JOIN scores s
USING (mid)
WHERE m.gid = 1
GROUP BY mid, s.word, s.score
ORDER BY played ASC
) AS z;
```
Unfortunately, it does not work as expected.
Both [STRING\_AGG](https://www.postgresql.org/docs/10/static/functions-aggregate.html) calls put everything together into two huge strings, despite me trying to `GROUP BY mid`:
[](https://i.stack.imgur.com/KPamX.png)
Is there please a way to split the resulting strings by a `mid` (aka move id)?
**UPDATE:**
My problem isn't the sorting. My problem is that I get 2 huge strings, while I would expect multiple strings, one pair per move id (aka `mid`).
Here is my expected output; does anybody have a suggestion on how to achieve it?
```
mid "concatenated 'letter' from JSON" "concatenated words and scores"
1 'ABCD' 'ABCD (40)'
2 'XYZ' 'XYZ (30), XAB (30)'
3 'KLMN' 'KLMN (40), KYZ (30)'
5 'ABCD' 'ABCD (40)'
6 'PQ' 'PQ (20), PABCD (50)'
```
**UPDATE #2:**
I have followed the suggestion by Laurenz (thank you! Here is the [SQL Fiddle](http://sqlfiddle.com/#!17/8f294/8)) with:
```
SELECT
mid,
STRING_AGG(x->>'letter', '') AS tiles,
STRING_AGG(y, ', ') AS words
FROM (
SELECT
mid,
JSONB_ARRAY_ELEMENTS(m.tiles) AS x,
FORMAT('%s (%s)', s.word, s.score) AS y
FROM moves m
LEFT JOIN scores s
USING (mid)
WHERE m.gid = 1
) AS z
GROUP BY mid
ORDER BY mid;
```
But for some reason the "word (score)" entries are multiplied:
[](https://i.stack.imgur.com/HHfoz.png)<issue_comment>username_1: If you want to group by `mid`, you have to add that column to the `SELECT` list of the inner query and add `GROUP BY mid` to the outer query.
You can use `DISTINCT` inside the aggregate to remove duplicates:
```
SELECT
mid,
STRING_AGG(DISTINCT x->>'letter', '') AS tiles,
STRING_AGG(DISTINCT y, ', ') AS words
FROM (
SELECT
mid,
JSONB_ARRAY_ELEMENTS(m.tiles) AS x,
FORMAT('%s (%s)', s.word, s.score) AS y
FROM moves m
LEFT JOIN scores s
USING (mid)
WHERE m.gid = 1
) AS z
GROUP BY mid
ORDER BY mid;
```
Upvotes: 3 [selected_answer]<issue_comment>username_2: If you want the results in a particular ordering, use the `order by` clause in the aggregation call, as described in the documentation:
```
SELECT STRING_AGG(x->>'letter', '' ORDER BY played),
STRING_AGG(y, ', ' ORDER BY played)
FROM (SELECT JSONB_ARRAY_ELEMENTS(m.tiles) AS x,
FORMAT('%s (%s)', s.word, s.score) AS y
FROM moves m LEFT JOIN
scores s
USING (mid)
WHERE m.gid = 1
GROUP BY mid, s.word, s.score
) z;
```
As for using a subquery, note the [documentation](https://www.postgresql.org/docs/10/static/functions-aggregate.html):
>
> This ordering is unspecified by default, but can be controlled by
> writing an ORDER BY clause within the aggregate call, as shown in
> Section 4.2.7. Alternatively, supplying the input values from a sorted
> subquery will **usually** work.
>
>
>
I guess you have found a case where "usually" doesn't apply. The safer method is the one using explicit syntax.
EDIT:
Your outer query is an aggregation query that returns one row. So everything is brought together.
If you want one row per `mid`, you need a `GROUP BY` in the outer query:
```
SELECT STRING_AGG(x->>'letter', '' ORDER BY played),
STRING_AGG(y, ', ' ORDER BY played)
FROM (SELECT JSONB_ARRAY_ELEMENTS(m.tiles) AS x,
FORMAT('%s (%s)', s.word, s.score) AS y
FROM moves m LEFT JOIN
scores s
USING (mid)
WHERE m.gid = 1
GROUP BY mid, s.word, s.score
) z
GROUP BY mid;
```
Upvotes: 2 <issue_comment>username_3: I have been able to get rid of DISTINCT by using CTEs (here is the [SQL Fiddle](http://sqlfiddle.com/#!17/4ef8b/50)):
```
WITH cte1 AS (
SELECT
mid,
STRING_AGG(x->>'letter', '') AS tiles
FROM (
SELECT
mid,
JSONB_ARRAY_ELEMENTS(tiles) AS x
FROM moves
WHERE gid = 1
) AS z
GROUP BY mid),
cte2 AS (
SELECT
mid,
STRING_AGG(y, ', ') AS words
FROM (
SELECT
mid,
FORMAT('%s (%s)', word, score) AS y
FROM scores
WHERE gid = 1
) AS z
GROUP BY mid)
SELECT
mid,
tiles,
words
FROM cte1
JOIN cte2 using (mid)
ORDER BY mid ASC;
```
[](https://i.stack.imgur.com/Rwu5e.png)
Upvotes: 0
|
2018/03/16
| 797 | 2,631 |
<issue_start>username_0: I have written my code to show a list of elements on my page. I have also written JavaScript code to slice the elements, so my page displays 5 elements initially, and every time a user clicks the show-more link an additional 5 elements get displayed. My question: when I click show more, I would like to have focus on the 6th element of the list, and likewise the 11th, 16th, etc. Currently, I do not receive focus on the 6th element of the list.
Here is a snippet:
```
$(document).ready(function(){
$('ul li:gt(6)').hide();
$('#showMore').click(function() {
$('#payment li:hidden').slice(0, 5).css("display", "list-item");
$('#payment li:visible:last').focus();
// To illustrate that the last LI gets selected properly
$('#payment li:visible:last').css({"background" : "red"});
});
});
Page Title
* xyz
* xyz
* xyz
* xyz
* xyz
* xyz
* xyz
* xyz
- [Show More](javascript:;)
```
Thanks<issue_comment>username_1: Well, I don't know how you are creating/populating the `<li>` elements, but to focus an item using JavaScript, you would use:
```
document.getElementById("showMore").onclick = function focus(){
document.getElementByClassName("li.item").focus();
}
```
`.item` would be a class assigned to every `<li>` that you want focused (the 6th, 11th, etc.).
How you define that class depends on how you are creating/populating them.
Upvotes: 2 [selected_answer]<issue_comment>username_2: I did change your HTML to show the button outside the UL, but you should of course alter it a bit to suit your needs; maybe define the list items that aren't the button with classes and include those in the selectors.
<https://jsfiddle.net/wfL61L8o/7/>
HTML
```
* xyz 1
* xyz 2
* xyz 3
* xyz 4
* xyz 5
* xyz 6
* xyz 7
* xyz 8
* xyz 9
* xyz 10
* xyz 11
* xyz 12
* [${text.paymentMethod.showMore}](javascript:;)
```
JAVASCRIPT
```
$(function() {
if ($('#payment li').length <= 6) {
$('#showMore').hide();
}
for(var i = 0; i < $('#payment li').length; i++) {
if(i > 5) {
$('#payment li').eq(i).hide()
}
}
$('#showMore').click(function() {
$('#payment li:hidden').slice(0, 5).css("display", "list-item");
$('#payment li:visible:last').focus();
// To illustrate that the last LI gets selected properly
$('#payment li:visible:last').css({"background" : "red"});
});
});
```
Upvotes: 0 <issue_comment>username_3: Maybe I'm stating the obvious, but an `<li>` element is not natively focusable. If you want to programmatically move the focus to an `<li>`, it must have `tabindex="-1"`.
```
<li tabindex="-1">xyz</li>
```
Now the focus() method should work.
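Combined with the click handler from the question, a minimal sketch could look like this:
```
$('#showMore').click(function () {
  var $revealed = $('#payment li:hidden').slice(0, 5).css('display', 'list-item');
  // make the first newly revealed item focusable, then focus it
  $revealed.first().attr('tabindex', -1).focus();
});
```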
Upvotes: 2
|
2018/03/16
| 339 | 1,338 |
<issue_start>username_0: I want to send the JSON array from one view controller to another view controller, and the array should be used to populate a picker view. But I am unable to send the array; I am getting the array but not able to send it.
```
let mydata = json["data"] as! NSArray
print("My Data is \(mydata)")
var sendData = [NSArray]()
sendData = mydata as! Array
self.performSegue(withIdentifier: "checkLoginViewController", sender: sendData)
override func prepare(for segue: UIStoryboardSegue, sender: Any?) {
if let destinationVC = segue.destination as? SignupViewController, let sendData = sender as? [String]{
destinationVC.dept = sendData
}
}
```
secondVC:
```
var dept = [String]()
```<issue_comment>username_1: The problem is that you use `Array`
```
sendData = mydata as! Array
```
and in prepare cast it like
```
let sendData = sender as? [String]
```
and that cast is guaranteed to fail, since the sender holds a `[NSArray]`, not a `[String]`.
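A minimal sketch of one way to make the types line up (this assumes the elements of `data` are actually strings; the key and segue identifier are taken from the question):
```
if let mydata = json["data"] as? [String] {
    self.performSegue(withIdentifier: "checkLoginViewController", sender: mydata)
}
```
With the sender being a `[String]`, the `sender as? [String]` cast in `prepare(for:sender:)` succeeds.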
Upvotes: 2 <issue_comment>username_2: ```
override func prepare(for segue: UIStoryboardSegue, sender: Any?) {
if let destinationVC = segue.destination as? SignupViewController, let sendData = sender as? {
destinationVC.dept = sendData
}
}
```
try this code
Upvotes: 0
|
2018/03/16
| 1,038 | 3,229 |
<issue_start>username_0: ```
public static List<Integer> removeOddNumbers(ArrayList<Integer> list) {
if (list.isEmpty()) { throw new Error(); }
List<Integer> toRemove = new ArrayList<>();
for (int i : list) {
if (i % 2 != 0) { toRemove.add(i); }
}
list.removeAll(toRemove);
return list;
}
```
I'm trying to remove all odd elements from an ArrayList and then returning that ArrayList.
I'm getting an error pointing to `List<Integer>` in this first line
**Test:**
```
ArrayList<Integer> arrayList = new ArrayList<>();
Collections.addAll(arrayList, 3, 5, 6, 24, 7, 9, 1, 8, 28, 11);
ArrayList<Integer> result = removeOddNumbers(arrayList);
System.out.println(result);
```
**Result:**
```
[6, 24, 8, 28]
```<issue_comment>username_1: If you are using Java 8, you can just use [Collection::removeIf](https://docs.oracle.com/javase/8/docs/api/java/util/Collection.html#removeIf-java.util.function.Predicate-) :
```
list.removeIf(i -> i % 2 != 0);
```
---
The full method should look like this :
```
public static List<Integer> removeOddNumbers(List<Integer> list) {
list.removeIf(i -> i % 2 != 0);
return list;
}
```
---
**Example**
```
List<Integer> list = new ArrayList<>(Arrays.asList(1, 2, 3, 6, 5, 4));
list.removeIf(i -> i % 2 != 0);
System.out.println(removeOddNumbers(list));
```
**Outputs**
```
[2, 6, 4]
```
Upvotes: 3 [selected_answer]<issue_comment>username_2: Lambda way:
```
public static List<Integer> filter(List<Integer> numberList) {
return numberList.stream()
.filter(number -> number % 2 == 0) // keep the even numbers
.collect(Collectors.toList());
}
```
You should call this method like this:
```
List<Integer> list = Arrays.asList(3, 5, 6, 24, 7, 9, 1, 8, 28, 11);
List<Integer> result = filter(list);
System.out.println(result);
```
The problem is that the return type of the method is `List`, but your code expects an `ArrayList`. The solution is to simply use the interface type `List`, or, if you want to keep your code, it should look like this:
```
ArrayList<Integer> arrayList = new ArrayList<>();
Collections.addAll(arrayList, 3, 5, 6, 24, 7, 9, 1, 8, 28, 11);
List<Integer> result = removeOddNumbers(arrayList);
System.out.println(result);
```
**N.B.** The method posted by @YCF\_L edits the list that you pass as the parameter; my method instead creates a new list, leaving the original list untouched
Upvotes: 1 <issue_comment>username_3: ```
// you can use Iterator.remove()
List<Integer> arr = new ArrayList<>();
arr.add(10);
arr.add(20);
arr.add(30);
arr.add(1);
arr.add(2);
// Remove odd elements
// Iterator.remove()
Iterator<Integer> itr = arr.iterator();
while (itr.hasNext())
{
int x = (Integer)itr.next();
if (x % 2 != 0)
itr.remove();
}
System.out.println("Modified ArrayList : " + arr);
```
Upvotes: 0 <issue_comment>username_4: You should go for an Iterator to remove items from a List. That avoids the temporary list where you store the odd elements.
```
public static List<Integer> removeOddNumbers(List<Integer> arrayList) {
Iterator<Integer> itr = arrayList.iterator();
while (itr.hasNext())
{
int x = (Integer)itr.next();
if (x % 2 != 0) {
itr.remove();
}
}
return arrayList;
}
```
OR
You can go for a lot easier code with Java 8.
Upvotes: 2
|
2018/03/16
| 1,790 | 5,646 |
<issue_start>username_0: In Google Chrome, `element.scrollIntoView()` with `behavior: 'smooth'` doesn't work on multiple containers at the same time. As soon as smooth scrolling is triggered on one container, the second container stops scrolling. In Firefox, this problem doesn’t exist; both containers can scroll simultaneously.
My workaround is using `behavior: 'instant'`, but I like to use `behavior: 'smooth'` for a better user experience.
Example
-------
[**Here**](https://plnkr.co/edit/MEUMMw8jyKt3KChxxZOl?p=preview) is a plunker using Angular
### html
```
In Google Chrome element.scrollIntoView() with behavior 'smooth' doesn't work if scrolling multiple containers at the same time.
As shown in the case 'All Smooth (200ms sequence)', the container stops scrolling.
In Firefox everything works.
Reset
All Instant
Works
All Smooth (simultaneously)
Only one container is scrolled
All Smooth (200ms sequence)
Only the last container is scrolled to 85; the others stop when the next container is triggered
Single Container Smooth
Works
Container {{ container }}
{{ number }}
```
### typescript
```
export class App {
name: string;
containers = [0, 1, 2]
content = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100]
constructor() {
this.name = `Angular v${VERSION.full}`
}
private scroll(container: number, row: number, behavior: string) {
let element = document.getElementById(container + '_' + row);
element.scrollIntoView({
behavior: behavior,
block: 'start',
inline: 'nearest'
});
}
reset() {
this.containers.forEach(container => {
this.scroll(container, 1, 'instant');
});
}
scrollSingelContainer(container: number) {
this.scroll(container, 85, 'smooth');
}
scrollAllInstant() {
this.containers.forEach(container => {
this.scroll(container, 85, 'instant');
});
}
scrollAllSmooth() {
this.containers.forEach(container => {
this.scroll(container, 85, 'smooth');
});
}
scrollAllSmoothSequenced() {
this.containers.forEach(container => {
setTimeout(() => {
this.scroll(container, 85, 'smooth');
}, 200 * container);
});
}
onScroll(container: number) {
console.log('Scroll event triggered by container ' + container);
}
}
```<issue_comment>username_1: A similar question was asked here: [scrollIntoView() using smooth function on multiple elements in Chrome](https://stackoverflow.com/questions/57214373/scrollintoview-using-smooth-function-on-multiple-elements-in-chrome), but the answer is not satisfying, as it merely states that this isn't a bug.
But it seems to be a bug and it is already reported in the Chromium bug list:
* <https://bugs.chromium.org/p/chromium/issues/detail?id=1121151>
* <https://bugs.chromium.org/p/chromium/issues/detail?id=1043933>
* <https://bugs.chromium.org/p/chromium/issues/detail?id=833617>.
For smooth scrolling of multiple elements at the same time using `scrollIntoView` (for at least some of the elements) we need to wait for a fix from the Chromium team.
An **alternative approach** is using [`scrollTo`](https://developer.mozilla.org/en-US/docs/Web/API/Element/scrollTo), which works for multiple elements in Chrome as well. See scenario 5 in this example: <https://jsfiddle.net/2bnspw8e/8/>.
The downside is that you need to get the next scrollable parent of the element you want to scroll into view (see <https://stackoverflow.com/a/49186677> for an example), calculate the offset needed to scroll the parent to the element, and call `parent.scrollTo({top: calculatedOffset, behavior: 'smooth'})`.
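A minimal sketch of that approach (it assumes you already have a reference to the scrollable parent, and that the element's offset is measured relative to it; the function name is just illustrative):
```
function scrollIntoParentView(parent, element) {
  // offset of the element relative to the scrollable parent
  const calculatedOffset = element.offsetTop - parent.offsetTop;
  parent.scrollTo({ top: calculatedOffset, behavior: 'smooth' });
}
```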
Upvotes: 4 <issue_comment>username_2: If you want a workaround for Chrome that uses `smooth` whenever possible (when only a single element is being scrolled into view) but falls back to `auto` when needed, to make sure multiple elements can be scrolled at the same time (and without having to use `scrollTo`), you can do something like this:
```
const IS_CHROME = navigator.userAgent.includes('Chrome')
let lastScrolledElement = null
let timeoutID = 0
function scrollElementIntoView(element) {
if (!element) return
if (IS_CHROME && lastScrolledElement) {
lastScrolledElement.scrollIntoView({
behavior: 'auto',
block: 'nearest',
inline: 'center',
})
}
lastScrolledElement = element
window.clearTimeout(timeoutID)
timeoutID = window.setTimeout(() => {
lastScrolledElement = null
}, 250)
element.scrollIntoView({
behavior: 'smooth',
block: 'nearest',
inline: 'center',
})
}
```
You could even remove the `IS_CHROME` check if you want to avoid browser sniffing and just have this enabled for all browsers.
This way, if you scroll element A into view and after a second or two you scroll element B into view, both will scroll smoothly, as by the time `scrollElementIntoView(...)` is called the second time, the first element would have scrolled already and the timeout would have been cleared.
However, if you call `scrollElementIntoView` 2 times consecutively, when the second call is made, the first element would probably be still scrolling into view. Therefore, we force it to scroll immediately (with `behavior: auto`) before calling `scrollIntoView` for the second element (which will scroll smoothly).
Upvotes: 1
|
2018/03/16
| 527 | 1,770 |
<issue_start>username_0: I have a string as such:
```
teststring = 'Index: WriteVTKOutput.FOR\\r\\n======================================\\r\\n'
```
I'd like to split it based on the `'\\r'` and `'\\n'` characters, such that I get the following result:
```
testlist = ['Index: WriteVTKOutput.FOR', '======================================']
```
I tried the following commands, none of which worked:
```
teststring.split(r'\r\n')
teststring.splitlines()
```
How does a man split that string on those delimiters while retaining his dignity and keeping it real?
Thanks<issue_comment>username_1: This should do the job:
```
lst = teststring.split("\\r\\n")[:-1]
```
Upvotes: 2 [selected_answer]<issue_comment>username_2: ```
teststring = 'Index: WriteVTKOutput.FOR\\r\\n======================================\\r\\n'
print(teststring.split("\\r\\n")[:-1])
print(teststring.strip("\\r\\n").split("\\r\\n"))
```
Output
```
['Index: WriteVTKOutput.FOR', '======================================']
['Index: WriteVTKOutput.FOR', '======================================']
```
Upvotes: 0 <issue_comment>username_3: `splitlines` won't work because your separators aren't really line separators. They're just the literal characters `\r\n`, not real line breaks.
You can get rid of the trailing/leading empty fields that `split` generates with an extra comprehension, in case your string starts or ends with `r"\r\n"`:
```
[x for x in teststring.split(r'\r\n') if x]
```
result:
```
['Index: WriteVTKOutput.FOR', '======================================']
```
Upvotes: 1 <issue_comment>username_4: You need `'\\r\\n'` or `r'\r\n'`:
```
teststring = 'Index: WriteVTKOutput.FOR\\r\\n======================================\\r\\n'
new_string = [x for x in teststring.split('\\r\\n') if x]
```
Upvotes: 0
|
2018/03/16
| 1,489 | 5,621 |
<issue_start>username_0: In my website using HTML, Bootstrap, Php I have created a contact us form. The form is working fine and am able to get the message. But the issue is that, after clicking the submit button a new page opens up for "Thank you message". Can you please guide me on how I can show the thank you message just below the submit button. My code is as mentioned below.
```
**HTML 5 code**
The below line should be to encourage the client to contact us to share their project details.
### Tell us about your project
Firstname \*
Lastname \*
Email \*
Phone
Message \*
```
**PHP Code**
```
<?php
// require ReCaptcha class
require('recaptcha-master/src/autoload.php');
// configure
// an email address that will be in the From field of the email.
$from = 'Demo contact form <<EMAIL>';
// an email address that will receive the email with the output of the form
$sendTo = 'Demo contact form ';
// subject of the email
$subject = 'New message from contact form';
// form field names and their translations.
// array variable name => Text to appear in the email
$fields = array('name' => 'Name', 'surname' => 'Surname', 'phone' => 'Phone', 'email' => 'Email', 'message' => 'Message');
// message that will be displayed when everything is OK :)
$okMessage = 'Contact form successfully submitted. Thank you, I will get back to you soon!';
// If something goes wrong, we will display this message.
$errorMessage = 'There was an error while submitting the form. Please try again later';
// ReCaptch Secret
$recaptchaSecret = '************************************';
// let's do the sending
// if you are not debugging and don't need error reporting, turn this off by error_reporting(0);
error_reporting(E_ALL & ~E_NOTICE);
try {
if (!empty($_POST)) {
// validate the ReCaptcha, if something is wrong, we throw an Exception,
// i.e. code stops executing and goes to catch() block
if (!isset($_POST['g-recaptcha-response'])) {
throw new \Exception('ReCaptcha is not set.');
}
// do not forget to enter your secret key from https://www.google.com/recaptcha/admin
$recaptcha = new \ReCaptcha\ReCaptcha($recaptchaSecret, new \ReCaptcha\RequestMethod\CurlPost());
// we validate the ReCaptcha field together with the user's IP address
$response = $recaptcha->verify($_POST['g-recaptcha-response'], $_SERVER['REMOTE_ADDR']);
if (!$response->isSuccess()) {
throw new \Exception('ReCaptcha was not validated.');
}
// everything went well, we can compose the message, as usually
$emailText = "You have a new message from your contact form\n=============================\n";
foreach ($_POST as $key => $value) {
// If the field exists in the $fields array, include it in the email
if (isset($fields[$key])) {
$emailText .= "$fields[$key]: $value\n";
}
}
// All the necessary headers for the email.
$headers = array('Content-Type: text/plain; charset="UTF-8";',
'From: ' . $from,
'Reply-To: ' . $from,
'Return-Path: ' . $from,
);
// Send email
mail($sendTo, $subject, $emailText, implode("\n", $headers));
$responseArray = array('type' => 'success', 'message' => $okMessage);
}
} catch (\Exception $e) {
$responseArray = array('type' => 'danger', 'message' => $e->getMessage());
}
if (!empty($_SERVER['HTTP_X_REQUESTED_WITH']) && strtolower($_SERVER['HTTP_X_REQUESTED_WITH']) == 'xmlhttprequest') {
$encoded = json_encode($responseArray);
header('Content-Type: application/json');
echo $encoded;
} else {
echo $responseArray['message'];
}
```
**Contact.js code**
```
$(function () {
window.verifyRecaptchaCallback = function (response) {
$('input[data-recaptcha]').val(response).trigger('change');
}
window.expiredRecaptchaCallback = function () {
$('input[data-recaptcha]').val("").trigger('change');
}
$('#contact-form').validator();
$('#contact-form').on('submit', function (e) {
if (!e.isDefaultPrevented()) {
var url = "contact.php";
$.ajax({
type: "POST",
url: url,
data: $(this).serialize(),
success: function (data) {
var messageAlert = 'alert-' + data.type;
var messageText = data.message;
var alertBox = '×' + messageText + '';
if (messageAlert && messageText) {
$('#contact-form').find('.messages').html(alertBox);
$('#contact-form')[0].reset();
grecaptcha.reset();
}
}
});
return false;
}
})
});
<issue_comment>username_1: Your notice message is displayed in this part of the code:
```
```
You just need to move this part of the code to the bottom.
Like this:
```
**HTML 5 code**
The below line should be to encourage the client to contact us to share their project details.
### Tell us about your project
Firstname \*
Lastname \*
Email \*
Phone
Message \*
```
and jQuery (solution proposed by <NAME>)
```
$('#contact-form').submit(function (event) {
//prevent the default action, which is to submit the form
event.preventDefault();
//post your from below using ajax
})
```
Upvotes: 0 <issue_comment>username_2: You need to prevent the default action of the form on submit
**jQuery**
```
$('#contact-form').submit(function (event) {
//prevent the default action, which is to submit the form
event.preventDefault();
//post your from below using ajax
})
```
Upvotes: 2
|
2018/03/16
| 1,611 | 4,779 |
<issue_start>username_0: I have plans for a certain program I want to build and for that I need a way to generate random assembly code and modify it.
I know how to use the `system()` function (C language), and I wanted to know if there is a way to create a file that contains only raw hex code, and then use `system()` to run a compiler like NASM on it, producing a binary executable.
**Note: don't answer, because I am about to make another page that matches my needs... this is too general a question. (Sorry for the inconvenience...)**<issue_comment>username_1: If you want to use NASM to handle the correct binary executable metadata and format cruft, and you want to produce only the main body of code, you can write a new ".asm" file to disk with some header template, like:
```
bits 64
global _start
_start:
```
And then add new lines to that:
```
dw 0x1234
dw 0xc3d5
...
```
Store such a complete file under some "temp1234.asm" name, and then compile it with NASM into a Linux ELF 64-bit binary (you didn't specify your target platform and CPU in the question, so I'm using the most common and familiar platform+OS of today as an example; for other platforms the details may differ):
```
nasm -f elf64 temp1234.asm; ld -b elf64-x86-64 -o temp1234 temp1234.o
```
(using `system()` to execute this compilation step) and then you can execute the resulting `temp1234` binary with `system()` too.
---
If you want the resulting file to contain only your data, then you can use the C [`size_t fwrite(const void *ptr, size_t size, size_t nmemb, FILE *stream)`](https://www.tutorialspoint.com/c_standard_library/c_function_fwrite.htm) function to write byte values directly into an opened file (but don't forget to open it in binary mode, like `FILE *f = fopen("name", "wb");`). The work-around of producing a temporary ASM file above is worth the effort only when you actually want the assembler and linker to also produce the common metadata of executables, like ELF64, etc...
To prepare such binary data in C you can do for example:
```c
#include <stdio.h>
typedef unsigned short word;
void foo() {
word payload[3] = { 0x1D35, 0xC3D5, 0xA29F };
FILE *f = fopen("temp.exe", "wb");
fwrite(payload, 1, sizeof(payload), f);
fclose(f);
}
```
(do NOT run resulting "exe" file created by this, it is not valid EXE binary to be executed, as it is missing header/meta data required by the DOS or Windows EXE variant files .. this is just example how to write binary data into file with C code).
---
And a final note: if you write pure x86-16 machine opcodes into a file named "something.COM", it can be run directly under DOS, as the "COM" executable format is "raw machine code loaded into a single 64k segment of memory starting at offset 0x100". I.e. writing the single byte `0xC3` into "test.com" will execute under DOS correctly (just returning back to DOS, because `0xC3` is the `ret` instruction opcode).
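For illustration, producing that one-instruction COM file from C takes only a few lines (the resulting file then has to be run under DOS):
```
#include <stdio.h>

int main(void) {
    FILE *f = fopen("test.com", "wb");
    fputc(0xC3, f);   /* the single "ret" opcode: returns straight back to DOS */
    fclose(f);
    return 0;
}
```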
But for most of the other target platforms you will have to produce much more complex executable files containing various metadata in a properly structured header, to make them valid executables. That's another reason why using an assembler+linker is convenient when writing assembly code, beyond just the translation from text form into machine code: when targeting a particular executable format, they automatically produce all that header/metadata for you.
Upvotes: 2 <issue_comment>username_2: ```
_start:
mov $1, %rax # write
lea .foo, %rsi # text
mov $6, %rdx # text size
mov $1, %rdi # stdout
syscall
mov $60, %rax #exit
syscall
.foo: .ascii "Hello\n"
```
Here's some assembly code (sorry, that's AT&T syntax, that's what I use; you asked for machine code anyway).
```
/tmp> as x.S -o x.o
/tmp> ld x.o -o x
ld: warning: cannot find entry symbol _start; defaulting to 0000000000400078
/tmp> ./x
Hello
```
So that I know it works…
```
/tmp> objdump -d x | awk 'BEGIN{ printf " _start: .byte " } / [0-9a-f]+:/ { i=2; while( $i ~ /^[0-9a-f]{2}$/ ){ printf "0x%s, ", $i; i++ } } END{ print "" }' > y.s
/tmp> cat y.s
_start: .byte 0x48, 0xc7, 0xc0, 0x01, 0x00, 0x00, 0x00, 0x48, 0x8d, 0x34, 0x25, 0xa0, 0x00, 0x40, 0x00, 0x48, 0xc7, 0xc2, 0x06, 0x00, 0x00, 0x00, 0x48, 0xc7, 0xc7, 0x01, 0x00, 0x00, 0x00, 0x0f, 0x05, 0x48, 0xc7, 0xc0, 0x3c, 0x00, 0x00, 0x00, 0x0f, 0x05, 0x48, 0x65, 0x6c, 0x6c, 0x6f, 0x0a,
```
That's how I extract machine codes and transform them into assembler-readable syntax. And finally:
```
/tmp> as y.s -o y.o
y.s: Assembler messages:
y.s:1: Warning: zero assumed for missing expression
/tmp> ld y.o -o y
ld: warning: cannot find entry symbol _start; defaulting to 0000000000400078
/tmp> ./y
Hello
```
Now do it in C. :)
Upvotes: 2
|
2018/03/16
| 1,889 | 5,537 |
<issue_start>username_0: I'm trying to make a pomodoro clock on CodePen, and I know my code isn't perfect yet, but I'm seeing a very strange behavior with my Start button: when I click once on it, it starts the timer. But when I smash it multiple times, it just goes crazy and the timer goes faster and faster. Can someone tell me why, and how to avoid it?
[You can just go check the Codepen here](https://codepen.io/Ziratsu/pen/ZxzZpM?editors=1000)
or see my code there :
```js
let pause = "5";
let duree = "25";
let min = "5";
let nb = 0;
let go;
document.getElementById("output").innerHTML = pause;
document.getElementById("output2").innerHTML = duree;
function addOne(btn) {
pause = pause + " + 1";
console.log(pause);
clearTimeout(go);
document.getElementById("output").innerHTML = eval(pause);
pause = eval(pause);
}
function minusOne(btn) {
clearTimeout(go);
if (eval(pause) == 0) {
pause = pause + "";
} else {
pause = pause + " - 1";
}
document.getElementById("output").innerHTML = eval(pause);
pause = eval(pause);
}
function addOne1(btn) {
clearTimeout(go);
min = min + " + 1";
document.getElementById("output2").innerHTML = eval(min);
document.getElementById("min1").innerHTML = eval(min) + ":";
min = eval(min);
}
function minusOne1(btn) {
if (eval(duree) == 0) {
duree = duree + "";
} else {
min = min + " - 1";
}
document.getElementById("output2").innerHTML = eval(min);
document.getElementById("min1").innerHTML = eval(min) + ":";
clearTimeout(go);
min = eval(min);
}
document.getElementById("min1").innerHTML = min + ":";
document.getElementById("sec").innerHTML = " " + nb;
function doSomething() {
if (nb > 0) {
nb = nb - 1;
} else if (min > 0) {
nb = 59;
min--;
} else {
min = pause
}
go = setTimeout(doSomething, 1000);
document.getElementById("min1").innerHTML = min + ":";
document.getElementById("sec").innerHTML = nb;
}
```
```css
#damn1 {
padding-left: 50px;
}
#damn {
padding-left: 450px;
}
#conteneur2 {
display: flex;
}
.txt {
color: #ffcf40;
font-size: 45px;
}
.buto {
border-radius: 50px;
background: #ffcf40;
text-decoration: none;
}
#output {
font-size: 100px;
}
#output2 {
font-size: 100px;
}
#bouton {
height: 100px;
width: 100px;
margin-top: 200px;
}
#min1 {
margin-top: 200px;
font-size: 100px;
color: #ffcf40;
}
#sec {
color: #ffcf40;
margin-top: 200px;
font-size: 100px;
}
body {
background: black;
}
#cont1 {
display: flex;
justify-content: center;
}
#conteneur {
margin-top: 50px;
display: flex;
justify-content: center;
}
.el {
width: 100px;
height: 100px;
color: #ffcf40;
}
.el1 {
width: 100px;
height: 100px;
color: #ffcf40;
}
#title {
margin: 0 auto;
text-align: center;
}
```
```html
POMODORO
Pause
Length of sessions
```
Thanks if someone can enlighten me.<issue_comment>username_1: You have to either clear the timeout or avoid setting it again on every click. Each click currently starts another `setTimeout` chain, and all of the chains keep decrementing the same counter, which is why the timer speeds up.
I am showing you the latter one.
Html:
```
POMODORO
Pause
Length of sessions
```
js:
```
let pause = "5";
let duree = "25";
let min = "5";
let nb = 0;
let go;
document.getElementById("output").innerHTML = pause;
document.getElementById("output2").innerHTML = duree;
function addOne(btn) {
pause = pause + " + 1";
console.log(pause);
clearTimeout(go);
document.getElementById("output").innerHTML = eval(pause);
pause = eval(pause);
}
function minusOne(btn) {
clearTimeout(go);
if(eval(pause) == 0){
pause = pause + "";
}
else{
pause = pause + " - 1";
}
document.getElementById("output").innerHTML = eval(pause);
pause = eval(pause);
}
function addOne1(btn) {
clearTimeout(go);
min = min + " + 1";
document.getElementById("output2").innerHTML = eval(min);
document.getElementById("min1").innerHTML = eval(min)+":";
min = eval(min);
}
function minusOne1(btn) {
if(eval(duree) == 0){
duree = duree + "";
}
else{
min = min + " - 1";
}
document.getElementById("output2").innerHTML = eval(min);
document.getElementById("min1").innerHTML = eval(min)+ ":";
clearTimeout(go);
min = eval(min);
}
document.getElementById("min1").innerHTML = min + ":";
document.getElementById("sec").innerHTML = " " + nb;
function doSomethingElse(){
if(typeof(go) !== 'undefined'){
return;
}
doSomething();
}
function doSomething() {
if(nb > 0) {
nb = nb - 1;
}
else if (min > 0) {
nb = 59;
min--;
}
else {
min = pause
}
go = setTimeout(doSomething, 1000);
document.getElementById("min1").innerHTML = min + ":";
document.getElementById("sec").innerHTML = nb;
}
```
Upvotes: 0 <issue_comment>username_2: You also need to clear your timeout in your doSomething function:
```js
function doSomething() {
if (nb > 0) {
nb = nb - 1;
} else if (min > 0) {
nb = 59;
min--;
} else {
min = pause
}
clearTimeout(go);
go = setTimeout(doSomething, 1000);
document.getElementById("min1").innerHTML = min + ":";
document.getElementById("sec").innerHTML = nb;
}
```
[Updated Pen](https://codepen.io/anon/pen/EEyJYY)
Upvotes: 2 [selected_answer]<issue_comment>username_3: Actually, you are not clearing the previous timeout on click of the `Start` button.
Just clear it when the button is clicked.
Add it in the inline JS.
```
```
Upvotes: 0
|
2018/03/16
| 486 | 1,704 |
<issue_start>username_0: I am creating a website with Ruby on Rails and want to add a post section.
I've searched Stack Overflow for solutions, but I couldn't find one that fits my needs.
show.html.erb
```
<%= @post.title %>
==================
by
[Start Bootstrap](#)
```
---
posts\_controller.rb
```
class PostController < ApplicationController
def index
@posts = Post.all.order('created_at DESC')
end
def create
@post = Post.new(params[:post].permit(:title, :text))
if @post.save
redirect_to @post
else
render 'new'
end
end
def show
@posts = Post.find(params[:id])
@posts = Post.order("created_at DESC").limit(4).offset(1)
end
def edit
@post = Post.find(params[:id])
end
def update
@post = Post.find(params[:id])
if @post.update(params[:post].permit(:title, :text))
redirect_to @post
else
render 'edit'
end
end
def destroy
@post = Post.find(params[:id])
@post.destroy
redirect_to posts_path
end
private
def post_params
params.require(:post).permit(:title, :body)
end
```
I am new to rails and I don't really know how to fix this...
hope you guys can help me with this.
Thanks in advance for help and interest!<issue_comment>username_1: Your show action declares @posts with an s; the show view uses @post without the s.
Upvotes: 1 <issue_comment>username_2: In the `show.html.erb` file you use the `@post` variable, but it is not defined in your `show` action. This is the reason why you get the error `undefined method 'title' for nil:NilClass` on `post.title`.
Instead you have the `@posts` variable. By convention, you should name this variable in singular form in the show action, as sketched below.
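A sketch of the corrected action, assuming the second query is meant to be a separate "related posts" list:
```
def show
  @post = Post.find(params[:id])                               # the single record the view renders
  @posts = Post.order("created_at DESC").limit(4).offset(1)    # the list keeps its plural name
end
```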
Upvotes: 3 [selected_answer]
|
2018/03/16
| 1,046 | 5,055 |
<issue_start>username_0: I try to display a showcase view that pointing to a menu item, but this sharedpreference just save before my app closed, after i close my app and open again showcase view will appear again. how i can show the showcase view only for the very first time?
```
@Override
public boolean onCreateOptionsMenu(Menu menu) {
getMenuInflater().inflate(R.menu.menu_contact, menu);
pref = getSharedPreferences(String.valueOf(getApplicationContext()), Context.MODE_PRIVATE);
editor = pref.edit();
if (pref.getBoolean("isFirstTime", true)) { // default true for first time
editor.putBoolean("isFirstTime", false).commit(); // update so it will false ever after
new Handler().postDelayed(
new Runnable() {
@Override
public void run() {
mFancyShowCaseView = new FancyShowCaseView.Builder(ContactTabActivity.this)
.focusOn(findViewById(R.id.item_sync)) // ActionBar menu item id
.focusCircleRadiusFactor(1.5)
.focusBorderSize(15)
.focusBorderColor(Color.parseColor("#FFA64D"))
.customView(R.layout.case_view_sync, new OnViewInflateListener() {
@Override
public void onViewInflated(@NonNull View view) {
view.findViewById(R.id.btnOke).setOnClickListener(new View.OnClickListener() {
@Override
public void onClick(View view) {
mFancyShowCaseView.removeView();
}
});
}
}).closeOnTouch(false)
.build();
mFancyShowCaseView.show();
}
}, 50
);
}
editor.commit();
return super.onCreateOptionsMenu(menu);
}
```<issue_comment>username_1: *Instead* of
```
pref = getSharedPreferences(String.valueOf(getApplicationContext()), Context.MODE_PRIVATE);
```
Use
```
pref = getSharedPreferences("First_tym_check",Context.MODE_PRIVATE);
```
This is because the string value of
`String.valueOf(getApplicationContext())` is *not* **constant**.
If you restart the app, you will notice that the value changes, for instance:
* `(yourPackageName).@521c1` , **Next time** ,
* `(yourPackageName).@631d1` ,
>
> So, the string value is different *every* time .
>
>
>
Upvotes: 3 [selected_answer]<issue_comment>username_2: Use the code below. Use `apply()` instead of `commit()`, and `getApplicationContext().getPackageName()` instead of `getApplicationContext()`:
```
@Override
public boolean onCreateOptionsMenu(Menu menu) {
getMenuInflater().inflate(R.menu.menu_contact, menu);
pref = getSharedPreferences(String.valueOf(getApplicationContext().getPackageName()),
Context.MODE_PRIVATE);
if (pref.getBoolean("isFirstTime", true)) { // default true for first time
editor = pref.edit();
editor.putBoolean("isFirstTime", false).apply(); // update so it will false ever after
new Handler().postDelayed(
new Runnable() {
@Override
public void run() {
mFancyShowCaseView = new FancyShowCaseView.Builder(ContactTabActivity.this)
.focusOn(findViewById(R.id.item_sync)) // ActionBar menu item id
.focusCircleRadiusFactor(1.5)
.focusBorderSize(15)
.focusBorderColor(Color.parseColor("#FFA64D"))
.customView(R.layout.case_view_sync, new OnViewInflateListener() {
@Override
public void onViewInflated(@NonNull View view) {
view.findViewById(R.id.btnOke).setOnClickListener(new View.OnClickListener() {
@Override
public void onClick(View view) {
mFancyShowCaseView.removeView();
}
});
}
}).closeOnTouch(false)
.build();
mFancyShowCaseView.show();
}
}, 50
);
}
return super.onCreateOptionsMenu(menu);
}
```
Upvotes: 0 <issue_comment>username_3: Your values are committed to SharedPreferences immediately. Please refer to this link: [what is the Best way to use shared preferences between activities](https://stackoverflow.com/questions/4051875/what-is-the-best-way-to-use-shared-preferences-between-activities)
Upvotes: 0
|
2018/03/16
| 1,064 | 3,622 |
<issue_start>username_0: For my project i'm trying to binarize an image with openCV in python. I used the adaptive gaussian thresholding from openCV to convert the image with the following result:
[](https://i.stack.imgur.com/uEoWG.png)
I want to use the binary image for OCR but it's too noisy. Is there any way to remove the noise from the binary image in python? I already tried fastNlMeansDenoising from openCV but it doesn't make a difference.
P.S. Better options for binarization are welcome as well.<issue_comment>username_1: You could try the morphological *close* transformation to remove small "holes".
First define a kernel using numpy; you might need to play around with the size. Choose a kernel about as big as your noise.
```
import numpy as np

kernel = np.ones((5, 5), np.uint8)
```
Then run the morphologyEx using the kernel.
```
import cv2

# close = dilate then erode: fills small dark holes in white regions
denoised = cv2.morphologyEx(image, cv2.MORPH_CLOSE, kernel)
```
If text gets removed, you can try eroding the image instead; this will "grow" the black pixels. If the noise is as big as the text, this method will not help.
```
erosion = cv2.erode(img,kernel,iterations = 1)
```
Upvotes: 0 <issue_comment>username_2: It is also possible to use graph cuts for this kind of task. You will need to install the [maxflow](http://pmneila.github.io/PyMaxflow/) library in order to run the code. I quickly copied the code from their [tutorial](http://pmneila.github.io/PyMaxflow/tutorial.html#tutorial) and modified it so you can run it more easily. Just play around with the smoothing parameter to increase or decrease the denoising of the image.
```
import cv2
import numpy as np
import matplotlib.pyplot as plt
import maxflow
# Important parameter
# Higher values means making the image smoother
smoothing = 110
# Load the image and convert it to grayscale image
image_path = 'your_image.png'
img = cv2.imread(image_path)  # use the variable, not the literal string
img = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
img = 255 * (img > 128).astype(np.uint8)
# Create the graph.
g = maxflow.Graph[int]()
# Add the nodes. nodeids has the identifiers of the nodes in the grid.
nodeids = g.add_grid_nodes(img.shape)
# Add non-terminal edges with the same capacity.
g.add_grid_edges(nodeids, smoothing)
# Add the terminal edges. The image pixels are the capacities
# of the edges from the source node. The inverted image pixels
# are the capacities of the edges to the sink node.
g.add_grid_tedges(nodeids, img, 255-img)
# Find the maximum flow.
g.maxflow()
# Get the segments of the nodes in the grid.
sgm = g.get_grid_segments(nodeids)
# The labels should be 1 where sgm is False and 0 otherwise.
img_denoised = np.logical_not(sgm).astype(np.uint8) * 255
# Show the result.
plt.subplot(121)
plt.imshow(img, cmap='gray')
plt.title('Binary image')
plt.subplot(122)
plt.title('Denoised binary image')
plt.imshow(img_denoised, cmap='gray')
plt.show()
# Save denoised image
cv2.imwrite('img_denoised.png', img_denoised)
```
[Result](https://i.stack.imgur.com/PXya8.png)
Upvotes: 2 <issue_comment>username_3: You should start by adjusting the parameters to the adaptive threshold so it uses a larger area. That way it won't be segmenting out noise. Whenever your output image has more noise than the input image, you know you're doing something wrong.
As an adaptive threshold, I suggest using a closing (on the input grey-value image) with a structuring element just large enough to remove all the text. The difference between this result and the input image is exactly the text. You can then apply a regular threshold to this difference, as in the sketch below.
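A hedged sketch of that idea (the kernel size and file names are assumptions; it presumes dark text on a light background):
```
import cv2
import numpy as np

# hypothetical input: the original grey-value image
gray = cv2.imread('input.png', cv2.IMREAD_GRAYSCALE)

# closing with an element larger than the text strokes estimates the background
kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (25, 25))
background = cv2.morphologyEx(gray, cv2.MORPH_CLOSE, kernel)

# the (saturated) difference contains mostly the text
diff = cv2.subtract(background, gray)

# a plain global threshold on the difference
_, binary = cv2.threshold(diff, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
cv2.imwrite('binary.png', binary)
```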
Upvotes: 3 [selected_answer]
|
2018/03/16
| 613 | 2,170 |
<issue_start>username_0: When the response is in JSON format, the original ordering of the data is lost. When the response is in XML, the ordering is preserved. How can I preserve the original ordering with JSON?
My controller:
```
use yii\rest\ActiveController;
class DomainController extends ActiveController
{
...
public function behaviors()
{
$behaviors = parent::behaviors();
$behaviors['corsFilter' ] = [
'class' => \yii\filters\Cors::className(),
];
$behaviors['contentNegotiator'] = [
'class' => \yii\filters\ContentNegotiator::className(),
'formats' => [
'application/json' => \yii\web\Response::FORMAT_JSON,
],
];
return $behaviors;
}
```
And action in the controller:
```
public function actionIndex()
{
$domains = Domain::find()
->leftJoin('WEB_DOMAIN_PRIORITY', 'WEB_DOMAIN_PRIORITY.id = WEB_DOMAIN.priority_id')
->orderBy(['priority' => SORT_DESC])->all();
$test = [];
foreach ($domains as $domain) {
$test[$domain->id] = $domain->title;
}
    //echo "<pre>"; print_r($test); die; // <-- its ok, right sort
    //return $test; // <-- its wrong, sort is changed
}
```
And if I change this in the behavior:
```
'application/json' => \yii\web\Response::FORMAT_JSON,
```
To:
```
'application/json' => \yii\web\Response::FORMAT_XML,
```
I have xml response with right sort.
Only the JSON response sorts my array by its keys (ascending).<issue_comment>username_1: Here
```
$test[$domain->id] = $domain->title;
```
you are adding new array keys. This can change the order, depending on the REST serializer.
You could enable `preserveKeys` on the serializer, as documented here: <http://www.yiiframework.com/doc-2.0/yii-rest-serializer.html#$preserveKeys-detail>,
or avoid changing the keys.
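For example, in the controller (a sketch; the property names follow the linked docs):
```
public $serializer = [
    'class' => 'yii\rest\Serializer',
    'preserveKeys' => true,
];
```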
Upvotes: 1 <issue_comment>username_2: @username_1 Not exactly. At this point the response is prepared by [JsonResponseFormatter](https://github.com/yiisoft/yii2/blob/master/framework/web/JsonResponseFormatter.php#L90), which uses [yii\helpers\Json::encode](https://github.com/yiisoft/yii2/blob/master/framework/helpers/Json.php) to format the data.
Upvotes: 0
|
2018/03/16
| 574 | 1,849 |
<issue_start>username_0: I have a table that records daily sales data. However, there are days when no sale is made, and hence there is no record in the database for those dates. Is it possible to extract data from the table such that it returns null for the dates when no sale was made?

Referring to the attached image, no sales were made on Jan 4 and Jan 8. I would like to write a SQL query that returns all dates from Jan 1 to Jan 10, but for Jan 4 and Jan 8 it should return 0, since there is no row for those dates (no sale made).
My dates start from Mar 1, 2018 and should go on for the next few quarters.
```
DECLARE @FDATE DATE = '2018-01-01'
,@TDATE DATE = '2018-01-10'
;WITH CTE_DATE
AS (
SELECT @FDATE AS CDATE
UNION ALL
SELECT DATEADD(DAY,1,CDATE)
FROM CTE_DATE
WHERE DATEADD(DAY,1,CDATE) <= @TDATE
)
SELECT C.CDATE AS [DATE], COUNT(M.[DATE]) AS [COUNT] -- count only matched rows, so empty days return 0
FROM CTE_DATE AS C
LEFT OUTER JOIN [MY_TABLE] AS M ON C.CDATE = M.[DATE] --*[your table here]*
GROUP BY C.CDATE
OPTION ( MAXRECURSION 0 );
```
Upvotes: -1 <issue_comment>username_2: Yes. In Postgres, you can use `generate_series()` to generate dates or numbers within a range.
Then, you can use a `cross join` to generate the rows and then a `left join` to bring in the data:
```
select s.seller, gs.dte, t.count
from (select generate_series(mindate::timestamp, maxdate::timestamp, interval '1 day')::date
from (select min(date) as mindate, max(date) as maxdate
from t
) x
) gs(dte) cross join
(select distinct seller from t) s left join
t
on t.date = gs.dte and t.seller = s.seller
```
Upvotes: 1
|
2018/03/16
| 513 | 1,657 |
<issue_start>username_0: I have two repositories with a similar structure, but no common commits. I've added them as remotes of each other to be able to [`cherry-pick`](https://mirrors.edge.kernel.org/pub/software/scm/git/docs/git-cherry-pick.html) commits between them.
Recently, I've noticed that git correctly applies the commit's changes even if the files' paths in the repositories differ (and the files themselves differ). How does `git` find the file(s) to apply the changes to? Does it look through *all* the files in the current snapshot?
|
2018/03/16
| 1,165 | 3,802 |
<issue_start>username_0: I have an SQL Query which returns the following table with 3 columns:
[](https://i.stack.imgur.com/wAEsU.png)
I am looking for a data structure in which I can store the table, and which gives me back the phase name when a point falls between y1 and y2.
I.e. 300 returns Project Management, 360 returns Controlling.
"Project Management"=givePhaseNameFor (300);
"Controlling"=givePhaseNameFor (360);
In SQL I would write
```
WHERE point BETWEEN y1 AND y2
public string givePhaseNameFor(int point)
{
    return "Project Management";
}
```
Is there a data structure (like a Dictionary) for this?<issue_comment>username_1: ```
class Row
{
public int Y1 { get; set; }
public int Y2 { get; set; }
public string Name { get; set; }
}
var list = new List<Row>();
//fill list
var result = list.Where(o => o.Y1 <= x && x < o.Y2).First();
```
Upvotes: 1 <issue_comment>username_2: No there is not, but you could use something like this:
```
public abstract class RangeList<T>
{
private List<T> list = new List<T>();
public abstract int GetLower(T item);
public abstract int GetUpper(T item);
public T this[int index]
{
get
{
return list.SingleOrDefault(x => index >= GetLower(x) && index <= GetUpper(x));
}
}
public void Add(T item)
{
if (list.Any(x => GetUpper(item) >= GetLower(x) && GetLower(item) <= GetUpper(x)))
throw new Exception("Attempt to add item with overlapping range");
list.Add(item);
}
}
public class Phase
{
public int y1;
public int y2;
public string phasename;
}
public class PhaseList : RangeList<Phase>
{
public override int GetLower(Phase item)
{
return item.y1;
}
public override int GetUpper(Phase item)
{
return item.y2;
}
}
```
Usage:
```
PhaseList phaseList = GetPhaseListFromDB();
return phaseList[300]?.phasename;
```
---
EDIT: An alternative to this would be to create an interface that classes usable in a `RangeList` must implement. This way you don't need a separate inherited list class for each type you want to use.
```
public interface IRangeable
{
int Lower { get; }
int Upper { get; }
}
public class Phase : IRangeable
{
public int y1;
public int y2;
public string phasename;
int IRangeable.Lower => y1;
int IRangeable.Upper => y2;
}
public class RangeList<T> where T : IRangeable
{
private List<T> list = new List<T>();
public T this[int index]
{
get
{
return list.SingleOrDefault(x => index >= ((IRangeable)x).Lower && index <= ((IRangeable)x).Upper);
}
}
public void Add(T item)
{
if (list.Any(x => ((IRangeable)item).Upper >= ((IRangeable)x).Lower && ((IRangeable)item).Lower <= ((IRangeable)x).Upper))
throw new Exception("Attempt to add item with overlapping range");
list.Add(item);
}
}
```
Usage:
```
RangeList<Phase> phaseList = GetPhaseListFromDB();
return phaseList[300]?.phasename;
```
Upvotes: 2 <issue_comment>username_3: If you want speed, I would make two arrays:
* One containing the start ranges
* Another containing the names
Then you would first populate the arrays:
```
int nbRanges = //Get this info from the number of rows in your table
int[] range = new int[nbRanges];
string[] rangeNames = new string[nbRanges];
for (int i = 0; i < nbRanges; i++)
{
range[i] = //Table.y1;
rangeNames[i] = //Table.phasename;
}
```
After that, looking up the name is straightforward:
```
public string givePhaseNameFor(int point)
{
int index = 0;
while (index + 1 < nbRanges)
{
if (point <= range[index + 1])
{
//This means that your point is in the current range
break;
}
else
{
index++; //The point is not in this range
}
}
//If we exit without finding it, index is the last slot and we return the last name
return rangeNames[index];
}
```
Upvotes: 0
|
2018/03/16
| 1,225 | 4,128 |
<issue_start>username_0: I am working at my university degree and I got stuck at a random function.
I am using a microcontroller, which has no configured clock. So, I decided to use the ADC (analog to digital conversion) as seeds for my random function.
So I have 15 two-byte variables which store some 'random' values (the conversion is not always the same, and the difference is in the LSB, the last bit in my case: e.g. now the value of an ADC read is 700, in 5 ms it is 701, then back to 700, then 702, etc.). So I was thinking of building a random function that uses, let's say, the last 4 bits of those variables.
My question is: Can you give me an example of a good random formula?
Like `( Variable1 >> 4 ) ^ ( Variable2 << 4 )` and so on ...
I want to be able to obtain a reasonably random number in 1 byte (this is the best case). It will be used in an RSA algorithm, which I have already implemented (I have a big look-up table with prime numbers, and I need 2 random numbers from that table).
|
2018/03/16
| 590 | 1,263 |
<issue_start>username_0: Assuming I have a raster in R like:
```
r <- raster(ncols=10, nrows=10)
r[] <- sample(50, 100, replace=T)
```
How could I set the values between [10,30] for example to NA?
I tried `values = r[r<= c(10,30)] = NA` but this does not remove the values between 10 and 30.
Thanks in advance.<issue_comment>username_1: You have to find the values like this:
```
r[r>10 & r<30]<-NA
r[]
[1] NA 38 NA 1 NA NA 3 32 40 36 NA NA 30 5 2 38 47 NA 42 42 1 NA NA 32 43 NA 7 NA 8 35 NA NA NA 48 10
[36] 32 49 33 NA 48 NA 37 2 45 9 7 37 42 2 42 NA 3 49 48 NA NA 48 5 NA 46 43 NA NA NA 42 39 41 NA 48 NA
[71] NA NA 30 35 32 46 4 32 NA 48 40 2 44 45 NA NA NA 9 47 NA NA 41 2 4 42 30 NA 36 48 32
```
Upvotes: 1 <issue_comment>username_2: Using
```
r[r <= 10:30] <- NA
```
seems dangerous, and in fact it gives a warning:
>
> Warning message:
> In getValues(e1) <= e2 :
> longer object length is not a multiple of shorter object length
>
>
>
It is also not memory-friendly for large rasters.
The proper/best way to do this is to use `raster::reclassify`:
```
r <- raster::reclassify(r, c(10, 30, NA))
```
, which is also much faster.
See also:
<https://stackoverflow.com/a/49159943/6871135>
Upvotes: 2
|
2018/03/16
| 1,095 | 3,403 |
<issue_start>username_0: A few notes that make this tricky are that I'm using `c9.io` (developing in the cloud) so I use the gem `webdrivers` to be able to run **Chrome** with **Watir**, instead of creating an executable path to the Chrome installed on my device.
My code was working until I logged in today and got the error
>
> session not created exception: Chrome version must be >= 64.0.3282.0 (Driver info: chromedriver=2.37.543610 (afd36256570660b5a2f0e4dbd1b040f3dcfe9cb5),platform=Linux 4.9.80-c9 x86\_64)
>
>
>
Relevant parts of gemfile (everything else is stock)
```
gem 'webdrivers'
gem 'watir'
```
Code I'm trying to compile
```
def mastersave
require 'watir'
@browser = Watir::Browser.new :chrome, headless: true
end
```
I'm not stuck on the idea of using Chrome, but it's what was working for me. The gem 'webdrivers' also allows me to use Firefox, but I get the error 'permission denied' with that.<issue_comment>username_1: The error `Chrome version must be >= 64.0.3282.0` says your installed Chrome is **not compatible** with the current chromedriver version 2.37.
Updating Chrome to the latest version should solve the issue.
Please refer to <https://sites.google.com/a/chromium.org/chromedriver/downloads>.
>
> Latest Release: ChromeDriver 2.36
>
>
> Supports Chrome v63-65
>
>
>
Upvotes: 2 <issue_comment>username_2: This error message…
>
> session not created exception: Chrome version must be >= 64.0.3282.0 (Driver info: chromedriver=2.37.543610 (afd36256570660b5a2f0e4dbd1b040f3dcfe9cb5),platform=Linux 4.9.80-c9 x86\_64)
>
>
>
…implies that **Chrome version must be >= 64.0**
Your main issue is the **version compatibility** between the binaries you are using as follows:
* You are using *chromedriver=2.37*, which is still not **GA**, hence we are not sure about the dependencies, but from the error message it's clear it won't be supporting *Chrome v64.x*
Solution
--------
* To be safer you can:
+ Either downgrade to *chromedriver=2.36* where the *Release Notes* mentions **Supports Chrome v63-65**
+ Or upgrade to *chrome=65.x*
* *Clean* your *Project Workspace* through your *IDE* and *Rebuild* your project with required dependencies only.
* Use [*CCleaner*](https://www.ccleaner.com/ccleaner) tool to wipe off all the OS chores before and after the execution of your *test Suite*.
* If your base *Chrome* version is too old, then uninstall it through [*Revo Uninstaller*](https://www.revouninstaller.com/revo_uninstaller_free_download.html) and install a recent GA and released version of Chrome.
* Execute your `@Test`.
Upvotes: 4 [selected_answer]<issue_comment>username_3: For those that have just had this problem going from Chrome 73 to 74 it might be because you are using `chromedriver-helper`. Apparently the `chromedriver-helper` gem was deprecated. I switched to using the `webdrivers` gem and it fixed the issue.
In Gemfile replace:
`gem 'chromedriver-helper'`
With:
`gem 'webdrivers', '~> 3.0'`
`bundle install`
Upvotes: 3 <issue_comment>username_4: **Updating the Google Chrome version to 74 worked for me.**
**Steps:** 1. Go to Help -> About Google Chrome -> Chrome will automatically look for updates (update Chrome to the latest version)
[](https://i.stack.imgur.com/cxMiT.png)
**Also, note that you should have an updated chromedriver.**
Upvotes: 1
|
2018/03/16
| 1,250 | 4,919 |
<issue_start>username_0: In Android I am trying to change the layout based on who the message is from.
If the message is from me, then display the layout mymessage.xml on the right; otherwise display message.xml on the left.
I used an if condition, but I don't know how to display one layout on the right and the other on the left.
```
@Override
public View getView(int position, View convertView, ViewGroup parent) {
// TODO
View messageView = null;
// Get a reference to the LayoutInflater. This helps construct the
// view from the layout file
LayoutInflater inflater = (LayoutInflater) context
.getSystemService(Context.LAYOUT_INFLATER_SERVICE);
// Change the layout based on who the message is from
if (messages.get(position).fromMe()) {
messageView = inflater.inflate(R.layout.mymessage , parent, false);
//initialization of 2 textView, message and time
TextView timeView = (TextView) messageView.findViewById(R.id.mytimeTextView);
timeView.setText(messages.get(position).getTime());
TextView msgView = (TextView) messageView.findViewById(R.id.mymessageTextView);
msgView.setText(messages.get(position).getMessage());
} else {
messageView = inflater.inflate(R.layout.message , parent, true);
//initialization of 2 textView, message and time
TextView timeView = (TextView) messageView.findViewById(R.id.timeTextView);
timeView.setText(messages.get(position).getTime());
TextView msgView = (TextView) messageView.findViewById(R.id.messageTextView);
msgView.setText(messages.get(position).getMessage());
}
return messageView;
}
```<issue_comment>username_1: **For left.xml**
```
```
**For right.xml**
```
```
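The layout XML above was stripped; a minimal sketch of what left.xml could look like (all ids and attributes are assumptions; right.xml would mirror it with `android:gravity="end"`):
```
<?xml version="1.0" encoding="utf-8"?>
<LinearLayout xmlns:android="http://schemas.android.com/apk/res/android"
    android:layout_width="match_parent"
    android:layout_height="wrap_content"
    android:gravity="start"
    android:orientation="vertical">

    <TextView
        android:id="@+id/messageTextView"
        android:layout_width="wrap_content"
        android:layout_height="wrap_content" />

    <TextView
        android:id="@+id/timeTextView"
        android:layout_width="wrap_content"
        android:layout_height="wrap_content" />
</LinearLayout>
```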
**just replace this**
// Change the layout based on who the message is from
```
if (messages.get(position).fromMe()) {
messageView = inflater.inflate(R.layout.left , parent, false);
//initialization of 2 textView, message and time
TextView timeView = (TextView) messageView.findViewById(R.id.mytimeTextView);
timeView.setText(messages.get(position).getTime());
TextView msgView = (TextView) messageView.findViewById(R.id.mymessageTextView);
msgView.setText(messages.get(position).getMessage());
} else {
messageView = inflater.inflate(R.layout.right , parent, true);
//initialization of 2 textView, message and time
TextView timeView = (TextView) messageView.findViewById(R.id.timeTextView);
timeView.setText(messages.get(position).getTime());
TextView msgView = (TextView) messageView.findViewById(R.id.messageTextView);
msgView.setText(messages.get(position).getMessage());
}
```
Upvotes: 1 <issue_comment>username_2: try this layout for `mymessage.xml`
```
<?xml version="1.0" encoding="utf-8"?>
```
and similarly this for message.xml
```
<?xml version="1.0" encoding="utf-8"?>
```
Upvotes: 1 <issue_comment>username_3: for message.xml
```
<?xml version="1.0" encoding="utf-8"?>
```
In the Java class, in getView():
```
convertView = inflater.inflate(R.layout.message,parent, false);
if (messages.get(position).fromMe()) {
// for reference receive_ll is GONE and send_ll layout is VISIBLE
} else {
// for reference receive_ll layout is VISIBLE and send_ll layout GONE
}
```
Upvotes: 1 <issue_comment>username_4: Just make it simple .
1. Create a single layout containing both variants
2. Set the data in the fields
3. Toggle the layouts between `VISIBLE` and `GONE` based on the message type
```
//sender side
//receiver side
```
If the message type is sender, make the `sender_layout` visible; otherwise make the `reciver_layout` visible. For example:
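A sketch (the view lookups are assumptions; the ids follow the comments above):
```
View sendLayout = convertView.findViewById(R.id.send_ll);
View receiveLayout = convertView.findViewById(R.id.receive_ll);

if (messages.get(position).fromMe()) {
    sendLayout.setVisibility(View.VISIBLE);
    receiveLayout.setVisibility(View.GONE);
} else {
    sendLayout.setVisibility(View.GONE);
    receiveLayout.setVisibility(View.VISIBLE);
}
```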
Upvotes: 0 <issue_comment>username_5: When you get myMessage, show the mymessage layout and hide the other one, with a condition like the one below...
```
if (messages.get(position).fromMe()) {
myMessageView = inflater.inflate(R.layout.mymessage , parent, false);
myMessageView.setVisibility(View.VISIBLE);
//initialization of 2 textView, message and time
TextView timeView = (TextView) messageView.findViewById(R.id.mytimeTextView);
timeView.setText(messages.get(position).getTime());
TextView msgView = (TextView) messageView.findViewById(R.id.mymessageTextView);
msgView.setText(messages.get(position).getMessage());
messageView.setVisibility(View.GONE);
} else {
messageView = inflater.inflate(R.layout.message , parent, true);
messageView.setVisibility(View.VISIBLE);
//initialization of 2 textView, message and time
TextView timeView = (TextView) messageView.findViewById(R.id.timeTextView);
timeView.setText(messages.get(position).getTime());
TextView msgView = (TextView) messageView.findViewById(R.id.messageTextView);
msgView.setText(messages.get(position).getMessage());
myMessageView.setVisibility(View.GONE);
}
```
Upvotes: 0
|
2018/03/16
| 406 | 1,291 |
<issue_start>username_0: How can I create a regex that looks for the text `for`, followed by some text, then the text `{`, followed by some text, then the text `DAO.`, followed by some text, and then the text `}`? For example:
```
for(Entity e : list){
e.setX(someDAO.findX(e.getId()));
}
```
Or :
```
for(Entity e : list){
if(condition){
someDAO.op(e.getId());
}
}
```
This one didn't work for me :
```
\bfor\b(?s).*?DAO\.
```
But this will match any `DAO.` after a `for` as well as inside it; I have to be sure that I'm inside the for loop.
How can I solve this ?<issue_comment>username_1: `\bfor\b.*\{.*DAO\..*\}` with `s`-flag. So in JavaScript it would be: `/\bfor\b.*\{.*DAO\..*\}/s`
Or without `s`-flag:
`\bfor\b.*(\{.*[\r\n]*.*)+DAO\.(.*[\r\n]*.*\})+`
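A quick sanity check of the `s`-flag variant (a sketch, using the snippet from the question):
```js
const re = /\bfor\b.*\{.*DAO\..*\}/s;
const code = `for(Entity e : list){
  e.setX(someDAO.findX(e.getId()));
}`;
console.log(re.test(code)); // true
```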
Upvotes: 0 <issue_comment>username_2: You need to make sure that the number of opening '{' is bigger than the number of closing ones '}' so that exactly one more '{' exists after the for than there are '}'.
Here is a similar question: [Regular Expression to match outer brackets](https://stackoverflow.com/questions/546433/regular-expression-to-match-outer-brackets)
The chosen answer states that this is not a regex task but there are commenters and other answers who disagree. Good luck.
Upvotes: 1
|
2018/03/16
| 2,566 | 6,635 |
<issue_start>username_0: Inside a docker container I am trying to convert an XLSX file to PDF using LibreOffice. The relevant command works on the command line but fails with "Application Error" when called from R. I use this `Dockerfile` which adds some (in my experience arbitrary) XLSX file:
```
FROM rocker/r-ver:3.4.3
RUN apt-get update \
&& apt-get install --yes --no-install-recommends \
default-jre-headless libreoffice-calc \
&& apt-get clean \
&& rm -rf /var/lib/apt/lists/* \
&& echo /usr/lib/libreoffice/program > /etc/ld.so.conf.d/libreoffice.conf \
&& ldconfig
COPY foo.xlsx /tmp
```
(The trick with `ldconfig` comes from
[shared library issue with the system function in R](https://stackoverflow.com/questions/42044342/shared-library-issue-with-the-system-function-in-r).)
On the command line I can then convert the XLSX file to PDF:
```
root@b395caeba33b:/# loffice --headless --convert-to pdf /tmp/foo.xlsx
convert /tmp/foo.xlsx -> //foo.pdf using filter : calc_pdf_Export
```
However, this fails from R:
```
> system("loffice --version")
LibreOffice 5.2.7.2 20m0(Build:2)
> system("loffice --headless --convert-to pdf /tmp/foo.xlsx")
convert /tmp/foo.xlsx -> //foo.pdf using filter : calc_pdf_Export
Application Error
```
If I change the base image from `rocker/r-ver:3.4.3` to `rocker/r-base` which uses R 3.4.4 and Debian testing/sid the result changes only marginally:
```
> system("loffice --version")
LibreOffice 6.0.2.1.0 00m0(Build:1)
> system("loffice --headless --convert-to pdf /tmp/foo.xlsx")
Application Error
```
How can I get LibreOffice to convert XLSX files to PDF when called from R?<issue_comment>username_1: I have found a workaround, but I am still interested in a proper explanation. Here is what I found:
* Start the docker container with option `--security-opt seccomp:unconfined` and install `strace`.
* Within `R` call
```
system("strace -f -o R.trace loffice --headless --convert-to pdf /tmp/foo.xlsx")
```
* The resulting trace file shows an error loading `libsal_textenclo.so`. It is strange that it searches for the library in `/usr/lib/x86_64-linux-gnu` even though `ldconfig` knows where to find it:
```
root@1519f52c05e0:/# grep libsal R.trace
257 open("/usr/lib/x86_64-linux-gnu/libsal_textenclo.so", O_RDONLY|O_CLOEXEC) = -1 ENOENT (No such file or directory)
root@1519f52c05e0:/# ldconfig -p | grep libsal
libsal_textenclo.so (libc6,x86-64) => /usr/lib/libreoffice/program/libsal_textenclo.so
```
* Setting `LD_LIBRARY_PATH` to include `/usr/lib/libreoffice/program` does **not** solve the issue.
```
root@4a235dfa08e3:~# export LD_LIBRARY_PATH=/usr/lib/libreoffice/program
root@4a235dfa08e3:~# Rscript -e 'system("loffice --headless --convert-to pdf /tmp/foo.xlsx")'
Application Error
```
* My current workaround is to set `LD_LIBRARY_PATH` within the R session:
```
> Sys.setenv(LD_LIBRARY_PATH="/usr/lib/libreoffice/program")
> system("loffice --headless --convert-to pdf /tmp/foo.xlsx")
convert /tmp/foo.xlsx -> //foo.pdf using filter : calc_pdf_Export
Overwriting: //foo.pdf
```
Upvotes: 1 <issue_comment>username_2: The issue happens because of an environment difference. When you run the `env` command through `system`:
```
> system('env')
R_UNZIPCMD=/usr/bin/unzip
HOSTNAME=da4d504ddcb1
LD_LIBRARY_PATH=/usr/local/lib/R/lib:/usr/local/lib:/usr/lib/x86_64-linux-gnu:/usr/lib/jvm/java-8-openjdk-amd64/jre/lib/amd64/server
SHLVL=0
HOME=/root
R_LIBS_SITE=
R_BROWSER=xdg-open
PAGER=/usr/bin/pager
R_VERSION=3.4.3
BUILD_DATE=
R_SYSTEM_ABI=linux,gcc,gxx,gfortran,?
TAR=/bin/tar
R_LIBS_USER=/usr/local/lib/R/site-library
TERM=xterm
COLUMNS=200
R_ARCH=
PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
R_BZIPCMD=/bin/bzip2
R_INCLUDE_DIR=/usr/local/lib/R/include
R_SESSION_TMPDIR=/tmp/RtmpJsaXba
LANG=en_US.UTF-8
R_GZIPCMD=/bin/gzip
SED=/bin/sed
LN_S=ln -s
R_PDFVIEWER=/usr/bin/xdg-open
R_TEXI2DVICMD=/usr/bin/texi2dvi
R_HOME=/usr/local/lib/R
R_PRINTCMD=/usr/bin/lpr
R_DOC_DIR=/usr/local/lib/R/doc
R_LIBS=/usr/local/lib/R/site-library:/usr/local/lib/R/library:/usr/lib/R/library
LC_ALL=en_US.UTF-8
R_SHARE_DIR=/usr/local/lib/R/share
PWD=/
R_ZIPCMD=/usr/bin/zip
R_PLATFORM=x86_64-pc-linux-gnu
R_PAPERSIZE=letter
LINES=50
MAKE=make
R_RD4PDF=times,inconsolata,hyper
EDITOR=vi
```
You can see that `R` sets a number of environment variables by default, and one of them is `LD_LIBRARY_PATH`.
```
> system('loffice --headless --convert-to pdf /tmp/foo.xlsx')
Application Error
> system('LD_LIBRARY_PATH= loffice --headless --convert-to pdf /tmp/foo.xlsx')
convert /tmp/foo.xlsx -> //foo.pdf using filter : calc_pdf_Export
```
Blank it out and it works. It works in bash because the default set of environment variables there is small:
```
root@5c5bbcfcebf2:/# env
LC_ALL=en_US.UTF-8
LANG=en_US.UTF-8
HOSTNAME=5c5bbcfcebf2
PWD=/
HOME=/root
R_VERSION=3.4.3
BUILD_DATE=
TERM=xterm
SHLVL=1
PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
_=/usr/bin/env
```
Also when you launch `r` instead of `R` from bash
```
root@5c5bbcfcebf2:/# r -i
system('env')
R_UNZIPCMD=/usr/bin/unzip
HOSTNAME=5c5bbcfcebf2
SHLVL=1
R_INSTALL_PKG=littler
HOME=/root
R_ENVIRON=
R_LIBS_SITE=
R_BROWSER=xdg-open
PAGER=/usr/bin/pager
R_VERSION=3.4.3
BUILD_DATE=
R_SYSTEM_ABI=linux,gcc,gxx,gfortran,?
R_PROFILE_USER=
TAR=/bin/tar
_=/usr/local/bin/r
R_LIBS_USER=/usr/local/lib/R/site-library
TERM=xterm
R_ARCH=
R_PAPERSIZE_USER=letter
PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
R_BZIPCMD=/bin/bzip2
R_INCLUDE_DIR=/usr/local/lib/R/include
R_SESSION_TMPDIR=/tmp
R_OSTYPE=unix
LANG=en_US.UTF-8
R_CMD=/usr/local/lib/R/bin/Rcmd
R_DEFAULT_PACKAGES=NULL
R_PACKAGE_NAME=littler
R_GZIPCMD=/bin/gzip
LN_S=ln -s
SED=/bin/sed
R_PDFVIEWER=/usr/bin/xdg-open
R_PROFILE=
R_ENVIRON_USER=
R_TEXI2DVICMD=/usr/bin/texi2dvi
R_HOME=/usr/local/lib/R
R_PRINTCMD=/usr/bin/lpr
R_DOC_DIR=/usr/local/lib/R/doc
R_LIBS=/usr/local/lib/R/site-library:/usr/local/lib/R/library:/usr/lib/R/library
LC_ALL=en_US.UTF-8
PWD=/
R_SHARE_DIR=/usr/local/lib/R/share
R_ZIPCMD=/usr/bin/zip
R_PAPERSIZE=letter
R_PLATFORM=x86_64-pc-linux-gnu
MAKE=make
R_RD4PDF=times,inconsolata,hyper
EDITOR=vi
```
In the interactive shell launched from bash using `r -i`:
```
system('LD_LIBRARY_PATH= loffice --headless --convert-to pdf /tmp/foo.xlsx')
convert /tmp/foo.xlsx -> //foo.pdf using filter : calc_pdf_Export
system('loffice --headless --convert-to pdf /tmp/foo.xlsx')
convert /tmp/foo.xlsx -> //foo.pdf using filter : calc_pdf_Export
Overwriting: //foo.pdf
```
It is the child process environment inherited from the parent that causes the issue in your case.
Upvotes: 2
|
2018/03/16
| 641 | 2,225 |
<issue_start>username_0: I have the classes `SmallClass` and `BigClass`. `BigClass` has a private attribute that is an object of `SmallClass`. I've read about initializer lists, but the problem is that I cannot predict the value I need to pass to the constructor of `SmallClass`. The code looks something like this:
```
class SmallClass {
public:
SmallClass(int);
private:
int boo;
};
SmallClass::SmallClass(int i) {
boo = i;
}
class BigClass {
public:
BigClass();
private:
SmallClass sc; // I guess this calls the constructor (and does not compile)
};
BigClass::BigClass() {
int foo;
/* Do stuff. I end up with a "foo" having a non predictable value. */
sc = SmallClass(foo); /* Is there a way to do something like this? */
}
```<issue_comment>username_1: I suggest a workaround:
```
class SmallClass
{
public:
SmallClass();
public:
void SetBoo(int value);
private:
int boo;
};
SmallClass::SmallClass()
{
boo = 0;
}
void SmallClass::SetBoo(int value)
{
boo = value;
}
class BigClass
{
public:
BigClass();
private:
SmallClass sc; // I guess this calls the constructor (and does not compile)
};
BigClass::BigClass()
{
int foo;
/* Do stuff. I end up with a "foo" having a non predictable value. */
sc.SetBoo(foo); /* Yes, you can set the boo member */
}
```
Upvotes: 1 <issue_comment>username_2: >
> Is there an alternative to initializer lists?
>
>
>
Yes. A default member initializer is an alternative to the initializer list. However, that is not useful for your case. In fact, it is only useful when the arguments of the constructor *are* predictable.
"Non-predictability" of `foo` is not a problem for the initializer list; you don't need an alternative. Simply call a function:
```
int get_foo() {
int foo;
/* Do stuff. I end up with a "foo" having a non predictable value. */
return foo;
}
BigClass::BigClass() : sc(get_foo()) {}
```
If *"do stuff"* includes access to members (there are no other members in the example, I assume that it may be a simplification), then you can use a member function to achieve that. However, keep in mind that you may only access members that were initialized before `sc`.
Upvotes: 2
|
2018/03/16
| 896 | 3,464 |
<issue_start>username_0: I am working on a Windows application for sending email.
For text formatting I used the TinyMCE editor for the email body.
I used TinyMCE's insert-image functionality to add an image to the email body, but when the email is sent, the images do not appear in the recipient's email body.
Then I tried to add a base64 image manually, as below:
```

```
This also failed to load the images.
Can we use linked resources and an alternate view with TinyMCE?
How do I load images in the email body?<issue_comment>username_1: The real question you need to answer is "what is the best way to insert an image in an email". This is a very broad topic that has been answered many times - a little research should lead you to the most common approaches and their pros/cons:
<https://sendgrid.com/blog/embedding-images-emails-facts/>
<https://www.campaignmonitor.com/blog/how-to/2008/08/embedding-images-in-email/>
<https://www.quora.com/Whats-the-best-practices-for-embedding-images-into-HTML-emails>
Upvotes: 2 <issue_comment>username_2: Tiny MCE is just an HTML editor and not a tool which can be used for creating alternate views for email.
Moreover, not all email clients support inline images (with data URLs).
Alternate view is the only way to ensure that all email clients will be able to show your content in the intended manner.
Create a dictionary of linked resources:
```
Dictionary<string, byte[]> linkedResources = new Dictionary<string, byte[]>();
linkedResources.Add("img1", imageBytes); // imageBytes: a byte[] with the image data
```
Create a common method to send email:
```
public bool SendEmail(Dictionary<string, byte[]> linkedResources)
{
using (SmtpClient mailSender = new SmtpClient("SmtpHost", 22))
{
MailMessage emailMessage = new MailMessage()
{
Subject = "Subject",
SubjectEncoding = Encoding.ASCII,
IsBodyHtml = true,
Body = "Message",
BodyEncoding = Encoding.ASCII,
From = new MailAddress("<EMAIL>")
};
emailMessage.BodyEncoding = Encoding.ASCII;
emailMessage.IsBodyHtml = true;
AlternateView av = AlternateView.CreateAlternateViewFromString(emailMessage.Body, null, MediaTypeNames.Text.Html);
foreach (var item in linkedResources)
{
MemoryStream streamBitmap = new MemoryStream(item.Value);
var linkedResource = new LinkedResource(streamBitmap, MediaTypeNames.Image.Jpeg);
linkedResource.ContentId = item.Key;
av.LinkedResources.Add(linkedResource);
}
emailMessage.AlternateViews.Add(av);
mailSender.Send(emailMessage);
return true;
}
}
```
Upvotes: 4 [selected_answer]<issue_comment>username_3: There is no single best solution for this: you can embed the images, attach them as files, or just send them as links.
It depends on your requirements. I actually build image tags that link back to my server to get more analytics: the image is served through a handler, the handler logs when a user opens the email, and so I can tell how many users opened that email, and so on.
SendGrid is great for sending emails and getting analytics out of them because it tracks almost everything related to your email, and you should use SendGrid to avoid spam filters.
Upvotes: 0 <issue_comment>username_4: In my experience the only way to solve this is to store the image on the server where it's publicly accessible and then point the img tag's src attribute to that URL. You have to understand that the HTML used in email is severely restricted compared to regular HTML.
Upvotes: 0
|
2018/03/16
| 636 | 2,497 |
<issue_start>username_0: In an Android Studio project I need to get user input from an EditText. Here is my code:
**Xml file**
```
```
**in .java file**
```
button.setOnClickListener(new View.OnClickListener() {
EditText amount = (EditText) findViewById(R.id.amount);
String amount_text =amount.getText().toString();
@Override
public void onClick(View v) {
Toast.makeText(betNow.this,amount_text ,Toast.LENGTH_LONG).show();
}
});
```
Every time it shows an empty `Toast`.<issue_comment>username_1: You need to retrieve `amount_text` from the `EditText` inside **`onClick(View v)`**, like the code below:
```
button.setOnClickListener(new View.OnClickListener() {
@Override
public void onClick(View v) {
EditText amount = (EditText) findViewById(R.id.amount);
String amount_text = amount.getText().toString();
Toast.makeText(betNow.this, amount_text, Toast.LENGTH_LONG).show();
}
});
```
Upvotes: 2 <issue_comment>username_2: Don't initialize editText in onClick()
```
EditText amount = (EditText) findViewById(R.id.amount);
button.setOnClickListener(new View.OnClickListener() {
@Override
public void onClick(View v) {
String amount_text =amount.getText().toString();
Toast.makeText(betNow.this,amount_text ,Toast.LENGTH_LONG).show();
}
});
```
Upvotes: 1 <issue_comment>username_3: Find EditText outside of onClickListener
```
EditText amount = (EditText) findViewById(R.id.amount);
button.setOnClickListener(new View.OnClickListener() {
String amount_text =amount.getText().toString();
@Override
public void onClick(View v) {
Toast.makeText(betNow.this,amount_text ,Toast.LENGTH_LONG).show();
}
});
```
Upvotes: 1 <issue_comment>username_4: Your issue occurs when you put getText outside of onClick.
You should put `String amount_text = amount.getText().toString();`
in `onClick` and try again.
Upvotes: 0 <issue_comment>username_5: I get it... It didn't work earlier because I tried to read the value before the user clicked the button. Now I have changed the code like this:
```
button.setOnClickListener(new View.OnClickListener() {
@Override
public void onClick(View v) {
final EditText amount = (EditText) findViewById(R.id.amount);
final String amount_text =amount.getText().toString();
Toast.makeText(betNow.this,amount_text ,Toast.LENGTH_LONG).show();
}
});
```
It's working fine.
Upvotes: 2
|
2018/03/16
| 406 | 1,094 |
<issue_start>username_0: I have file with only bare data like:
foo.py
```
TL = {
'aa': {'L1104205608': {'ada': {'bb': 'APC', 'ip_addr': '10.44.184.12', 'port': 2}},
'L1104205616': {'ada': {'bb': 'APC', 'ip_addr': '10.44.184.13', 'port': 3}}}}
aaa = 'bbb'
```
I need to import files like this and then list only the values defined inside them:
I do use:
```
imp.load_source('foo', r'\foo.py')
```
...and I am stuck here :( How do I get an output with 'TL' and 'aaa' and their values?<issue_comment>username_1: Do you really need to load the module dynamically?
I think you just need
```
import foo
print(foo.TL)
print(foo.aaa)
```
If it isn't in a package, you can do this before the import:
```
import sys
sys.path.append('/path/to/dir/with/foo')  # directory containing foo.py
```
If you need a dynamic import, you can probably do:
```
foo = imp.load_source('foo', r'\foo.py')
print(foo.TL)
print(foo.aaa)
```
Upvotes: 1 <issue_comment>username_2: What I was looking for was vars():
`for var in vars(module): print(var)`
What is still unknown to me is how to reduce the depth to 1 (if the imported module itself imports modules, I don't want to see their vars).
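One way to skip nested modules when listing (a sketch):
```
import types
for name, value in vars(foo).items():
    # skip dunder names and anything that is itself a module
    if not name.startswith('__') and not isinstance(value, types.ModuleType):
        print(name, '=', value)
```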
Upvotes: 0
|
2018/03/16
| 325 | 989 |
<issue_start>username_0: How can I move a negative sign from the last position of a string or integer to the first position in Hive and/or Spark?
example: 22-
required: -22
My code is:
```
val Validation1 = spark.sql("Select case when substr(YTTLSVAL-,-1,1)='-' then cast(concat('-',substr(YTTLSVAL-,1,length(YTTLSVAL-)-1)) as int) else cast(YTTLSVAL- as int) end as column_name")
```
|
2018/03/16
| 1,322 | 4,382 |
<issue_start>username_0: I'm having trouble with a piece of code where an array typedef of a struct is created. That typedef is then used in another struct.
When receiving the typedef in a function and initialising the struct that contains the typedef, I only get the data of the first element in the array.
Below is a simplified example of what I'm seeing at the moment.
```
struct simple_struct {
double a;
};
typedef struct simple_struct arr_typedef[2];
struct struct_with_typedef {
const arr_typedef typedef_arr;
};
void foo(const arr_typedef arg_s) {
struct struct_with_typedef test = {
*arg_s
};
int i;
for (i = 0; i < 2; i++) {
printf("value: %f \n", test.typedef_arr[i].a);
}
}
int main(int argc, char* argv[]) {
arr_typedef d_test = { {1}, {2} };
foo(d_test);
return 1;
}
```
When compiled using gcc 4.4 and run, I see the following output:
```
~/public > ./test
value: 1.000000
value: 0.000000
```
Would someone be able to explain why the value of the second item isn't available?
If I leave out the dereference I get the following error whilst compiling, which I also don't get:
>
> test.c:82: error: incompatible types when initializing type 'double' using type 'const struct simple\_struct \*'
>
>
>
---
I have removed the typedef and the same result persists. I sort of understand that only one initialiser value is given. But there is only one member in the struct, or are they expanded in the background?
If so, how would you initialise both values?
```
struct simple_struct {
double a;
};
struct struct_with_arr {
const struct simple_struct struct_arr[2];
};
void foo(const struct simple_struct arg_s[2]) {
struct struct_with_arr test = {{*arg_s}};
int i;
for (i = 0; i < 2; i++) {
printf("value: %f \n", test.struct_arr[i].a);
}
}
int main(int argc, char* argv[]) {
struct simple_struct d_test[2] = { {1}, {2} };
foo(d_test);
return 1;
}
```<issue_comment>username_1: In this declaration
```
struct struct_with_typedef test = {
*arg_s
};
```
an object of type `struct simple_struct` is used to initialize an array of type `const arr_typedef`.
Firstly you need to enclose the initializer in braces and add an initializer for the second element of the array if you want to do so.
A correct way to initialize the array is the following
```
#include
struct simple\_struct
{
double a;
};
typedef struct simple\_struct arr\_typedef[2];
struct struct\_with\_typedef
{
const arr\_typedef typedef\_arr;
};
void foo( const arr\_typedef arg\_s )
{
struct struct\_with\_typedef test =
{
{ arg\_s[0], arg\_s[1] }
};
for ( int i = 0; i < 2; i++ )
{
printf("value: %f \n", test.typedef\_arr[i].a);
}
}
int main( void )
{
arr\_typedef d\_test = { {1}, {2} };
foo(d\_test);
return 0;
}
```
The program output is
```
value: 1.000000
value: 2.000000
```
You can write the initializers also the following way
```
struct struct_with_typedef test =
{
{ *arg_s, *( arg_s + 1 ) }
};
```
Take into account that an array used as an initializer expression is implicitly converted to a pointer to its first element. Dereferencing the pointer gives you the first element itself.
Upvotes: 3 [selected_answer]<issue_comment>username_2: There is no syntax in C for initializing an array with a single initializer representing another array to copy values from.
To fix your code you could do one of these options:
1. List the members: `struct struct_with_arr test = {{arg_s[0], arg_s[1]}};`
2. Change the function to accept `struct struct_with_arr` as the parameter type, in which case you can use `struct struct_with_arr test = arg;` (see the sketch after this list)
3. Copy without an initializer: `struct struct_with_arr test; memcpy(&test.struct_arr, arg_s, sizeof test.struct_arr);`. (Actually you can't use this option since you have defined the struct element as `const` ... not really a great idea in the first place in my opinion)
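A minimal sketch of option 2 (hypothetical; the rest of `foo` stays as in the question):
```
void foo(struct struct_with_arr arg) {
    /* plain struct copy; this is initialization, so the const member is fine */
    struct struct_with_arr test = arg;
    /* ... use test.struct_arr as before ... */
}
```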
---
For completeness I will mention the code:
```
struct struct_with_arr foo = *(struct struct_with_arr *)arg_s;
```
This is one of those things that is technically undefined behaviour (if the argument source was not actually a `struct_with_arr`) but is likely to work on any actual compiler that exists. My advice would be to not do this.
Upvotes: 1
|
2018/03/16
| 875 | 3,258 |
<issue_start>username_0: I am having a lot of confusion using the `sys.getsizeof` function in python. All I want to find out is that for say a floating point value, is the system using 4 or 8 bytes (i.e. single or double precision in C terms).
I do the following:
```
import sys
x = 0.0
sys.getsizeof(x) # Returns 24
type(x) # returns float
sys.getsizeof(float) # Returns 400.
```
How can I simply find out the how many bytes are actually used for the floating point representation. I know it should be 8 bytes but how can I verify this (something like the `sizeof` operator in C++)<issue_comment>username_1: From [the docs](https://docs.python.org/3/library/sys.html#sys.getsizeof):
>
> getsizeof() calls the object’s **sizeof** method and adds an additional garbage collector overhead if the object is managed by the garbage collector.
>
>
>
`sys.getsizeof` is not about the byte size as in C.
For `int` there is [`bit_length()`](https://docs.python.org/3/library/stdtypes.html#int.bit_length).
Upvotes: 2 <issue_comment>username_2: Running
```
sys.getsizeof(float)
```
does not return the size of any individual float, it returns the size of the `float` *class*. That class contains a lot more data than just any single float, so the returned size will also be much bigger.
If you just want to know the size of a single float, the easiest way is to simply instantiate some arbitrary float. For example:
```
sys.getsizeof(float())
```
Note that
```
float()
```
simply returns `0.0`, so this is actually equivalent to:
```
sys.getsizeof(0.0)
```
This returns `24` bytes in your case (and probably for most other people as well). In the case of CPython (the most common Python implementation), every `float` object will contain a reference counter and a pointer to the type (a pointer to the `float` class), which will each be 8 bytes for 64bit CPython or 4 bytes each for 32bit CPython. The remaining bytes (`24 - 8 - 8 = 8` in your case which is very likely to be 64bit CPython) will be the bytes used for the actual float value itself.
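A quick sanity check of that breakdown (the numbers are CPython implementation details, not guarantees):
```
import sys
print(sys.getsizeof(0.0))  # 24 on 64-bit CPython
# 24 = 8 (refcount) + 8 (type pointer) + 8 (C double payload)
```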
This is not guaranteed to work out the same way for other Python implementations though. The [language reference](https://docs.python.org/3/reference/datamodel.html#objects-values-and-types) says:
>
> These represent machine-level double precision floating point numbers. You are at the mercy of the underlying machine architecture (and C or Java implementation) for the accepted range and handling of overflow. Python does not support single-precision floating point numbers; the savings in processor and memory usage that are usually the reason for using these are dwarfed by the overhead of using objects in Python, so there is no reason to complicate the language with two kinds of floating point numbers.
>
>
>
and I'm not aware of any runtime methods to accurately tell you the number of bytes used. However, note that the quote above from the language reference does say that Python only supports double precision floats, so in most cases (depending on how critical it is for you to always be 100% right) it should be comparable to double precision in C.
Upvotes: 5 [selected_answer]<issue_comment>username_3: ```
import ctypes
# size in bytes of a C double on this platform (typically 8)
print(ctypes.sizeof(ctypes.c_double))
```
Upvotes: 3
|
2018/03/16
| 1,301 | 4,708 |
<issue_start>username_0: I researched and found out that oci_connect() is the way to go. I found out that I could either use the connect name from the tnsnames.ora file or use the easy connect syntax. Since my database isn't stored locally and I had no idea where said tnsnames.ora file was located on apex.oracle.com, I went with easy connect strings. Here's what I've done so far.
```
$username = "myemail";
$host = "apex.oracle.com";
$dbname = "name";
$password = "<PASSWORD>";
// url = username@host/db_name
$dburl = $username . "@".$host."/".$dbname;
$conn = oci_connect ($username, $password, $dburl);
if(!$conn) echo "Connection failed";
```
I get a
```
Call to undefined function oci_connect()
```
So what would be the way to go?
UPDATE 1:
---------
Here's the list of things I did:
* Installed Oracle DB
* Unzipped Oracle Instance client
* Set the environment variables
* Uncommented the extension=php\_oci8\_12c.dll in php.ini
* Copied all the \*.dll files from the instance client folder to xampp/php and xampp/apache/bin
* also made sure the php/ext folder had the required dlls.
That was last night. I have restarted my PC multiple times, and Apache with it, but I'm still getting this error:
```
Call to undefined function oci_connect()
```
At this point I'm frustrated and don't know where to go from here. PHP just doesn't seem to link up to OCI8. I can access the databases I created with the Database Configuration Assistant from cmd via the 'sqlplus' command and run a few select statements. So everything seems to be set up right; it's just PHP that's having problems trying to use `oci_connect()`.
My database.php, now is setup as:
```
public function __construct()
{
error_reporting(E_ALL);
if (function_exists("oci_connect")) {
echo "oci_connect found\n";
} else {
echo "oci_connect not found\n";
exit;
}
$host = 'localhost';
$port = '1521';
// Oracle service name (instance)
$db_name = 'haatbazaar';
$db_username = "SYSTEM";
$db_password = "<PASSWORD>";
$tns = "(DESCRIPTION =
(CONNECT_TIMEOUT=3)(RETRY_COUNT=0)
(ADDRESS_LIST =
(ADDRESS = (PROTOCOL = TCP)(HOST = $host)(PORT = $port))
)
(CONNECT_DATA =
(SERVICE_NAME = $db_name)
)
)";
$tns = "$host:$port/$db_name";
try {
$conn = oci_connect($db_username, $db_password, $tns);
if (!$conn) {
$e = oci_error();
throw new Exception($e['message']);
}
echo "Connection OK\n";
$stid = oci_parse($conn, 'SELECT * FROM ALL_TABLES');
if (!$stid) {
$e = oci_error($conn);
throw new Exception($e['message']);
}
// Perform the logic of the query
$r = oci_execute($stid);
if (!$r) {
$e = oci_error($stid);
throw new Exception($e['message']);
}
// Fetch the results of the query
while ($row = oci_fetch_array($stid, OCI_ASSOC + OCI_RETURN_NULLS)) {
$row = array_change_key_case($row, CASE_LOWER);
print_r($row);
break;
}
// Close statement
oci_free_statement($stid);
// Disconnect
oci_close($conn);
}
catch (Exception $e) {
print_r($e);
}
}
```
And it outputs:
`oci_connect not found`
OCI8 is listed in my `phpInfo().`<issue_comment>username_1: First, this has been asked before, but [Oracle doesn't allow remote database connections to their free apex.oracle.com example service](https://stackoverflow.com/questions/33518409/is-it-possible-to-connect-to-db-in-oracle-apex-application-externally). Sorry. You can only interact with it through the web interface.
Second, if you do find a remote Oracle db to connect to, you'll need to install the [Oracle Instant Client](http://www.oracle.com/technetwork/database/database-technologies/instant-client/overview/index.html) for your OS, and [configure the PHP OCI8 extension](http://php.net/manual/en/oci8.installation.php).
Upvotes: 0 <issue_comment>username_2: Okay I found out the culprit behind this whole ordeal. I had set the `PATH` Environment Variables but apparently forgot to add a new system environment variable named `TNS_ADMIN` and set the directory to `PATH/TO/INSTANCE/CLIENT`.
Here's the list of System Environment variable you need to add:
* Edit `PATH` system variable and add the `$ORACLE_HOME/bin` dir
* Edit `PATH` system variable and add the Instance Client dir
* Add a new system variable, name it `TNS_ADMIN`, and set it to the Instance Client dir (see the example below)
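For example, from an elevated command prompt (the path is an assumption; use your own Instant Client directory):
```
setx /M TNS_ADMIN "C:\instantclient"
```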
I hope this helps those who come looking.
Upvotes: 2 [selected_answer]
|