date | nb_tokens | text_size | content
---|---|---|---|
2018/03/15
| 803 | 2,526 |
<issue_start>username_0: I'm writing a PPM file writer. The file format description can be found [here](https://en.wikipedia.org/wiki/Netpbm_format). I want to write the file in binary format, which is the `P6` type.
>
> The P6 binary format of the same image represents each color component of each pixel with one byte (thus three bytes per pixel) in the order red, green, then blue. The file is smaller, but the color information is difficult to read by humans.
>
>
>
My image buffer is originally an array of `Vec3f`, which internally stores a `float[3]`, so I can manage to convert the `Vec3f` array to a `float` array.
What I failed to do is convert the `float` array to an `unsigned char` array so that I can write to a `.ppm` file.
Here's my code:
```
void writePPM(char *fn, Vec3f *buffer, int w, int h) {
FILE *fp = fopen(fn, "wb");
unsigned char *char_buffer = new unsigned char[w*h*3];
for (int i = 0; i < w*h; i+=3) {
char_buffer[i] = (unsigned char)buffer[i].x();
char_buffer[i+1] = (unsigned char)buffer[i].y();
char_buffer[i+2] = (unsigned char)buffer[i].z();
}
fprintf(fp, "P6\n%d %d\n\255\n", w, h);
fwrite(char_buffer, sizeof(unsigned char), w*h*3, fp);
fclose(fp);
}
```
>
> buffer[i].x(), buffer[i].y(), buffer[i].z() range from [0,255]
>
>
>
The generated image is completely black, but the width and height match my image buffer, so I think the `fwrite` part is wrong. How can I fix it?
I'm also confused about how a `float` can be converted to a `char`? A `char` is 8-bit and a `float` is bigger, so there must be some data loss?<issue_comment>username_1: Check your buffer index: `buffer` holds one `Vec3f` per pixel, so it should be indexed with `i/3`.
```
for (int i = 0; i < w*h*3; i+=3) {
char_buffer[i] = (unsigned char)buffer[i/3].x();
char_buffer[i+1] = (unsigned char)buffer[i/3].y();
char_buffer[i+2] = (unsigned char)buffer[i/3].z();
}
```
Upvotes: 1 <issue_comment>username_2: This is a bit of a guess but I would expect a pixel specified as three floats would have each float in the range [0, 1] with 0 being black and 1 being white. Whereas I would expect your char based pixel would expect each element to be in the range [0, 255] with 0 being black and 255 being white.
The cast `(char) someFloat` converts a `float` to a `char` by truncating the fractional part, so all your pixels will end up black - or very nearly black.
Try multiplying the float by 255 before the cast (and make sure the `char` is unsigned), i.e. `(unsigned char)(someFloat * 255)`
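Combining both answers' fixes (per-pixel indexing with `i/3`, and iterating over all `w*h*3` bytes), plus one more header bug worth noting: in the original `fprintf` format string, `\255` is an octal escape that emits a single byte (value 173), not the text `255`. A corrected sketch could look like this; the `Vec3f` below is a hypothetical minimal stand-in with `x()/y()/z()` accessors returning values already in [0,255]:

```cpp
#include <cstdio>

// Hypothetical minimal Vec3f stand-in; the real one wraps a float[3].
struct Vec3f {
    float v[3];
    float x() const { return v[0]; }
    float y() const { return v[1]; }
    float z() const { return v[2]; }
};

void writePPM(const char *fn, const Vec3f *buffer, int w, int h) {
    FILE *fp = std::fopen(fn, "wb");
    if (!fp) return;
    unsigned char *char_buffer = new unsigned char[w * h * 3];
    // One Vec3f per pixel, three output bytes per pixel.
    for (int i = 0; i < w * h * 3; i += 3) {
        char_buffer[i]     = (unsigned char)buffer[i / 3].x();
        char_buffer[i + 1] = (unsigned char)buffer[i / 3].y();
        char_buffer[i + 2] = (unsigned char)buffer[i / 3].z();
    }
    // Write "255" as text; "\255" would be an octal escape (one byte).
    std::fprintf(fp, "P6\n%d %d\n255\n", w, h);
    std::fwrite(char_buffer, 1, (size_t)w * h * 3, fp);
    std::fclose(fp);
    delete[] char_buffer;
}
```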
Upvotes: 0
|
2018/03/15
| 507 | 1,893 |
<issue_start>username_0: I have 40 databases attached to my MS SQL Server 2014 instance.
I need to create a new user with read-only access to one database only. He must not even be able to see the names of the other databases attached to my instance.
I have already applied a solution I found at <https://stackoverflow.com/a/10219004/4728323>. This gives read-only access to one database only, but the names of my other databases are still visible when logging in with that new user (access is not allowed, but the db names are visible in the database list).<issue_comment>username_1: As specified in <https://technet.microsoft.com/en-us/library/ms189077(v=sql.105).aspx>, "By default, the VIEW ANY DATABASE permission is granted to the public role. Therefore, by default, every user that connects to an instance of SQL Server can see all databases in the instance."
To accomplish what you want, you should revoke the `VIEW ANY DATABASE` permission for the `public` role:
```
REVOKE VIEW ANY DATABASE TO public
```
Alternatively, you can deny this permission only to that user:
```
DENY VIEW ANY DATABASE TO YourRestrictedUser
```
Upvotes: 0 <issue_comment>username_2: This is possible to achieve.
Contained databases are available from MS SQL 2012 onward.
1- Change the MS SQL Server instance setting by right-clicking the server name in Object Explorer and opening its properties.
2- Go to the Advanced tab in the Properties window.
3- Set the property "Enable Contained Databases" to "True" and click "OK".
4- Now go to the properties of your desired database by right-clicking the database.
5- Go to the "Options" tab and set the "Containment Type" to "Partial".
6- Now right-click the Users node inside the database and add a user; select the "SQL user with password" option and create the new user.
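The same steps can also be scripted in T-SQL; this is a hedged sketch where the database, user and password names are placeholders:

```sql
-- Enable contained database authentication at the instance level
EXEC sp_configure 'contained database authentication', 1;
RECONFIGURE;

-- Make the target database partially contained
ALTER DATABASE YourDatabase SET CONTAINMENT = PARTIAL;

-- Create a contained (password-based) user and grant read-only access
USE YourDatabase;
CREATE USER RestrictedReader WITH PASSWORD = 'StrongPassword!1';
ALTER ROLE db_datareader ADD MEMBER RestrictedReader;
```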
Source of Information:
<https://www.mssqltips.com/sqlservertip/2428/sql-server-2012-contained-database-feature/>
Upvotes: 3 [selected_answer]
|
2018/03/15
| 549 | 1,855 |
<issue_start>username_0: With following script I'd like to add a group to an AD user, but it doesn't do it, and I get no error.
```
param($Userid,$AdditionalGroup)
# Get user
$User = Get-ADUser `
-Filter "SamAccountName -eq $Userid"
# Add comment
Add-ADGroupMember `
-Identity $AdditionalGroup `
-Members $User
```<issue_comment>username_1: Filtering like that didn't work for me (and generated an error); however, adding `'` before and after `$Userid` did the trick.
```
param($Userid,$AdditionalGroup)
# Get user
$User = Get-ADUser `
-Filter "SamAccountName -eq '$Userid'"
# Add comment
Add-ADGroupMember `
-Identity $AdditionalGroup `
-Members $User
```
Upvotes: 3 [selected_answer]<issue_comment>username_2: This will also work (without the single quotes):
```
$User = Get-ADUser -Filter {SamAccountName -eq $Userid}
```
Upvotes: 0 <issue_comment>username_3: As you're only doing a straight `-eq` match on sAMAccountName, you don't need to use `-Filter`; the `Identity` parameter will accept this along with other inputs:
>
> * A distinguished name
> * A GUID (objectGUID)
> * A security identifier (objectSid)
> * A SAM account name (sAMAccountName)
>
>
> ([documentation link](https://learn.microsoft.com/en-us/powershell/module/addsadministration/get-aduser#required-parameters))
>
>
>
Which makes your code very simple:
```
$User = Get-ADUser -Identity $Userid
```
---
To simplify it even further, you don't even need to use `Get-ADUser` at all!
`Add-ADGroupMember -Members` ([link](https://learn.microsoft.com/en-us/powershell/module/addsadministration/add-adgroupmember#required-parameters)) accepts the same parameters as I mentioned for `Identity` ...
So you can use `$UserID` directly:
```
param($Userid,$AdditionalGroup)
Add-ADGroupMember -Identity $AdditionalGroup -Members $UserID
```
Upvotes: 1
|
2018/03/15
| 1,093 | 3,374 |
<issue_start>username_0: [Screensnot of my project folder](https://i.stack.imgur.com/AdIiz.png)
can any please help me to start a new simple project in Sublime..
this is my index.js
```
import React, {Component} from 'react';
import ReactDOM from 'react-dom';
class Blog extends Component {
render(){
const sidebar = (
{this.props.posts.map((post) =>
* {post.title}
)}
);
const content = this.props.posts.map((post) =>
### {post.title}
{post.content}
);
return (
{sidebar}
---
{content}
);
}
}
const posts = [
{id: 1, title: 'Hello World', content: 'Welcome to learning React!'},
{id: 2, title: 'Installation', content: 'You can install React from npm.'}
];
ReactDOM.render(
,
document.getElementById('root')
);
```
this is my package.json
```
{
"name": "mypro",
"version": "1.0.0",
"description": "",
"main": "index.js",
"scripts": {
"test": "echo \"Error: no test specified\" && exit 1",
"start": "react-scripts start",
"build": "react-scripts build"
},
"author": "",
"license": "ISC",
"dependencies": {
"babel": "^6.23.0",
"babel-cli": "^6.26.0",
"eslint": "^4.18.2",
"react": "^16.2.0",
"react-dom": "^16.2.0"
},
"devDependencies": {
"react-scripts": "^1.1.1"
}
}
```
Is there anything else needed to run this project?
I'm trying to run this project from cmd: I installed react, react-dom etc., installed the dependencies with `npm i`, and then tried to run the project with `npm run start` in cmd:
```
> mypro@1.0.0 start E:\Projects\MyPro
> react-scripts start
Could not find a required file.
Name: index.html
Searched in: E:\Projects\MyPro\public
npm ERR! Windows_NT 6.3.9600
npm ERR! argv "C:\\Program Files\\nodejs\\node.exe" "C:\\Program Files\\nodejs\\
node_modules\\npm\\bin\\npm-cli.js" "run" "start"
npm ERR! node v7.10.1
npm ERR! npm v4.2.0
npm ERR! code ELIFECYCLE
npm ERR! errno 1
npm ERR! mypro@1.0.0 start: `react-scripts start`
npm ERR! Exit status 1
npm ERR!
npm ERR! Failed at the mypro@1.0.0 start script 'react-scripts start'.
npm ERR! Make sure you have the latest version of node.js and npm installed.
npm ERR! If you do, this is most likely a problem with the mypro package,
npm ERR! not with npm itself.
npm ERR! Tell the author that this fails on your system:
npm ERR! react-scripts start
npm ERR! You can get information on how to open an issue for this project with:
npm ERR! npm bugs mypro
npm ERR! Or if that isn't available, you can get their info via:
npm ERR! npm owner ls mypro
npm ERR! There is likely additional logging output above.
npm ERR! Please include the following file with any support request:
npm ERR! C:\Users\keystrokeslnc08\AppData\Roaming\npm-cache\_logs\2018-03-15
T08_39_46_854Z-debug.log
```
this is my index.html page
```
MyProject
hlo everyone
============
```
These errors show up. What should I do next?<issue_comment>username_1: You should have react-scripts as a devDependency in your package.json:
```
npm install --save-dev react-scripts
```
Should be better
Upvotes: 2 <issue_comment>username_2: best way to start a reactjs simple project
first install nodejs (you need a server to run your app locally)
open command prompt
1: npm install -g create-react-app
2: create-react-app your-app-name
go to directory path (ex: cd your-app-name)
finally : npm start
that's it
Upvotes: -1
|
2018/03/15
| 488 | 1,771 |
<issue_start>username_0: Actually I am trying to launch the AVD emulator in Android Studio. The Problem: "Intel HAXM is required to run this AVD".
I know that this is a common problem, however none of the existing answers I found could solve my problem:
* I enabled "Intel Virtualization Technology" in the BIOS
* VT-d is Enabled
* The NX-Disable-bit is Enabled
* The status of Intel x86 Emulator Accelerator (HAXM installer) is "Installed"
* In "Turn Windows Features on or off" I did not find Hyper-V, I tried to disable it using dism.exe but dism did not find it either
* I have Avast installed, but the Hardware Acceleration setting is UNchecked
* I ran the intelhaxm-android.exe, but "Remove" was the only clickable option
* I downloaded the latest Intel HAXM version from the Intel site, but during the installation process the message "This computer meets the requirements for HAXM, but Intel Virtualization Technology (VT-x) is not turned on" appeared
At the moment I am really frustrated since it seems that these steps worked for the vast majority. I would be really happy about any kind of help.
**UPDATE**:
With the help of Dhinakaran I was able to find in the task manager:
>
> Virtualization: Deactivated
>
>
> Hyper-V-Support: Yes
>
>
>
My OS is Windows 10 Home.
|
2018/03/15
| 1,680 | 6,361 |
<issue_start>username_0: C#
I searched but couldn't find the specific situation I find myself in. How do I call a method in one class from another class?
```
public class Box
{
public double length; // Length of a box
public double breadth; // Breadth of a box
public double height; // Height of a box
public double Volume(double len, double bre, double hei)
{
double totvolume;
totvolume = len * bre * hei;
return totvolume;
}
}
public class Boxtester
{
static void Main(string[] args)
{
Box Box1 = new Box(); //create new class called Box1
Box Box2 = new Box(); //create new class called Box2
double returnvolume; //create variable to hold the returned data from method
// box 1 specification
Box1.length = 6.0;
Box1.breadth = 7.0;
Box1.height = 5.0;
// box 2 specification
Box2.length = 11.0;
Box2.breadth = 16.0;
Box2.height = 12.0;
//Calculate and display volume of Box1
Box.Volume volumebox1 = new Box.Volume(); //creating new instance of Volume method called volumebox1
returnvolume = volumebox1.Volume(Box1.length, Box1.breadth, Box1.height); //giving variables to method
Console.WriteLine("Volume of Box1 : {0}", volumebox1); //write return value
//Calculate and display volume of Box2
Box.Volume volumebox2 = new Box.Volume(); //creating new instance of Volume method called volumebox2
returnvolume = volumebox2.Volume(Box2.length, Box2.breadth, Box2.height); //giving variables to method
Console.WriteLine("Volume of Box1 : {0}", volumebox2); //write return value
Console.ReadKey();
}
```
This gives the error "The type name 'Volume' does not exist in the type 'Box'<issue_comment>username_1: Your code should call the `Volume()` method, not creating a variable of type Volume which doesn't exist. Also, `Volume()` should use class members values, not parameters:
```
public class Box
{
public double length; // Length of a box
public double breadth; // Breadth of a box
public double height; // Height of a box
public double Volume()
{
double totvolume;
totvolume = length * breadth * height;
return totvolume;
}
}
public class Boxtester
{
static void Main(string[] args)
{
Box Box1 = new Box(); //create new variable of type Box called Box1
Box Box2 = new Box(); //create new variable of type Box called Box2
double returnvolume; //create variable to hold the returned data from method
// box 1 specification
Box1.length = 6.0;
Box1.breadth = 7.0;
Box1.height = 5.0;
// box 2 specification
Box2.length = 11.0;
Box2.breadth = 16.0;
Box2.height = 12.0;
//Calculate and display volume of Box1
returnvolume = Box1.Volume();
Console.WriteLine("Volume of Box1 : {0}", returnvolume); //write return value
//Calculate and display volume of Box2
returnvolume = Box2.Volume();
Console.WriteLine("Volume of Box2 : {0}", returnvolume); //write return value
Console.ReadKey();
}
}
```
Upvotes: 2 <issue_comment>username_2: `Box.Volume volumebox1 = new Box.Volume();` this line won't work. You need to call the method of box1 and box2 itself.
The right call would be:
```
double Box1Volume = Box1.Volume(Box1.length, Box1.breadth, Box1.height);
```
I would make the volume a property; that way you can access it like a variable but still get the right values.
Personally I prefer to work with properties, because that way you can, if needed, manipulate the values before they are accessed, which is useful for something computed constantly, like volume.
Here's my example for it:
```
using System;
public class Program
{
public static void Main()
{
Box Box1 = new Box(); //create new class called Box1
Box Box2 = new Box(); //create new class called Box2
// box 1 specification
Box1.Length = 6.0;
Box1.Breadth = 7.0;
Box1.Height = 5.0;
// box 2 specification
Box2.Length = 11.0;
Box2.Breadth = 16.0;
Box2.Height = 12.0;
Console.WriteLine("Volume of Box1 : {0}", Box1.Volume);
Console.WriteLine("Volume of Box2 : {0}", Box2.Volume);
}
}
public class Box
{
public double Length {get;set;} // Length of a box
public double Breadth {get;set;} // Breadth of a box
public double Height {get;set;}// Height of a box
public double Volume { get { return Length * Breadth * Height;}}
}
```
this gives the output
```
Volume of Box1 : 210
Volume of Box2 : 2112
```
Upvotes: 0 <issue_comment>username_3: ```
public static class Box1
{
public static double length; // Length of a box
public static double breadth; // Breadth of a box
public static double height; // Height of a box
public static double Volume(double len, double bre, double hei)
{
double totvolume;
totvolume = len * bre * hei;
return totvolume;
}
}
public static class Box2
{
public static double length; // Length of a box
public static double breadth; // Breadth of a box
public static double height; // Height of a box
public static double Volume(double len, double bre, double hei)
{
double totvolume;
totvolume = len * bre * hei;
return totvolume;
}
}
public class Boxtester
{
static void Main(string[] args)
{
// box 1 specification
Box1.length = 6.0;
Box1.breadth = 7.0;
Box1.height = 5.0;
// box 2 specification
Box2.length = 11.0;
Box2.breadth = 16.0;
Box2.height = 12.0;
Console.WriteLine("Volume of Box1 : {0}", Box1.Volume(Box1.length, Box1.breadth, Box1.height));
Console.WriteLine("Volume of Box1 : {0}", Box2.Volume(Box2.length, Box2.breadth, Box2.height));
Console.ReadKey();
}
}
```
Upvotes: 0
|
2018/03/15
| 1,494 | 5,363 |
<issue_start>username_0: I'm trying to run an SQL query:
```
SELECT (SELECT
SUM(fldAmount)
FROM tblSession s
LEFT JOIN tblTransMain m
ON s.fldStore = m.fldStore
AND s.fldSessionID = m.fldSessionID
LEFT JOIN tblTransPayments t
ON m.fldStore = t.fldStore
AND m.fldSequence = t.fldSequence
WHERE s.fldStore = 3
AND s.fldDrawer = 8003)
```
it takes a very long time (around 30-40 seconds).
If I remove the first `SELECT`, the query takes 0.05 milliseconds.
I need the first `SELECT` because the actual query I need to run is more complex:
```
SELECT (SELECT
SUM(fldAmount)
FROM tblSession s
LEFT JOIN tblTransMain m
ON s.fldStore = m.fldStore
AND s.fldSessionID = m.fldSessionID
LEFT JOIN tblTransPayments t
ON m.fldStore = t.fldStore
AND m.fldSequence = t.fldSequence
WHERE s.fldStore = 3
AND s.fldDrawer = 8003
AND p.fldID = t.fldPaymeansGroup)
FROM tblPaymeansGroups p
```
|
2018/03/15
| 1,333 | 2,751 |
<issue_start>username_0: I have a simple js program.
It creates a tile based map for a game.
The entire canvas is colored green but it should not happen.
Tiles on the canvas should be colored based on whether their cell is `0` or `1`.
```js
var tileW = 40,
tileH = 40;
var mapW = 10,
mapH = 10;
var gameMap = [
[0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
[0, 1, 1, 1, 0, 1, 1, 1, 1, 0],
[0, 1, 0, 0, 0, 1, 0, 0, 0, 0],
[0, 1, 1, 1, 1, 1, 1, 1, 1, 0],
[0, 1, 0, 1, 0, 0, 0, 1, 1, 0],
[0, 1, 0, 1, 0, 0, 0, 1, 1, 0],
[0, 1, 1, 1, 1, 1, 1, 1, 1, 0],
[0, 1, 0, 0, 0, 0, 0, 1, 0, 0],
[0, 1, 1, 1, 0, 1, 1, 1, 1, 0],
[0, 0, 0, 0, 0, 0, 0, 0, 0, 0]
];
var canvas = document.getElementById("canvas");
var ctx = canvas.getContext("2d");
function draw_map() {
for (var x = 0; x < mapW; x++) {
for (var y = 0; y < mapH; y++) {
switch (gameMap[y][x]) {
case 0:
ctx.fillStyle = "#999787";
case 1:
ctx.fillStyle = "#ccffcc";
}
ctx.fillRect(x * tileW, y * tileH, tileW, tileH); //->>?? this part is creating problem
}
}
}
draw_map();
```<issue_comment>username_1: You're missing `break`s in your switch statement, so every case falls through to the last (green) fill style. Try this instead.
```
switch(gameMap[y][x]) {
case 0:
ctx.fillStyle="#999787";
break;
case 1:
ctx.fillStyle="#ccffcc";
break;
}
```
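The fall-through can be reproduced in isolation; in this sketch (function names are illustrative, not from the question) every `case 0` without a `break` ends up with the green style:

```javascript
// Mimics the bug: without break, case 0 falls through into case 1
// and fillStyle is overwritten with the green color.
function styleWithFallthrough(cell) {
  let fillStyle;
  switch (cell) {
    case 0:
      fillStyle = "#999787";
      // missing break: execution continues into case 1
    case 1:
      fillStyle = "#ccffcc";
  }
  return fillStyle;
}

function styleWithBreak(cell) {
  switch (cell) {
    case 0: return "#999787";
    case 1: return "#ccffcc";
  }
}

console.log(styleWithFallthrough(0)); // "#ccffcc" - everything turns green
console.log(styleWithBreak(0));       // "#999787"
```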
Upvotes: 3 [selected_answer]<issue_comment>username_2: You're missing the break; statements on switch:
```
switch (gameMap[y][x]) {
case 0:
ctx.fillStyle = "#999787";
break; // <------- here
case 1:
ctx.fillStyle = "#ccffcc";
break; // <------- here
}
```
Upvotes: 1 <issue_comment>username_3: ```
var tileW=40, tileH=40;
var mapW=10, mapH=10;
var gameMap = [
[0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
[0, 1, 1, 1, 0, 1, 1, 1, 1, 0],
[0, 1, 0, 0, 0, 1, 0, 0, 0, 0],
[0, 1, 1, 1, 1, 1, 1, 1, 1, 0],
[0, 1, 0, 1, 0, 0, 0, 1, 1, 0],
[0, 1, 0, 1, 0, 0, 0, 1, 1, 0],
[0, 1, 1, 1, 1, 1, 1, 1, 1, 0],
[0, 1, 0, 0, 0, 0, 0, 1, 0, 0],
[0, 1, 1, 1, 0, 1, 1, 1, 1, 0],
[0, 0, 0, 0, 0, 0, 0, 0, 0, 0]
];
var canvas = document.getElementById("canvas");
var ctx = canvas.getContext("2d");
function draw_map(){
for(var x=0; x<mapW; x++) {
for(var y=0; y<mapH; y++) {
switch(gameMap[y][x]) {
case 0:
ctx.fillStyle="#999787";
ctx.fillRect(x*tileW, y*tileH, tileW, tileH);
break;
case 1:
ctx.fillStyle="#ccffcc";
ctx.fillRect(x*tileW, y*tileH, tileW, tileH);
break;
}
}
}
}
draw_map();
```
You need to render every tile in the cases of the switch, with a `break;` statement at the end of each case.
Upvotes: 1
|
2018/03/15
| 242 | 731 |
<issue_start>username_0: I'm trying to convert values from character to numeric, but R introduces NAs. I have examined one of the problematic values:
```
> x
[1] "11 914 711.5"
> length(x)
[1] 1
> typeof(x)
[1] "character"
> as.numeric(x)
[1] NA
Warning message:
NAs introduced by coercion
```
Does anybody have any idea how to fix this?<issue_comment>username_1: You could first split your string and then convert them to numeric:
```
as.numeric(unlist(strsplit(x, ' ')))
[1] 11.0 914.0 711.5
```
Upvotes: 2 <issue_comment>username_2: It's not clear what you want. The previous answers and comments assume you have several numbers in 'x'. Another interpretation (the spaces are thousands separators of a single number) gives this solution:
`as.numeric(gsub(" ", "", x))`
Upvotes: 0
|
2018/03/15
| 248 | 806 |
<issue_start>username_0: I have a question about the following code. When declaring the function `get_name()` in the class, what is the purpose of putting the `const` in front of it? And what will happen if I take it off?
```
class College
{
string name;
const string get_name()
{ return name; }
void set_name(string n)
{ name = n; }
};
int main()
{
College college_obj;
return 0;
}
|
2018/03/15
| 438 | 1,752 |
<issue_start>username_0: I've got a DAG that's scheduled to run daily. In most scenarios, the scheduler would trigger this job as soon as the `execution_date` is complete, i.e., the next day. However, due to upstream delays, I only want to kick off the dag run for the `execution_date` three days after `execution_date`. In other words, I want to introduce a three day lag.
From the research I've done, one route would be to add a `TimeDeltaSensor` at the beginning of my dag run with `delta=datetime.timedelta(days=3)`.
However, due to the way the Airflow scheduler is implemented, that's problematic. Under this approach, each of my DAG runs will be active for over three days. My DAG has lots of tasks, and if several DAG runs are active, I've noticed that the scheduler eats up lots of CPU because it's constantly iterating over all these tasks (even inactive ones). So is there another way to just tell the scheduler not to kick off the DAG run until three days have passed?<issue_comment>username_1: One possible solution could be to have `max_active_runs` set to `1` for the DAG. While this does not prevent the DAG from being active for 3 days, it would prevent multiple DAG runs from being initiated.
Upvotes: 0 <issue_comment>username_2: It might be easier to manipulate the date variable within the DAG.
I am assuming you would be using the execution date `ds` in your task instances in some way, like querying data for the given day.
In this case you could use the built-in macros to manipulate the date, like `macros.ds_add(ds, -3)`, to simply shift the date back by 3 days.
You can use it in a template field as usual `'{{ macros.ds_add(ds, -3) }}'`
Macro docs [here](https://airflow.incubator.apache.org/code.html#id2)
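For reference, `macros.ds_add` is essentially just date arithmetic on the `YYYY-MM-DD` string; a rough pure-Python equivalent (a sketch, not Airflow's actual implementation) is:

```python
from datetime import datetime, timedelta


def ds_add(ds: str, days: int) -> str:
    """Rough equivalent of Airflow's macros.ds_add: shift a YYYY-MM-DD string."""
    return (datetime.strptime(ds, "%Y-%m-%d") + timedelta(days=days)).strftime("%Y-%m-%d")


print(ds_add("2018-03-15", -3))  # 2018-03-12
```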
Upvotes: 2
|
2018/03/15
| 1,261 | 4,468 |
<issue_start>username_0: Brief information image, what my code do:
[](https://i.stack.imgur.com/vHqFo.png)
And this is my code:
```
private void CheckObjectAttackPoints(Point AttackPoint){
Point ObjectAttackPoint = AttackPoint;
ObjectAttackPoint.X -= 1;
int count=0; //This variable for reading how many tiles are false
//Check tiles active and ObjectAttackPoint is exist in List
for (int i=0; i < 1;i++) {
if (GameManager.AllPoints.Contains (ObjectAttackPoint)) {
if (!GameManager.TileColliders [ObjectAttackPoint.X, ObjectAttackPoint.Y].activeSelf) {
count++;
ObjectAttackPoints [i] = ObjectAttackPoint;
Debug.Log (ObjectAttackPoints [i].X +", " + ObjectAttackPoints [i].Y);
}
}
if (i == 1) {
break;
}
ObjectAttackPoint.X += 2;
}
if (count > 0) {
Debug.Log ("Object can attack " + count + " points");
}
}
```
So, this method need `AttackPoint` which already has `AttackPoint.Y-1` value, we need only to check if `AttackPoint.X` exists in `List` and check if object at this point is active. At the method start, `AttackPoint.X` decreases its value by 1.
My problem is that the console returns only one point even when two tiles are not active (example image: tiles 0,0 and 0,2 are not active, but the console returns only count=1 and the tile with Point 0,0). This means the method checks only one tile, so my code has a mistake, but I can't understand where it is.
Can someone help me?<issue_comment>username_1: The error is in the cycle definition:
>
> for (int i=0; i < 1;i++)
>
>
>
This means the code will be executed only once, for i=0. You could fix it like this:
```
for (int i=0;i<=1;i++)
```
Upvotes: 2 <issue_comment>username_2: Your `for` loop is executed always exactly just ONCE, your method's code is equivalent to this:
```cs
private void CheckObjectAttackPoints(Point AttackPoint) {
Point ObjectAttackPoint = AttackPoint;
ObjectAttackPoint.X -= 1;
int count = 0; //This variable for reading how many tiles are false
//Check tiles active and ObjectAttackPoint is exist in List
if (GameManager.AllPoints.Contains(ObjectAttackPoint)) {
if (!GameManager.TileColliders[ObjectAttackPoint.X, ObjectAttackPoint.Y].activeSelf) {
count++;
ObjectAttackPoints[0] = ObjectAttackPoint;
Debug.Log(ObjectAttackPoints[0].X + ", " + ObjectAttackPoints[0].Y);
}
}
ObjectAttackPoint.X += 2;
if (count > 0) {
Debug.Log("Object can attack " + count + " points");
}
}
```
If you want to cycle through all elements of `ObjectAttackPoint`, you should use
If it's a `List`:
```cs
for (int i=0; i < ObjectAttackPoint.Count;i++)
```
If it's an `array`:
```cs
for (int i=0; i < ObjectAttackPoint.Length;i++)
```
**EDIT**: what's the use of the `break` condition when `i==1` anyway? Even if you extend the for loop, that `if` will break the loop exactly after two loops (i.e. after it's executed for `i == 0` and `i == 1`). You should explain more in depth what's `ObjectAttackPoints`.
Upvotes: 3 [selected_answer]<issue_comment>username_3: Welcome to StackOverflow.
There are a few problems with this code:
1. The `i < 1` condition won't allow the for loop to execute more than once.
2. This code can be considered a fairly good example of 'spaghetti code'; it's considered bad practice to use the `break` operator in a loop unless it's absolutely necessary.
3. There is no need to use a for loop to perform a limited number of operations that are known in advance
4. Check for each attack point could be and imho *should be* done in a separate method.
You can do something along the lines of:
```
private bool CheckObjectAttackPoint(Point AttackTo)
{
return GameManager.AllPoints.Contains(AttackTo) && !GameManager.TileColliders[AttackTo.X, AttackTo.Y].activeSelf;
}
private bool CheckObjectAttackPoint(Point AttackFrom, int xDiff, int yDiff)
{
var pointToAttack = new Point(AttackFrom.X + xDiff, AttackFrom.Y + yDiff);
return CheckObjectAttackPoint(pointToAttack);
}
```
Now you can use this method by providing it with attack points:
```
Point objCurrentPoint = ...; // Currect position is (1;1)
CheckObjectAttackPoint(objCurrentPoint, -1, -1);
CheckObjectAttackPoint(objCurrentPoint, +1, -1);
```
Upvotes: 2
|
2018/03/15
| 872 | 2,448 |
<issue_start>username_0: I am working on an e-commerce application, in which I want to achieve the task shown below...
[](https://i.stack.imgur.com/sYoG8.png)
When I hover over the smaller images, I need to show the respective image in the big box.
The big image size is 2000 * 2000 pixels
and the smaller image size is 80 * 80, and I have the big images for the respective smaller images in another folder. I want to load the big image into the big box when I hover over the related smaller image...
I have tried something but it is not working for me...
```js
$('img[id^=sm00]').click(function() {
$('#ProductImage').attr('src', $(this).attr('src'));
});
```
```html






```<issue_comment>username_1: ```css
.col-xs-6{
height: 50vh;
}
.col-xs-6{
background: url('http://joelbonetr.com/images/fwhee.png');
background-size: cover;
}
.col-xs-6:hover{
height: 75vh;
background: url('http://joelbonetr.com/images/lightwheel.jpg');
background-size: cover;
}
```
```html
```
I think this is what you asked for.
Upvotes: 0 <issue_comment>username_2: This should work. You are calling the small image in your code, but you need to call the path to big image, which I put in the `ref` attribute in this example. Also you are using click method, so I changed that to hover.
```js
$('img[id^=sm00]').hover(function() {
$('#ProductImage').attr('src', $(this).attr('ref'));
});
```
```html






```
Upvotes: 2 [selected_answer]<issue_comment>username_3: You can use `.data()` for this, check updated snippet below...
```js
$('img[id^=sm00]').hover(function() {
$('#ProductImage').attr('src', $(this).data('img'));
});
```
```html






```
Upvotes: 2
|
2018/03/15
| 791 | 2,660 |
<issue_start>username_0: We have a legacy codebase where rubocop reports some of these errors which I never could wrap my head around:
>
> Don't extend an instance initialized by `Struct.new`.
> Extending it introduces a superfluous class level and may also introduce
> weird errors if the file is required multiple times.
>
>
>
What exactly is meant by "a superfluous class level" and what kind of "weird errors" may be introduced?
(Asking because obviously we didn't have any such problems over the last years.)<issue_comment>username_1: A superfluous class level is exactly this class entending `Struct.new`.
[Here is the reference](https://github.com/bbatsov/ruby-style-guide#no-extend-struct-new) to a more detailed explanation with source code.
The [pull request on this cop](https://github.com/bbatsov/rubocop/pull/1571#issuecomment-71946214) also contains a valuable example:
```
Person = Struct.new(:first, :last) do
SEPARATOR = ' '.freeze
def name
[first, last].join(SEPARATOR)
end
end
```
is not equivalent to:
```
class Person < Struct.new(:first, :last)
SEPARATOR = ' '.freeze
def name
[first, last].join(SEPARATOR)
end
end
```
The former creates `::Person` and `::SEPARATOR`, while the latter creates `::Person` and `::Person::SEPARATOR`.
I believe the constant lookup is mostly what is referred to as “weird errors.”
Upvotes: 2 <issue_comment>username_2: `Struct.new` creates an anonymous class that happens to be a subclass of `Struct`:
```
s = Struct.new(:foo)
#=> #<Class:0x...>
s.ancestors
#=> [#<Class:0x...>, Struct, Enumerable, Object, Kernel, BasicObject]
```
You can assign that anonymous class to a constant in order to name it:
```
Foo = Struct.new(:foo)
#=> Foo
Foo.ancestors
#=> [Foo, Struct, Enumerable, Object, Kernel, BasicObject]
```
That's the regular way to create a `Struct` subclass.
Your legacy code on the other hand seems to contain something like this:
```
class Foo < Struct.new(:foo)
end
```
`Struct.new` creates an anonymous class (it's not assigned to a constant) and `Foo` subclasses it, which results in:
```
Foo.ancestors
#=> [Foo, #<Class:0x...>, Struct, Enumerable, Object, Kernel, BasicObject]
```
Apparently, the anonymous class doesn't serve any purpose.
It's like:
```
class Bar
end
class Foo < Bar # or Foo = Class.new(Bar)
end
Foo.ancestors
#=> [Foo, Bar, Object, Kernel, BasicObject]
```
as opposed to:
```
class Bar
end
class Foo < Class.new(Bar)
end
Foo.ancestors
#=> [Foo, #<Class:0x...>, Bar, Object, Kernel, BasicObject]
```
The anonymous class returned by `Class.new(Bar)` in the latter example is not assigned to a constant and therefore neither used nor needed.
Upvotes: 5 [selected_answer]
|
2018/03/15
| 940 | 2,761 |
<issue_start>username_0: I am using Windows 10. I am unable to configure Visual Studio 2017 to run basic Gstreamer tutorials. I am getting errors like 'cannot open gst/gst.h'. I am using gstreamer 1.0.
Please help.<issue_comment>username_1: Hello my friend.
First of all, you need to download the library from <https://gstreamer.freedesktop.org/data/pkg/windows/>
You need to download and install both installers, for developers and non-developers.
For instance, for 1.14, which is now the latest version:
* **gstreamer-1.0-devel-x86-1.14.1.msi**
* **gstreamer-1.0-x86-1.14.1.msi**
You will install and set up both of them in the same directory, like `C:\gstreamer`. (I guess GStreamer automatically adds its `/bin` to the Path environment variable. If not, add it manually.)
After that you will open your Visual Studio. Create your C++ project. Create your `main.cpp` file. Right click on your project and click properties.
We need to do 3 steps:
1. Include the necessary directory paths.
2. Define where the `.lib` paths are.
3. Specify which `.libs` you want to use.
After clicking properties:
1. C/C++ -> Additional Include Directories -> define your include paths such as
```
C:\gstreamer\1.0\x86_64\lib\glib-2.0\include;C:\gstreamer\1.0\x86_64\include\gstreamer-1.0;C:\gstreamer\1.0\x86_64\include\glib-2.0\;C:\gstreamer\1.0\x86_64\include\glib-2.0\glib;%(AdditionalIncludeDirectories)
```
2. Linker -> General -> Adding Library Directories -> write your lib directory path such as
```
C:\gstreamer\1.0\x86_64\lib;%(AdditionalLibraryDirectories)
```
3. Linker -> Input -> Additional Dependencies -> Write your .lib files you want to use such as
```
gobject-2.0.lib;glib-2.0.lib;gstreamer-1.0.lib;kernel32.lib;user32.lib;gdi32.lib;winspool.lib;comdlg32.lib;advapi32.lib;shell32.lib;ole32.lib;oleaut32.lib;uuid.lib;odbc32.lib;odbccp32.lib;%(AdditionalDependencies)
```
`gobject-2.0.lib;glib-2.0.lib;gstreamer-1.0.lib` are the ones we added, others are done by default.
That's all. You can just write in your `main.cpp` file
`#include <gst/gst.h>` and use your GStreamer library.
I think this will work for almost all libraries.
Upvotes: 4 <issue_comment>username_2: I'd prefer to comment on username_1's answer.. but don't have the reputation yet.
Note that you probably want to use the "MSVC" version of the install files:
"gstreamer-1.0-devel-msvc-x86\_64-1.16.1.msi"
"gstreamer-1.0-msvc-x86\_64-1.16.1.msi"
These are new since his/her answer, and include .pdb debugging files made for debugging in Visual Studio.
Upvotes: 2 <issue_comment>username_3: Also we need add dll path in the properties
1. Go to Properties->Debugging->Environment
2. Add PATH=C:\path\where\gstreamer\dll\is;$(PATH)
For ex- PATH=C:\gstreamer\1.0\msvc\_x86\_64\bin;$(PATH)
Upvotes: 1
|
2018/03/15
| 980 | 3,094 |
<issue_start>username_0: I created a mySQL database with phpMyAdmin in my local server. In this database I store the names and the favourite NBA teams of my friends.This is obviously a many-to-many relationship. For this reason, I run the followed script in MySQL to create the appropriate tables for this database:
```
CREATE TABLE `friends` (
`id` int(4) NOT NULL AUTO_INCREMENT,
`name` varchar(30) NOT NULL,
PRIMARY KEY (`id`)
);
CREATE TABLE `teams` (
`id` int(4) NOT NULL AUTO_INCREMENT,
`name` varchar(30) NOT NULL,
PRIMARY KEY (`id`)
);
CREATE TABLE `relations` (
`friends_id` int(4) NOT NULL,
`teams_id` int(4) NOT NULL,
`status` varchar(30) NOT NULL
);
```
Obviously, I inserted some values into these tables, but I do not provide the source code extensively here so as to save some space. A small piece of it is the following:
```
INSERT INTO `friends` (`id`, `name`)
VALUES
(1,'<NAME>'),
(2,'<NAME>');
INSERT INTO `teams` (`id`, `name`)
VALUES
(1,'Cleveland Cavaliers'),
(2,'Boston Celtics'),
(3,'Houston Rockets');
INSERT INTO `relations` (`friends_id`, `teams_id`, `status`)
VALUES
(1,1, 'Current'),
(2,1, 'Current'),
(2,2, 'Past'),
(2,3, 'Past');
```
After running a PHP script that fetches the data from the database and print them, I want to have the following kind of valid json output for each of my friends:
```
{
"id": "1",
"name": "<NAME>",
"Current team": ["Boston Celtics"]
"Past team": [ "Cleveland Cavaliers", "Houston Rockets"]
}
```
How can I make this array of favourite teams for each person with MySQL?
P.S. My current question is inspired by the following answered question: [How to join arrays with MySQL from 3 tables of many-to-many relationship](https://stackoverflow.com/questions/49279952/how-to-join-arrays-with-mysql-from-3-tables-of-many-to-many-relationship)<issue_comment>username_1: ```
SELECT
CONCAT(
"{"
, '"id"' , ":" , '"' , friends.id , '"' , ","
, '"name"' , ":" , '"' , friends.name , '"' , ","
,
CASE
WHEN relations.status = 'Current'
THEN CONCAT('"CurrentTeam":["', teams.name ,'"]')
ELSE CONCAT('"pastTeam": ' , '[' , GROUP_CONCAT( '"',teams.name, '"'),']' )
END
, "}"
)
AS json
FROM
friends
INNER JOIN
relations
ON
friends.id = relations.friends_id
INNER JOIN
teams
ON
relations.teams_id = teams.id
group by friends.id,relations.status
```
<http://sqlfiddle.com/#!9/694bc69/23>
Upvotes: 2 [selected_answer]<issue_comment>username_2: For these type of query you have to use the mysql version 5.7 using these you are able to use the json type & function in query..
Answer :
```
SELECT GROUP_CONCAT( JSON_OBJECT(
'id', f.id, 'fname', f.name, 'tname', t.name, 'current_status', r.status), (SELECT JSON_OBJECT('name', tm.name)
FROM teams tm, relations re
WHERE tm.id = re.teams_id
AND f.id = re.friends_id
AND re.status = "Past"))
FROM teams t, relations r, friends f
WHERE t.id = r.teams_id
AND f.id = r.friends_id
AND r.status = "Current"
```
Upvotes: 0
|
2018/03/15
| 468 | 1,802 |
<issue_start>username_0: this is how i put my emitter:
```
func addParticle(at: CGPoint) {
let emitter = SKEmitterNode(fileNamed: "hit.sks")
emitter?.position = at
emitter?.zPosition = 10
scene.addChild(emitter!)
scene.run(SKAction.wait(forDuration: 1), completion: {
emitter?.removeFromParent()
})
}
```
and sometimes i have a performance lag, time profiler shows me that i am having sks file delay (file decoding etc).
is there any way i can avoid this?<issue_comment>username_1: You're not actually preloading the particle system. You're creating a new one, each time, and removing it (and causing there to be no reference to it) at the end, so it gets GC'd.
Instead, add the particle system to a node that's offscreen, and when you need it, move it back into the scene, where you need/want it, then move it back offscreen when you no longer need it.
This will prevent any need to create a particle system, wind it up and get it running, etc.
You'll just need to play and pause it... and move it.
You can pause a particle system directly, or by pausing its parent node, so it's ready at a state you want it to be in when you bring it back onscreen.
Read about more of this here: <https://developer.apple.com/documentation/spritekit/skemitternode/1398027-advancesimulationtime>
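A rough sketch of that offscreen approach in Swift (the class name, parking position, and structure here are illustrative assumptions, not the asker's actual project code): the emitter is decoded once, parked offscreen and paused, and "firing" it is just a move plus unpause, so no `.sks` decoding happens during gameplay.
```swift
import SpriteKit

// Hypothetical pooled emitter: decoded once, reused forever.
final class PooledEmitter {
    private static let offscreen = CGPoint(x: -10_000, y: -10_000)
    private let node: SKEmitterNode

    init?(fileNamed name: String, scene: SKScene) {
        guard let emitter = SKEmitterNode(fileNamed: name) else { return nil }
        emitter.position = PooledEmitter.offscreen  // park it out of view
        emitter.isPaused = true
        scene.addChild(emitter)
        node = emitter
    }

    func fire(at point: CGPoint) {
        node.position = point
        node.resetSimulation()   // restart the particle animation from zero
        node.isPaused = false
    }

    func park() {
        node.isPaused = true
        node.position = PooledEmitter.offscreen
    }
}
```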
Upvotes: 2 <issue_comment>username_2: ok so i sorted it out by creating a particle manager that creates all the needed particles for the game. and when i need one of them i just use method .copy() as? SKEmitterNode. of course particle manager is a smart class that doing all the work (resetting animation and starting) and providing ready to use emitter.
this way - no lag, no time for decoding/initializing etc
hope it will help someone
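A minimal sketch of what such a manager could look like (names and structure are illustrative, not the actual implementation): each `.sks` file is decoded once up front, and callers get cheap `copy()` instances afterwards.
```swift
import SpriteKit

// Hypothetical particle manager: decode every .sks file once at startup,
// then hand out ready-to-use copies on demand.
final class ParticleManager {
    private var prototypes: [String: SKEmitterNode] = [:]

    func preload(_ fileNames: [String]) {
        for name in fileNames {
            prototypes[name] = SKEmitterNode(fileNamed: name)
        }
    }

    // copy() duplicates the already-decoded node instead of touching the
    // file, so the emitter is available immediately, with no decoding lag.
    func emitter(named name: String) -> SKEmitterNode? {
        return prototypes[name]?.copy() as? SKEmitterNode
    }
}
```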
Upvotes: 0
|
2018/03/15
| 314 | 1,010 |
<issue_start>username_0: Hello I would like to fill gradually span elements with values of array. Can you help me please?
```
var array=["apple","banana","cucumber"];
$("span.gt").each(function(){
$(this).text(array[?]);
});
```
Output should look like this:
```
apple
banana
cucumber
```<issue_comment>username_1: You get the `index` value in the function for `each` loop. And since the number of `span` elements is equal to the number of items in the `array` variable you can use that `index` value to set the array values in the `span` elements:
```js
var array=["apple","banana","cucumber"];
$("span.gt").each(function(index){
$(this).text(array[index]);
});
```
```html
```
Upvotes: 3 [selected_answer]<issue_comment>username_2: 1. You can use for loop.
2. Use the increment number as index of the span and index of array
```js
var array = ["apple", "banana", "cucumber"];
for (var i = 0; i < array.length; i++) {
$('span').eq(i).text(array[i]);
}
```
```html
```
Upvotes: 0
|
2018/03/15
| 1,312 | 4,914 |
<issue_start>username_0: While working with a large codebase for legacy applications, whenever I have seen a piece of code that checks whether an internet connection is active or not, I have mostly seen uses of functions such as:
```
InternetCheckConnection(L"http://www.google.com",FLAG_ICC_FORCE_CONNECTION,0)
//or
BOOL bConnected = IsNetworkAlive(&dwSens)
//or
InternetGetConnectedState(&dwReturnedFlag, 0)
//or some other functions
```
but there exists a very very simple way to do this where you wouldn't need to include other header files of write code, that is:
```
if (system("ping www.google.com"))
```
My question is that what are the drawbacks, if any, to use `ping` from the code when I need to see if a connection is available or not?
Assuming that `ping` is not going to be disabled on the machines where my software is going to run.<issue_comment>username_1: The drawback with `system("ping www.google.com")` is twofold:
1. If someone replaced the system `ping` command with their own, it could give you the wrong results [and if the process calling `ping` is running with extra privileges, it could do something "interesting" with that privilege]. This is generic for any `system` operation.
2. You are starting another process, which then has to run and shut down before you get the answer [and of course do more or less the same things that `InternetCheckConnection` does - look up the name, translate it to an IP address, send packet to that address, wait for a response, interpret that response, and so on].
Upvotes: 3 [selected_answer]<issue_comment>username_2: According to Microsoft API document **[InternetCheckConnection](https://learn.microsoft.com/en-us/windows/win32/api/wininet/nf-wininet-internetcheckconnectiona)** is deprecated.
>
> [InternetCheckConnection is available for use in the operating systems specified in the Requirements section. It may be altered or unavailable in subsequent versions. Instead, use NetworkInformation.GetInternetConnectionProfile or the NLM Interfaces. ]
>
>
>
Instead of this API we can use **[INetworkListManager](https://learn.microsoft.com/en-us/windows/win32/api/netlistmgr/nn-netlistmgr-inetworklistmanager)** interface to check whether Internet is connected or not for windows platform.
Here below is the win32 codebase :
```
#include <iostream>
#include <objbase.h> // include the base COM header
#include <netlistmgr.h>
// Instruct linker to link to the required COM libraries
#pragma comment(lib, "ole32.lib")
using namespace std;
enum class INTERNET_STATUS
{
    CONNECTED,
    DISCONNECTED,
    CONNECTED_TO_LOCAL,
    CONNECTION_ERROR
};

INTERNET_STATUS IsConnectedToInternet();

int main()
{
    INTERNET_STATUS connectedStatus = INTERNET_STATUS::CONNECTION_ERROR;
    connectedStatus = IsConnectedToInternet();
    switch (connectedStatus)
    {
    case INTERNET_STATUS::CONNECTED:
        cout << "Connected to the internet" << endl;
        break;
    case INTERNET_STATUS::DISCONNECTED:
        cout << "Internet is not available" << endl;
        break;
    case INTERNET_STATUS::CONNECTED_TO_LOCAL:
        cout << "Connected to the local network." << endl;
        break;
    case INTERNET_STATUS::CONNECTION_ERROR:
    default:
        cout << "Unknown error has been occurred." << endl;
        break;
    }
}

INTERNET_STATUS IsConnectedToInternet()
{
    INTERNET_STATUS connectedStatus = INTERNET_STATUS::CONNECTION_ERROR;
    HRESULT hr = S_FALSE;
    try
    {
        hr = CoInitialize(NULL);
        if (SUCCEEDED(hr))
        {
            INetworkListManager* pNetworkListManager;
            hr = CoCreateInstance(CLSID_NetworkListManager, NULL, CLSCTX_ALL, __uuidof(INetworkListManager), (LPVOID*)&pNetworkListManager);
            if (SUCCEEDED(hr))
            {
                NLM_CONNECTIVITY nlmConnectivity = NLM_CONNECTIVITY::NLM_CONNECTIVITY_DISCONNECTED;
                VARIANT_BOOL isConnected = VARIANT_FALSE;
                hr = pNetworkListManager->get_IsConnectedToInternet(&isConnected);
                if (SUCCEEDED(hr))
                {
                    if (isConnected == VARIANT_TRUE)
                        connectedStatus = INTERNET_STATUS::CONNECTED;
                    else
                        connectedStatus = INTERNET_STATUS::DISCONNECTED;
                }
                if (isConnected == VARIANT_FALSE && SUCCEEDED(pNetworkListManager->GetConnectivity(&nlmConnectivity)))
                {
                    if (nlmConnectivity & (NLM_CONNECTIVITY_IPV4_LOCALNETWORK | NLM_CONNECTIVITY_IPV4_SUBNET | NLM_CONNECTIVITY_IPV6_LOCALNETWORK | NLM_CONNECTIVITY_IPV6_SUBNET))
                    {
                        connectedStatus = INTERNET_STATUS::CONNECTED_TO_LOCAL;
                    }
                }
                pNetworkListManager->Release();
            }
        }
        CoUninitialize();
    }
    catch (...)
    {
        connectedStatus = INTERNET_STATUS::CONNECTION_ERROR;
    }
    return connectedStatus;
}
```
Upvotes: 1 <issue_comment>username_3: If you are using `boost`, you can do the following:
```
#include <boost/asio.hpp>
#include <boost/beast/core.hpp>

namespace net = boost::asio;  // aliases the snippet relies on
using tcp = net::ip::tcp;

static bool has_internet(net::io_context& ioc) {
    tcp::resolver resolver(net::make_strand(ioc));
    boost::beast::error_code ec;
    auto endpoints = resolver.resolve("google.com", "80", ec);
    if (endpoints.empty() || ec.failed())
        return false;
    return true;
}
```
Upvotes: 1
|
2018/03/15
| 1,175 | 4,581 |
<issue_start>username_0: I wanted to add a generic create form for every model in my app. After repeating the same few lines over and over only changing the model name the DRY principle called out to me. I came up with the following method to dynamically add a form, view, and route for each model in my app.
**forms.py**
```
from django import forms
from . import models
import inspect
from django.db.models.base import ModelBase
# Add ModelForm for each model
for name, obj in inspect.getmembers(models):
if inspect.isclass(obj) and isinstance(obj, ModelBase):
vars()[name + "ModelForm"] = forms.modelform_factory(obj, exclude=())
```
**views.py**
```
from . import forms
from . import models
import inspect
from django.db.models.base import ModelBase
def form_factory(request, form_type, form_template, redirect_url='index', save=True):
if request.method == "POST":
form = form_type(request.POST)
if form.is_valid():
if save:
form.save()
return HttpResponseRedirect(reverse(redirect_url))
else:
form = form_type()
return render(request, form_template, {'form': form})
# Add view for each model
for name, obj in inspect.getmembers(models):
if inspect.isclass(obj) and isinstance(obj, ModelBase):
form = getattr(forms, name + "ModelForm")
        func = lambda request, form=form: form_factory(request, form, 'core/create.html')  # default arg binds the current form
name = 'create_' + name.lower()
vars()[name] = func
```
**urls.py**
```
from django.urls import path
from . import views
from . import models
import inspect
from django.db.models.base import ModelBase
existing_urls = []
for urlpattern in urlpatterns:
existing_urls.append(urlpattern.pattern._route)
# Add url for each model
for name, obj in inspect.getmembers(models):
if inspect.isclass(obj) and isinstance(obj, ModelBase):
name = name.lower()
url = name + '/new'
if url in existing_urls:
continue
view = getattr(views, 'create_' + name)
url_name = 'create-' + name
urlpatterns.append(path(url, view, name=url_name))
```
Is this a bad idea? It feels wrong but I can't think of any concrete reason not to do this.<issue_comment>username_1: I can't really see the point of this level of complexity.
`form_factory` works just as well as a concrete generic view, which takes the parameters and is called directly from the URL. The URLconf can then be simplified to iterate through the models and add a pattern for each which passes those parameters.
Then you can further simplify things by using actual generic views. These can be instantiated directly in the URLconf without any need to define subclasses in views.py. What's more, the CreateView is capable of building a form itself, without any need for defining in forms.py. So you can get rid of both of those files.
One final simplification is to use the actual API for getting model classes: `django.apps.apps.get_models()`. So all you need is:
```
from django.views.generic.edit import CreateView
from django.apps import apps
for model in apps.get_models():
urlpatterns += path(
'{}/new'.format(model._meta.model_name),
CreateView.as_view(
model=model,
template_name='core/create.html',
fields='__all__',
success_url=reverse_lazy('index')
),
name='create-{}'.format(model._meta.model_name)
)
```
Upvotes: 2 <issue_comment>username_2: username_1's answer is almost correct but with a small error. The simplest way to dynamically create views ended up being:
```
from django.views.generic.edit import CreateView
from django.apps import apps
for model in apps.get_app_config('appname').get_models():
route = path(
'{}/new'.format(model._meta.model_name),
CreateView.as_view(
model=model,
template_name='core/create.html',
fields='__all__',
success_url=reverse_lazy('index')
),
name='create-{}'.format(model._meta.model_name)
)
urlpatterns.append(route)
```
The addition of `get_app_config('appname')` ensures that only models defined in the desired app get views created for them. Without get\_app\_config all models have views created for them, including things defined by django that you should not have views for.
Using `urlpatterns.append(route)` is the correct way to append to a python list. The previous answer results in an error.
I suggested an edit but it was rejected. =(
Upvotes: 2 [selected_answer]
|
2018/03/15
| 643 | 2,543 |
<issue_start>username_0: I have database like this:
**database**

Can you help me to print **teamRegion** like this:
Spain
England
Germany
Thanks...
|
2018/03/15
| 782 | 3,107 |
<issue_start>username_0: There are two tables A and B with same structure (number of columns, column names etc.). There is no primary key constraint for both A and B. Some of the columns values can be null but not mentioned as a constraint.
Creation of table looks like below
```
CREATE TABLE IF NOT EXISTS TableA
( col1 INT,
col2 VARCHAR(50)
col3 BIGINT )
```
I need to delete rows in A which are in B i.e `A = A - B`
There are around 100 columns in the original table (I have simplified it above). So listing all the columns is not desirable.
How do I do this task?
I had to add rows from another table C which I did by using INSERT INTO.
```
INSERT INTO tableA VALUES
(
SELECT * From tableC
EXCEPT
SELECT * from tableA
)
```
|
2018/03/15
| 870 | 3,415 |
<issue_start>username_0: I have a static json file I need to serve with content-type: application/json to allow some deeplinking from my rails app to the corresponding android app. For this google is expecting a url to a static file which looks like this: `domain.com/.well-known/assetlinks.json` and the correct content-type.
So in my rails app I have the following file `app/views/static/assetlinks.json`.
And in my routes.rb I have:
```
...
resources :static, except: [:new, :create, :edit, :show, :update, :destroy ]
get '.well-known/assetlinks.json' => 'static#assetlinks'
...
```
If I then load `domain.com/.well-known/assetlinks.json` in my browser it is served correctly with the header set to Content-Type: application/json. But if I use hurl.it or postman with a get request for the same URL I get a response with the .html.erb template with the assetlinks.json content baked into it and Content-Type: text/html; charset=utf-8.
What kind of sorcery is this? How can I make it consistent to always serve only the JSON with the correct content-type?
|
2018/03/15
| 603 | 2,189 |
<issue_start>username_0: I have a JSON string which is parsed and a typecaseted to a `map`. I'm using this map to get a `List[Map[String, Any]]`. Here to make my code error free I have used `getOrElse` while type casting.
JSON string looks similar to
```
{
"map-key" : [
{
"list-object-1-key" : "list-object-1-value"
},
{
"list-object-2-key" : "list-object-2-value"
 }
]
}
```
My code
```
val json = JSON.parseFull(string) match {
case Some(e) =>
val list = e.asInstanceOf[Map[String, Any]]
.getOrElse("map-key", List[Map[String,Any]]) // Error here
val info = list.asInstanceOf[List[Map[String, Any]]]
//iterate over each element in the list and perform my operations
case None => string
}
```
I can understand that whenever there is no result present in `list` object then `info` object is repeated code.
How can I improve this programme by giving the default value to `list` object?<issue_comment>username_1: Unfortunately i don't have the environment at this machine but try something like that
first thing you need to convert json to map
```
def jsonStrToMap(jsonStr: String): Map[String, Any] = {
implicit val formats = org.json4s.DefaultFormats
parse(jsonStr).extract[Map[String, Any]]
}
```
and second, you will need to iterate over the map to get the values as a list:
```
val list = jsonStrToMap(jsonStr).map { case (k, v) => (k.getBytes, v) }.toList
```
Upvotes: 0 <issue_comment>username_2: Your default value is wrong. You're passing a type, not an empty list.
```
e.asInstanceOf[Map[String, Any]].getOrElse("map-key", List.empty[Map[String,Any]])
```
Upvotes: 1 [selected_answer]<issue_comment>username_3: Do it in more functional way, without `asInstanceOf`:
```
val parsed = JSON.parseFull(string)
parsed match {
case Some(e: Map[String, Any]) =>
e.get("map-key") match {
case Some(a: List[Any]) =>
a.foreach {
case inner: Map[String, Any] => println(inner.toList)
}
case _ =>
}
case None => string
}
```
Upvotes: 2
|
2018/03/15
| 697 | 1,589 |
<issue_start>username_0: I have two vectors of TRUE/FALSE values:
```
x1 <- c(TRUE, FALSE, TRUE, FALSE, TRUE, FALSE, FALSE, FALSE, FALSE,
FALSE, FALSE, FALSE, FALSE, FALSE, FALSE, FALSE, FALSE, FALSE,
TRUE, FALSE, FALSE, FALSE, FALSE, FALSE)
x2 <- c(FALSE, TRUE, FALSE, TRUE, FALSE, TRUE, FALSE, FALSE, FALSE,
FALSE, FALSE, FALSE, FALSE, FALSE, FALSE, FALSE, FALSE, FALSE,
FALSE, TRUE, FALSE, FALSE, FALSE, FALSE)
```
I want to take the indexes of TRUE values in `x1` and `x2` and fill the new vector with `0.75` and `-0.25` in those indexes respectively. All other cells will be `0`.
```
which(x1)
[1] 1 3 5 19
which(x2)
[1] 2 4 6 20
```
Therefore, the final vector will have `0.75` in indexes 1, 3, 5, 19 and `-0.25` in indexes 2, 4, 6, 20.
How can I achieve this in R?
**SOLUTION:**
I found the way to do it.
```
newx <- rep(0, length(x1))
newx [which(x1)] <- 0.75
newx [which(x2)] <- -0.25
```<issue_comment>username_1: This should work
```
x1_number <- ifelse(x1 == TRUE, yes = 0.75, no = 0)
x2_number <- ifelse(x2 == TRUE, yes = -0.25, no = 0)
df <- data.frame(your_column_name = x1_number + x2_number)
```
Upvotes: 0 <issue_comment>username_2: If `x1` is equal to `TRUE` we assign `.75`. For cases where `x1` is not `TRUE`, we look if `x2` is `TRUE` and assign the value `-.25`. In case both `x1` and `x2` equal `FALSE`, assign the value `0`.
```
ifelse(x1, 0.75, ifelse(x2, -.25, 0))
#[1] 0.75 -0.25 0.75 -0.25 0.75 -0.25 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.75 -0.25 0.00 0.00 0.00 0.00
```
Upvotes: 1
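Since `TRUE`/`FALSE` coerce to `1`/`0` in arithmetic, the same vector can also be built with no branching at all (a sketch, assuming `x1` and `x2` are never `TRUE` at the same position, as in the example data):

```r
newx <- 0.75 * x1 - 0.25 * x2
# TRUE coerces to 1 and FALSE to 0, so this yields 0.75 where x1 is TRUE,
# -0.25 where x2 is TRUE, and 0 everywhere else
```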
|
2018/03/15
| 559 | 1,877 |
<issue_start>username_0: How can we add a serial number (S.No.) column to a JTable which gets updated each time we delete a row of the table?
e.g.:
i have a jtable which contain data from database
```
Name Age Class
ram 14 9
hari 15 9
rama 15 10
```
i want it to be like this:
```
S.No. Name Age Class
1 Ram 14 9
2 hari 15 9
3 rama 15 10
```
And if i delete data of hari this table should look like this:
```
S.No. Name Age Class
1 Ram 14 9
2 rama 15 10
```<issue_comment>username_1: Do the following:
* Move your code to a method in which you are filling the table
* Add `ActionPerformed` listener to your delete button
* In the implementation of `ActionPerformed`, call that method again; this will refresh the table with the new entries.
Hope this solves your problem.
Upvotes: 0 <issue_comment>username_2: This code assumes that you maintain a list of all students and have access to the table as well as the model everywhere in the class.
```
public void stuff() {
deleteButton.addActionListener(new ActionListener() {
@Override
public void actionPerformed(ActionEvent e) {
list.remove(table.getSelectedRow());
removeRowsFromTable();
}
});
}
void removeRowsFromTable() {
for (int i = table.getRowCount() - 1; i >= 0; i--) {
model.removeRow(i);
}
fillTable();
}
void fillTable() {
for (int i = 0; i < list.size(); i++) {
Student s = list.get(i);
Object[] newRow = new Object[] {i + 1, s.getName(), s.getAge(), s.getSchoolClass()}; // S.No. starts at 1; assumes a getter for the class field (Object.getClass() would return the runtime Class)
model.addRow(newRow);
}
}
```
Upvotes: 1 <issue_comment>username_3: ```
jTableModel.removeRow(rowIndex);
for(int i = rowIndex; i < jTableModel.getRowCount(); i++){
int val = (int) jTableModel.getValueAt(i, 0);
jTableModel.setValueAt(--val, i, 0);
}
```
Upvotes: 0
|
2018/03/15
| 484 | 1,681 |
<issue_start>username_0: I'm currently working on a kubernetes cluster. Cluster is working properly.
I need to establish communication between services without using proxy.
For example I have services below:
1. worker
2. app1
3. app2
4. app3
Worker needs to login to app containers directly via SSH and do some commands.
In docker-compose file it was easy by using links and then ssh app1, ssh app2.
How to do it in Kubernetes ?<issue_comment>username_1: You'll want to create a [headless Service](https://kubernetes.io/docs/concepts/services-networking/service#headless-services) (`spec.clusterIP: None`) selecting your app Pods. This will create a DNS entry (something like `my-svc.my-namespace.svc.cluster.local`) that will resolve to the set of IPs of the Pods selected by your Service. You can then loop through the returned list of Pod IPs and ssh into each.
More details can be found [here](https://kubernetes.io/docs/concepts/services-networking/dns-pod-service#services).
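A minimal sketch of such a headless Service (the name and labels are placeholders):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: app1-headless
spec:
  clusterIP: None        # headless: DNS resolves to the Pod IPs directly
  selector:
    app: app1
  ports:
  - port: 22
    targetPort: 22
```

Resolving `app1-headless.<namespace>.svc.cluster.local` from the worker Pod then returns one A record per matching Pod, which the worker can loop over and ssh into.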
Upvotes: 2 <issue_comment>username_2: Here is some pseudocode:
```
kind: Service
apiVersion: v1
metadata:
name: worker
labels:
app: worker
spec:
selector:
app: worker
ports:
- protocol: TCP
port: 22
targetPort: 22
type: NodePort
---
kind: Service
apiVersion: v1
metadata:
name: app1
labels:
app: app1
spec:
selector:
app: app1
ports:
- protocol: TCP
port: 22
targetPort: 22
type: ClusterIP
---
kind: Service
apiVersion: v1
metadata:
name: app2
labels:
app: app2
spec:
selector:
app: app2
ports:
- protocol: TCP
port: 22
targetPort: 22
type: ClusterIP
```
Then, on worker
```
ssh app1
ssh app2
```
Upvotes: 1
|
2018/03/15
| 1,791 | 5,062 |
<issue_start>username_0: First, let me just say I am a "tinkerer" when it comes to coding. I can usually dig around the internet, find a suggestion and implement it in short timing. This issue I have been investigating has turned out to be more difficult and for the first time, I've decided to post a help needed request.
I am working to incorporate a CSS based dropdown into my family's business website. The dropdown has a downward sliding affect as it transitions the visibility via CSS. On Firefox and Opera, the DD works with no problems (actually looks really nice). Unfortunately, it doesn't work on any IE or Edge browsers, even after researching and trying out multiple suggestions over the past week. I ran the CSS code through Lint (csslint.net) and it found no errors. I am wondering if the issue might have something to do with the browser extensions in the CSS translate/transform/transition code or if I've missed something with the CSS focus/focus-within/hover code.
Actual Site: <http://www.powerstone45.com/> The dropdown I am working on is the one that appears under "Product Category:" (Sorry for the rest of the page being in Japanese)
Here is the relevent code that drives the dropdown:
```css
.sub-menu-parent {
font-size: 12px;
margin: 0 !important;
display: block;
height: auto !important;
line-height: normal !important;
padding: 5px 10px !important;
text-decoration: none;
background-color: #FEFEFE;
color: #444444;
cursor: pointer;
}
.sub-menu {
visibility: hidden; /* hides sub-menu */
opacity: 0;
position: absolute;
top: 100%;
left: 0;
width: 100%;
background:#fff;
-webkit-transform: translateY(-2em);
-moz-transform: translateY(-2em);
-ms-transform: translateY(-2em);
-o-transform: translateY(-2em);
transform: translateY(-2em);
z-index: -1;
-webkit-transition: all 0.3s ease-in-out 0s, visibility 0s linear 0.3s, z-index 0s linear 0.01s;
-moz-transition: all 0.3s ease-in-out 0s, visibility 0s linear 0.3s, z-index 0s linear 0.01s;
-ms-transition: all 0.3s ease-in-out 0s, visibility 0s linear 0.3s, z-index 0s linear 0.01s;
-o-transition: all 0.3s ease-in-out 0s, visibility 0s linear 0.3s, z-index 0s linear 0.01s;
transition: all 0.3s ease-in-out 0s, visibility 0s linear 0.3s, z-index 0s linear 0.01s;
height: auto !important;
line-height: normal !important;
-webkit-box-shadow: 0 5px 20px 0 rgba(0,0,0,.1);
-moz-box-shadow: 0 5px 20px 0 rgba(0,0,0,.1);
-ms-box-shadow: 0 5px 20px 0 rgba(0,0,0,.1);
-o-box-shadow: 0 5px 20px 0 rgba(0,0,0,.1);
box-shadow: 0 5px 20px 0 rgba(0,0,0,.1);
}
.sub-menu-parent:focus .sub-menu,
.sub-menu-parent:focus-within .sub-menu,
.sub-menu-parent:hover .sub-menu {
visibility: visible; /* shows sub-menu */
opacity: 1;
z-index: 1;
-webkit-transform: translateY(0%);
-moz-transform: translateY(0%);
-ms-transform: translateY(0%);
-o-transform: translateY(0%);
transform: translateY(0%);
-webkit-transition-delay: 0s, 0s, 0.3s;
-moz-transition-delay: 0s, 0s, 0.3s;
-ms-transition-delay: 0s, 0s, 0.3s;
-o-transition-delay: 0s, 0s, 0.3s;
transition-delay: 0s, 0s, 0.3s;
}
/* presentational */
nav a { color: #444444; display: block; padding: 0.5em 1em; text-decoration: none; }
nav a:hover { color: #fff; background: #9264E0;}
nav ul,
nav ul li { list-style-type: none; padding: 0; margin: 0; }
nav > ul { background: #fff; text-align: center; }
nav > ul > li { display: inline-block; border-left: solid 1px #aaa; }
nav > ul > li:first-child { border-left: none; }
.sub-menu {
background: #fff;
}
#navdd {
position:relative;
width:100%;
margin-top:-15px;
display:block;
}
```
```html
* Bracelet
+ [Bracelet](prod_showcase.php?cid=br&rfsh=0)
+ [Necklace](prod_showcase.php?cid=ne&rfsh=0)
+ [Pendants](prod_showcase.php?cid=pe&rfsh=0)
+ [Other Products](prod_showcase.php?cid=ot&rfsh=0)
```
If someone could possibly look at my CSS code and help me sort through to the root cause of this disconnect, I'd be very appreciative!<issue_comment>username_1: IE does not support the `:focus-within` CSS pseudo-class.
See <https://caniuse.com/#feat=css-focus-within>.
It will work if you remove `.sub-menu-parent:focus-within .sub-menu` from your CSS: when any selector in a comma-separated selector list is invalid, the browser drops the whole rule, so in IE/Edge the `:focus` and `:hover` variants never take effect either.
```
.sub-menu-parent:focus .sub-menu,
.sub-menu-parent:hover .sub-menu {
visibility: visible; /* shows sub-menu */
opacity: 1;
z-index: 1;
-webkit-transform: translateY(0%);
-moz-transform: translateY(0%);
-ms-transform: translateY(0%);
-o-transform: translateY(0%);
transform: translateY(0%);
-webkit-transition-delay: 0s, 0s, 0.3s;
-moz-transition-delay: 0s, 0s, 0.3s;
-ms-transition-delay: 0s, 0s, 0.3s;
-o-transition-delay: 0s, 0s, 0.3s;
transition-delay: 0s, 0s, 0.3s;
}
```
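If you still want the `:focus-within` behaviour in browsers that do support it, a sketch of a workaround is to duplicate the declarations into a separate rule instead of sharing one selector list, since a browser that doesn't recognise a selector drops only the rule containing it:

```css
/* Understood by all browsers: */
.sub-menu-parent:focus .sub-menu,
.sub-menu-parent:hover .sub-menu {
  visibility: visible;
  opacity: 1;
  z-index: 1;
}
/* Separate rule: IE/Edge drop this one but keep the rule above. */
.sub-menu-parent:focus-within .sub-menu {
  visibility: visible;
  opacity: 1;
  z-index: 1;
}
```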
Upvotes: 4 [selected_answer]<issue_comment>username_2: Please remove `.sub-menu-parent:focus-within .sub-menu,` (line 52 of dd\_style.css).
Upvotes: 1
|
2018/03/15
| 485 | 1,624 |
<issue_start>username_0: My naming convention for storage account has to have a number at the end of the storage account name eg storageaccount01
When I create a new account I need to make sure I dont duplicate any storage name. I cant use the uniqueid for this project , it has to be the above naming convention.
Using powershell what is the best way to get the subscription storage accounts then check the names of the storage accounts and then create a storage account on the next available number . so for example if i retrieve the names of the storage accounts and we have storageaccount01, storageaccount02, this means the next available one will be storageaccount03. Whats the best way of doing this?
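One possible approach is an untested sketch along these lines (assumes the AzureRM module and an active login; the resource group, location and SKU values are placeholders):

```powershell
$prefix = "storageaccount"
# Collect the numeric suffixes of existing accounts matching the convention
$numbers = (Get-AzureRmStorageAccount).StorageAccountName |
    Where-Object { $_ -match "^$prefix\d+$" } |
    ForEach-Object { [int]($_.Substring($prefix.Length)) }
# Next free number (01 if none exist yet)
$next = if ($numbers) { ($numbers | Measure-Object -Maximum).Maximum + 1 } else { 1 }
$newName = "{0}{1:d2}" -f $prefix, $next
New-AzureRmStorageAccount -ResourceGroupName "myResourceGroup" -Name $newName `
    -Location "westeurope" -SkuName Standard_LRS
```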
|
2018/03/15
| 223 | 750 |
<issue_start>username_0: **main\_layout.xml**:
```
```
[](https://i.stack.imgur.com/jOlrO.png)
[](https://i.stack.imgur.com/5K4dH.png)
It looks like, when constrained to both edges, *ConstraintLayout* doesn't consider **any** horizontal margins and just horizontally centers the *TextView*<issue_comment>username_1: ```
```
Upvotes: 0 <issue_comment>username_2: That's because you have the width of the TextView set to `"wrap_content"`.
To force it to adapt to the constraints you need to use a width of `"0dp"`. This is the functional equivalent of `"match_constraint"` however that is not currently recognized as an option.
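A minimal sketch of the attributes involved (the id and margin values are placeholders):

```xml
<TextView
    android:id="@+id/label"
    android:layout_width="0dp"
    android:layout_height="wrap_content"
    android:layout_marginStart="32dp"
    android:layout_marginEnd="8dp"
    app:layout_constraintStart_toStartOf="parent"
    app:layout_constraintEnd_toEndOf="parent" />
```

With `0dp` the view stretches between its constraints and both margins are honoured; with `wrap_content` the constraints only position the view, which is why it ends up centered.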
Upvotes: 3 [selected_answer]
|
2018/03/15
| 1,754 | 4,896 |
<issue_start>username_0: I'm trying to use the sf package in R to see if an sf object is within another sf object with the `st_within` function. My issue is with the output of this function, which is a sparse geometry binary predicate (`sgbp`), and I need a vector as an output so that I can use the `dplyr` package afterwards for filtering. Here is a simplified example:
```
# object 1: I will test if it is inside object 2
df <- data.frame(lon = c(2.5, 3, 3.5), lat = c(2.5, 3, 3.5), var = 1) %>%
st_as_sf(coords = c("lon", "lat"), dim = "XY") %>% st_set_crs(4326) %>%
summarise(var = sum(var), do_union = F) %>% st_cast("LINESTRING")
# object 2: I will test if it contains object 1
box <- data.frame(lon = c(2, 4, 4, 2, 2), lat = c(2, 2, 4, 4,2), var = 1) %>%
st_as_sf(coords = c("lon", "lat"), dim = "XY") %>% st_set_crs(4326) %>%
summarise(var = sum(var), do_union = F) %>% st_cast("POLYGON")
# test 1
df$indicator <- st_within(df$geometry, box$geometry) # gives geometric binary predicate on pairs of sf sets which cannot be used
df <- df %>% filter(indicator == 1)
```
This gives Error: Column `indicator` must be a 1d atomic vector or a list.
I tried solving this problem below:
```
# test 2
df$indicator <- st_within(df$geometry, box$geometry, sparse = F) %>%
diag() # gives matrix that I convert with diag() into vector
df <- df %>% filter(indicator == FALSE)
```
This works, it removes the row that contains TRUE values but the process of making a matrix is very slow for my calculations since my real data contains many observations. Is there a way to make the output of `st_within` a character vector, or maybe a way to convert `sgbp` to a character vector compatible with `dplyr` without making a matrix?<issue_comment>username_1: Instead of using the `st_within` function directly, try using a `spatial join`.
Check out the following example how st\_joins works
```
library(sf)
library(tidyverse)
lines <-
data.frame(id=gl(3,2), x=c(-3,2,6,11,7,10), y=c(-1,6,-5,-9,10,5)) %>%
st_as_sf(coords=c("x","y"), remove=F) %>%
group_by(id) %>%
summarise() %>%
st_cast("LINESTRING")
yta10 <-
st_point(c(0, 0)) %>%
st_buffer(dist = 10) %>%
st_sfc() %>%
st_sf(yta = "10m")
```
With a left join all lines are kept, but you can see which of them that are located inside the polygon
```
lines %>% st_join(yta10, left=TRUE)
```
An inner join (left = FALSE) only keeps the ones inside
```
lines %>% st_join(yta10, left=FALSE)
```
The latter can also be obtained by
```
lines[yta10,]
```
Upvotes: 1 <issue_comment>username_2: The result of `is_within` is in truth a list column, so you can work your
way out of this by "unlisting" it. Something like this would work:
```r
library(dplyr)
library(sf)
# object 1: I will test if it is inside object 2 - to make this more interesting
# I added a second not-contained line
df <- data.frame(lon = c(2.5, 3, 3.5), lat = c(2.5, 3, 3.5), var = 1) %>%
st_as_sf(coords = c("lon", "lat"), dim = "XY") %>% st_set_crs(4326) %>%
summarise(var = sum(var), do_union = F) %>% st_cast("LINESTRING")
df2 <- data.frame(lon = c(4.5, 5, 6), lat = c(4.5, 5, 6), var = 2) %>%
st_as_sf(coords = c("lon", "lat"), dim = "XY") %>% st_set_crs(4326) %>%
summarise(var = sum(var), do_union = F) %>% st_cast("LINESTRING")
df3 <- rbind(df, df2)
# object 2: I will test if it contains object 1
box <- data.frame(lon = c(2, 4, 4, 2, 2), lat = c(2, 2, 4, 4,2), var = 1) %>%
st_as_sf(coords = c("lon", "lat"), dim = "XY") %>% st_set_crs(4326) %>%
summarise(var = sum(var), do_union = F) %>% st_cast("POLYGON")
plot(df3)
plot(st_geometry(box), add = TRUE)
```

```r
# see if the lines are within the box and build a data frame with results
is_within <- st_within(df3$geometry, box$geometry) %>%
lapply(FUN = function(x) data.frame(ind = length(x))) %>%
bind_rows()
# add the "indicator" to df3
df3 <- dplyr::mutate(df3, indicator = is_within$ind)
df3
#> Simple feature collection with 2 features and 2 fields
#> geometry type: LINESTRING
#> dimension: XY
#> bbox: xmin: 2.5 ymin: 2.5 xmax: 6 ymax: 6
#> epsg (SRID): 4326
#> proj4string: +proj=longlat +datum=WGS84 +no_defs
#> var indicator geometry
#> 1 3 1 LINESTRING (2.5 2.5, 3 3, 3...
#> 2 6 0 LINESTRING (4.5 4.5, 5 5, 6 6)
```
HTH
Created on 2018-03-15 by the [reprex package](http://reprex.tidyverse.org) (v0.2.0).
Upvotes: 1 <issue_comment>username_3: Here is how you can get a logical vector from sparse geometry binary predicate:
```
df$indicator <- st_within(df, box) %>% lengths > 0
```
or to subset without creating a new variable:
```
df <- df[st_within(df, box) %>% lengths > 0,]
```
I cannot test on your large dataset unfortunately, but please let me know if it is faster than the matrix approach.
Upvotes: 5 [selected_answer]
|
2018/03/15
| 1,429 | 4,292 |
<issue_start>username_0: I have a table and I'm using AngularJS to display it; there's a clear button which, if clicked, deletes all the rows from the table. I am using splice to do this, but when I try to splice, only the 1st and 3rd rows get spliced.
How do I splice all of the rows?
```
self.row.length = 3;
for (var index=0; index < self.row.length; index++) {
if (self.row[index].DDelDetailId != 0) {
self.deletedRow.push(angular.copy(self.row[index]));
}
console.log("index: "+index+" Rows: "+self.row.length)
self.row.splice(index, 1)
}
```
I already looked at all the similar questions but none of them helped me. It can splice if the `self.row.length` is 1 but if it is greater than 1 it leaves 1 row.
Below is what was printed in the console log:
```
Index: 0 Rows: 3
Index: 1 Rows: 2
```
I push all the deleted row to `self.deletedRow` then if user clicks save then the deleted rows will be deleted in the database. Each row has a delete button so user can delete all rows or delete 1 specific row.<issue_comment>username_1: >
> if clicked all the rows will be deleted from the table
>
>
>
Why are you using splice for every row when you can clear the whole array? Use `self.row = []` instead of `splice()`.
>
> As per the comment below: I actually push all the deleted row to self.deletedRow then if user clicks save then the delete rows
>
>
>
Assign the row values to `self.deletedRow` before deleting all your rows.
```
self.deleteAll=function()
{
self.deletedRow = self.row;
self.row = [];
}
```
The way above is for all rows,
and the way below is for selected rows:
```
self.deleteSingleRow = function(currentObject) // currentObject is the `ng-repeat` item; pass it to `deleteSingleRow` from the HTML
{
self.deletedRow.push(currentObject);
//do your delete service call and rebind the `row` array
}
```
Upvotes: 1 <issue_comment>username_2: As you're moving the index forward while deleting rows, you're skipping rows:
iteration 1:
```
index = 0
arr: [0, 1, 2]
arr.splice(0, 1) => arr: [1, 2] // deletes first item
```
iteration 2:
```
index = 1
arr: [1, 2]
arr.splice(1, 1) => arr: [1] // deletes second item
```
iteration 3:
```
index = 2
arr: [1]
arr.splice(2, 1) => arr [1] // tries to delete third item
```
If you delete the first item all the time, you won't skip anything:
```
arr.splice(0, 1)
```
It's also more efficient to remove all rows at once: `arr = []` or `arr.length = 0`.
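For reference, a runnable sketch (plain JS, outside Angular) of both the skipping behaviour and the always-remove-first fix:

```javascript
// Forward iteration with splice skips elements: removing index i shifts
// everything after it one slot left, but the loop still advances.
function buggyClear(arr) {
  for (var index = 0; index < arr.length; index++) {
    arr.splice(index, 1);
  }
  return arr;
}

// Always removing the first element never skips anything.
function clearAll(arr) {
  while (arr.length > 0) {
    arr.splice(0, 1);
  }
  return arr;
}

console.log(buggyClear([1, 2, 3]).length); // 1 -- one row left behind
console.log(clearAll([1, 2, 3]).length);   // 0
```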
Upvotes: 3 [selected_answer]<issue_comment>username_3: you can achieve this using splice itself
Here is the working example for whole requirement
```html
(function() {
var app = angular.module("myApp", []);
app.controller('testCtrl', function($scope) {
var self = this;
self.data = [{"Product":"Body Spray","Location":"USA","Dec-2017":"234","Jan-18":"789","Feb-18":"234","Mar-18":"789","Apr-18":"234"},{"Product":"Groceries","Location":"USA","Dec-2017":"234","Jan-18":"789","Feb-18":"234","Mar-18":"789","Apr-18":"234"},{"Product":"Ready Cook","Location":"USA","Dec-2017":"234","Jan-18":"789","Feb-18":"234","Mar-18":"789","Apr-18":"234"},{"Product":"Vegetables","Location":"USA","Dec-2017":"234","Jan-18":"789","Feb-18":"234","Mar-18":"789","Apr-18":"234"}];
self.deletedData = [];
self.duplicateData = angular.copy(self.data);
self.clearData = function(element){
if(element){
var index = self.data.indexOf(element);
if(index > -1){
self.deletedData.push(angular.copy(element));
self.data.splice(index, 1);
}
}
else{
self.deletedData = angular.copy(self.data);
self.data.splice(0, self.data.length);
}
};
self.resetData = function(element){
//The table order wont change
self.data = angular.copy(self.duplicateData);
//The table order will change in this
/*angular.forEach(self.deletedData, function (item, index) {
self.data.push(item);
});*/
self.deletedData = [];
};
});
}());
<div ng-app="myApp" ng-controller="testCtrl as ctrl">
 <button ng-click="ctrl.clearData()">Delete All</button>
 <button ng-click="ctrl.resetData()">Reset All</button>
 <table border="1">
 <tr>
 <th ng-repeat="(key, value) in ctrl.data[0]">{{ key }}</th>
 <th></th>
 </tr>
 <tr ng-repeat="row in ctrl.data">
 <td ng-repeat="(key, value) in row">{{ row[key] }}</td>
 <td><button ng-click="ctrl.clearData(row)">Delete</button></td>
 </tr>
 </table>
</div>
```
Upvotes: 1 <issue_comment>username_4: It is because self.row.length is changing dynamically as you are splicing.
Use a temp variable to store the length of your array before deleting all the rows,
e.g.:
```
var temp = self.row.length;
for (var index = 0; index < temp; index++) {
    self.row.splice(0, 1); // always remove the first remaining row
}
```
Upvotes: 0
|
2018/03/15
| 564 | 2,095 |
<issue_start>username_0: I've created a menu bar app with an NSMenu object using Interface Builder (following [this](http://footle.org/WeatherBar/) tutorial). The menu has two items:
Start Commando
Stop Commando
How can I disable/enable the menu items when they're clicked? I've set disabled "Auto Enables Items" and I can manually enable/disable the items in the Attributes inspector, but how can I achieve the same thing when their functions are called?
When "Start Commando" is clicked I want the item to disable and "Stop Commando" to enable. And the other way around when "Stop Commando" is clicked.<issue_comment>username_1: Declare a BOOL value for instance
```
BOOL isActive
if(isActive)
{
//show menu
}
else
{
//hide your menu
}
```
Also set the BOOL to true when your view is dismissed.
Upvotes: 0 <issue_comment>username_2: Swift provides with **setEnabled** property that can be used on NSMenuItem you are trying to enable or disable.
You can do the following :
```
@IBOutlet weak var startMenuItem: NSMenuItem!
startMenuItem.isEnabled = false or true
```
Upvotes: 3 [selected_answer]<issue_comment>username_3: You can try the code below:
```
let menu = NSMenu();
menu.autoenablesItems = false
```
Upvotes: 2 <issue_comment>username_4: As others say, there is a `isEnabled` property for `NSMenuItem`s. One also needs to uncheck `Auto Enables Items` for that menu or sub-menu in the Attributes Inspector in Xcode, or through code, to allow the setting to take effect.
[](https://i.stack.imgur.com/3cstD.png)
To get it to change on selection, in the `IBAction` called for the menu item, likely in your `NSWindowController`, do something like this:
```
@IBAction private func myMenuAction(sender: NSMenuItem) {
sender.isEnabled = false
}
```
You will not be able to then select the menu item afterwards. I assume you re-enable it else where as so:
```
if let appDelegate = NSApplication.shared.delegate as? AppDelegate {
appDelegate.myMenuItem.isEnabled = true
}
```
Code untested.
Upvotes: 1
|
2018/03/15
| 709 | 2,527 |
<issue_start>username_0: ```js
//routes
const AppRoute = () => {
return (
);
};
export default AppRoute;
//store
const store = createStore(reducers, applyMiddleware(Promise));
ReactDOM.render(
,
document.getElementById("root")
);
```
I use react and redux.
I created a BookShow component to show data of one book. Data loads correctly but when I refresh the page I get this error:
Type Error: Cannot read property 'title' of undefined and hole state is undefined.
Why did this happen and how can I prevent it from happening?
this is my code
```js
import React from 'react';
import {connect} from 'react-redux'
const BookShow = props => {
  if (!props) {
    return <div>loading...</div>;
  }
  return (
    <div>
      <h3>{props.book.title}</h3>
      <hr />
      <p>{props.book.body}</p>
      {console.log(props)}
    </div>
  );
};
const mapStateToProps = (state, props) => {
return {
book: state.books.find((book) => {
return book.id === props.match.params.id
})
}
};
export default connect(mapStateToProps)(BookShow);
```
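In this setup `props` itself is always defined, so `if (!props)` never fires; after a hard refresh the Redux store is re-created, `state.books` starts out empty, and `find` returns `undefined`. A runnable sketch of the failure and of guarding on `props.book` instead (plain JS; the state shape is taken from the question):

```javascript
// After a hard refresh the store is rebuilt, so books is empty again:
const state = { books: [] };

const mapStateToProps = (state, props) => ({
  book: state.books.find(book => book.id === props.match.params.id),
});

const props = mapStateToProps(state, { match: { params: { id: "42" } } });
console.log(props.book); // undefined -- this is what makes .title throw

// Guard on the missing book, not on props:
const render = p => (p.book ? p.book.title : "loading...");
console.log(render(props)); // loading...
```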
|
2018/03/15
| 997 | 3,504 |
<issue_start>username_0: When I do something like:
```
(def x 123)
(future (def x 456))
```
The `def` in the second thread ends up modifying the value in the main thread. I understand this is not idiomatic and that I should be using atoms or something more complex. That aside, however, this is contrary to my expectation because I've read in various places that vars are "dynamic" or "thread-local".
So, what exactly is happening here? Is the second thread making an unsafe assignment, akin to what would happen if you did the equivalent in C? If so, does clojure "allow" for the possibility of other unsafe operations like appending to a list from multiple threads and ending up with an inconsistent data structure?<issue_comment>username_1: Thread safe modification of the root binding of vars can be done with `alter-var-root`:
```
(do (future (Thread/sleep 10000) (alter-var-root #'v inc)) (def v 2))
```
Calling `def` multiple times with the same name just overwrites the root binding where the last call wins.
However, in idiomatic Clojure `def` is to be used only at the top level (exceptions like macros aside).
Upvotes: 1 <issue_comment>username_2: First, [`def`](https://clojure.org/reference/special_forms#def) is a *special form* in Clojure, worth reading up about.
>
> I've read in various places that vars are "dynamic" or "thread-local".
>
>
>
They can be but this isn't the typical usage. From the guide:
>
> `def` always applies to the root binding, even if the var is thread-bound at the point where def is called.
>
>
>
To demonstrate this:
```
(def ^:dynamic foo 1)
(binding [foo 2] ;; thread-local binding for foo
(prn foo) ;; "2"
(def foo 3) ;; re-defs global foo var
(prn foo)) ;; "2" (still thread-local binding value)
(prn foo) ;; "3" (now refers to replaced global var)
```
And with multiple threads:
```
(def ^:dynamic foo 1)
(future
(Thread/sleep 500)
(prn "from other thread" foo))
(binding [foo 2]
(prn "bound, pre-def" foo)
(def foo 3)
(Thread/sleep 1000)
(prn "bound, post-def" foo))
(prn "finally" foo)
;; "bound, pre-def" 2
;; "from other thread" 3
;; "bound, post-def" 2
;; "finally" 3
```
>
> So, what exactly is happening here? Is the second thread making an unsafe assignment, akin to what would happen if you did the equivalent in C?
>
>
>
Depends on your definition of unsafe, but it's certainly *uncoordinated and non-atomic* with regard to multiple threads. You could use `alter-var-root` to change the var atomically, or use something like a `ref` or `atom` for mutable state.
>
> If so, does clojure "allow" for the possibility of other unsafe operations like appending to a list from multiple threads and ending up with an inconsistent data structure?
>
>
>
Not with its persistent data structures, which are conceptually *copy-on-write* (though the copies share common knowledge for efficiency). This confers a lot of benefits when writing multi-threaded code in Clojure and other functional languages. When you append to a (persistent) list data structure, you're not modifying the structure in-place; you get back a new "copy" of the structure with your change. How you handle that new value, presumably by sticking it in some global "bucket" like a var, ref, atom, etc., determines the "safety" or atomicity of the change.
You *could* however easily modify one of Java's thread-unsafe data structures from multiple threads and end up in a bad place.
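For completeness, a minimal sketch of holding mutable state in an atom instead of re-`def`ing a var:

```clojure
(def counter (atom 0))            ; shared, thread-safe mutable state
(dotimes [_ 100]
  (future (swap! counter inc)))   ; swap! applies inc atomically, retrying on contention
(Thread/sleep 200)                ; crude wait for the futures in this sketch
@counter                          ; => 100 once all the futures have run
```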
Upvotes: 4 [selected_answer]
|
2018/03/15
| 1,025 | 3,538 |
<issue_start>username_0: I'm creating a general purpose queue on Firebase Cloud Functions to run a huge list of tasks. I was wondering if I can use `.on('child_added')` to get new tasks pushed to the queue.
The problem I was getting is that my queue breaks in the middle randomly after 10 minutes, or sometimes 15 minutes.
```
admin.database().ref('firebase/queues/').on('child_added', async snap => {
let data = snap.val();
console.log(data);
try {
await queueService.start(data);
} catch (e) {
console.log(e.message);
}
snap.ref.remove();
});
```
Or shall i go back to use triggers?
```
functions.database.ref('firebase/queues/{queueId}').onCreate(event => {
return firebaseQueueTrigger(event);
});
```
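A note on the symptom: Cloud Functions instances are ephemeral, so a long-lived `.on('child_added')` listener registered at the top level dies whenever the instance is recycled, which matches the queue breaking after 10 to 15 minutes. The trigger form is the supported pattern; a sketch (using the pre-1.0 `firebase-functions` event API and the `queueService` from the question):

```javascript
exports.processQueue = functions.database
  .ref('firebase/queues/{queueId}')
  .onCreate(event => {
    const data = event.data.val();
    // Return the promise chain so the runtime keeps the function alive
    // until the work (and the cleanup) has finished.
    return queueService.start(data)
      .then(() => event.data.ref.remove());
  });
```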
|
2018/03/15
| 2,521 | 9,768 |
<issue_start>username_0: I am trying to submit a transaction to Hyperledger Sawtooth v1.0.1 using javascript to a validator running on localhost. The code for the post request is as below:
```
request.post({
url: constants.API_URL + '/batches',
body: batchListBytes,
headers: { 'Content-Type': 'application/octet-stream' }
}, (err, response) => {
if (err) {
console.log(err);
return cb(err)
}
console.log(response.body);
return cb(null, response.body);
});
```
The transaction gets processed when submitted from a backend Node.js application, but it returns an `OPTIONS http://localhost:8080/batches 405 (Method Not Allowed)` error when submitted from the client. These are the options that I have tried:
1. Inject `Access-Control-Allow-*` headers into the response using an extension: The response still gives the same error
2. Remove the custom header to bypass preflight request: This makes the validator throw an error as shown:
```
...
sawtooth-rest-api-default | KeyError: "Key not found: 'Content-Type'"
sawtooth-rest-api-default | [2018-03-15 08:07:37.670 ERROR web_protocol] Error handling request
sawtooth-rest-api-default | Traceback (most recent call last):
...
```
The unmodified `POST` request from the browser gets the following response headers from the validator:
```
HTTP/1.1 405 Method Not Allowed
Content-Type: text/plain; charset=utf-8
Allow: GET,HEAD,POST
Content-Length: 23
Date: Thu, 15 Mar 2018 08:42:01 GMT
Server: Python/3.5 aiohttp/2.3.2
```
So, I guess `OPTIONS` method is not handled in the validator. A `GET` request for the state goes through fine when the CORS headers are added. This issue was also not faced in Sawtooth v0.8.
I am using docker to start the validator, and the commands to start it are a slightly modified version of those given in the LinuxFoundationX: LFS171x course. The relevant commands are below:
```
bash -c \"\
sawadm keygen && \
sawtooth keygen my_key && \
sawset genesis -k /root/.sawtooth/keys/my_key.priv && \
sawadm genesis config-genesis.batch && \
sawtooth-validator -vv \
--endpoint tcp://validator:8800 \
--bind component:tcp://eth0:4004 \
--bind network:tcp://eth0:8800
```
Can someone please guide me as to how to solve this problem?<issue_comment>username_1: CORS issues are always the best.
### What is CORS?
Your browser is trying to protect users from being directed to a page they *think* is the frontend for an API, but is actually fraudulent. Anytime a web page tries to access an API on a different domain, that API will need to explicitly give the web page permission, or the browser will block the request. This is why you can query the API from Node.js (no browser), and can put the REST API address directly into your address bar (same domain). However, trying to go from `localhost:3000` to `localhost:8008` or from `file://path/to/your/index.html` to `localhost:8008` is going to get blocked.
Why doesn't the Sawtooth REST API handle OPTIONS requests?
----------------------------------------------------------
The Sawtooth REST API does not know the domain you are going to run your web page from, so it can't whitelist it explicitly. It is possible to whitelist *all* domains, but this obviously destroys any protection CORS might give you. Rather than try to weigh the costs and benefits of this approach for all Sawtooth users everywhere, the decision was made to make the REST API as lightweight and security agnostic as possible. Any developer using it would be expected to put it behind a proxy server, and they can make whatever security decisions they need on that proxy layer.
So how do you fix it?
---------------------
You need to setup a proxy server that will put the REST API and your web page on the same domain. There is no quick configuration option for this. You will have to set up an actual server. Obviously there are lots of ways to do this. If you are already familiar with Node, you could serve the page from Node.js, and then have the Node server proxy the API calls. If you are already running all of the Sawtooth components with `docker-compose` though, it might be easier to use Docker and Apache.
Setting up an Apache Proxy with Docker
--------------------------------------
### Create your Dockerfile
In the same directory as your web app create a text file called "Dockerfile" (no extension). Then make it look like this:
```
FROM httpd:2.4
RUN echo "\
LoadModule proxy_module modules/mod_proxy.so\n\
LoadModule proxy_http_module modules/mod_proxy_http.so\n\
ProxyPass /api http://rest-api:8008\n\
ProxyPassReverse /api http://rest-api:8008\n\
RequestHeader set X-Forwarded-Path \"/api\"\n\
" >>/usr/local/apache2/conf/httpd.conf
```
This is going to do a couple of things. First it will pull down the `httpd` module from DockerHub, which is just a simple static server. Then we are using a bit of bash to add five lines to Apache's configuration file. These five lines import the proxy modules, tell Apache that we want to proxy `http://rest-api:8008` to the `/api` route, and set the `X-Forwarded-Path` header so the REST API can properly build response URLs. Make sure that `rest-api` matches the actual name of the Sawtooth REST API service in your docker compose file.
### Modify your docker compose file
Now, to the docker compose YAML file you are running Sawtooth through, you want to add a new property under the `services` key:
```
services:
my-web-page:
build: ./path/to/web/dir/
image: my-web-page
container_name: my-web-page
volumes:
- ./path/to/web/dir/public/:/usr/local/apache2/htdocs/
expose:
- 80
ports:
- '8000:80'
depends_on:
- rest-api
```
This will build your Dockerfile located at `./path/to/web/dir/Dockerfile` (relative to the docker compose file), and run it with its default command, which is to start up Apache. Apache will serve whatever files are located in `/usr/local/apache2/htdocs/`, so we'll use `volumes` to link the path to your web files on your host machine (i.e. `./path/to/web/dir/public/`), to that directory in the container. This is basically an alias, so if you update your web app later, you don't need to restart this docker container to see the changes. Finally, `ports` will take the server, which is at port `80` inside the container, and forward it out to `localhost:8000`.
### Running it all
Now you should be able to run:
```
docker-compose -f path/to/your/compose-file.yaml up
```
And it will start up your Apache server along with the Sawtooth REST API and validator and any other services you defined. If you go to `http://localhost:8000`, you should see your web page, and if you go to `http://localhost:8000/api/blocks`, you should see a JSON representation of the blocks on chain. More importantly you should be able to make the request from your web app:
```
request.post({
url: 'api/batches',
body: batchListBytes,
headers: { 'Content-Type': 'application/octet-stream' }
}, (err, response) => console.log(response) );
```
---
*Whew*. Sorry for the long response, but I'm not sure if it is possible to solve CORS any faster. Hopefully this helps.
Upvotes: 4 [selected_answer]<issue_comment>username_2: The transaction header should have details like the address of the block where it would be saved. Here is an example which I have used and is working fine for me:
```
String payload = "create,0001,BLockchain CPU,Black,5000";
logger.info("Sending payload as - "+ payload);
String payloadBytes = Utils.hash512(payload.getBytes()); // fix for invalid payload serialization
ByteString payloadByteString = ByteString.copyFrom(payload.getBytes());
String address = getAddress(IDEM, ITEM_ID); // get unique address for input and output
logger.info("Sending address as - "+ address);
TransactionHeader txnHeader = TransactionHeader.newBuilder().clearBatcherPublicKey()
.setBatcherPublicKey(publicKeyHex)
.setFamilyName(IDEM) // Idem Family
.setFamilyVersion(VER)
.addInputs(address)
.setNonce("1")
.addOutputs(address)
.setPayloadSha512(payloadBytes)
.setSignerPublicKey(publicKeyHex)
.build();
ByteString txnHeaderBytes = txnHeader.toByteString();
byte[] txnHeaderSignature = privateKey.signMessage(txnHeaderBytes.toString()).getBytes();
String value = Signing.sign(privateKey, txnHeader.toByteArray());
Transaction txn = Transaction.newBuilder().setHeader(txnHeaderBytes).setPayload(payloadByteString)
.setHeaderSignature(value).build();
BatchHeader batchHeader = BatchHeader.newBuilder().clearSignerPublicKey().setSignerPublicKey(publicKeyHex)
.addTransactionIds(txn.getHeaderSignature()).build();
ByteString batchHeaderBytes = batchHeader.toByteString();
byte[] batchHeaderSignature = privateKey.signMessage(batchHeaderBytes.toString()).getBytes();
String value_batch = Signing.sign(privateKey, batchHeader.toByteArray());
Batch batch = Batch.newBuilder()
.setHeader(batchHeaderBytes)
.setHeaderSignature(value_batch)
.setTrace(true)
.addTransactions(txn)
.build();
BatchList batchList = BatchList.newBuilder()
.addBatches(batch)
.build();
ByteString batchBytes = batchList.toByteString();
String serverResponse = Unirest.post("http://localhost:8008/batches")
.header("Content-Type", "application/octet-stream")
.body(batchBytes.toByteArray())
.asString()
.getBody();
```
Upvotes: 0
|
2018/03/15
| 1,138 | 3,586 |
<issue_start>username_0: I'm having a lot of trouble with a question we have been given in class Re: Conditional Statements.
I'm struggling to comprehend how I can have two variables (`"year"` and `"s"`) being able to go into one `{}`.
I must display a person's age in years (having taken their year of birth):
>
> Work out which year the user was born in and display the appropriate
> message from below
>
>
>
> ```
> <name>, it is now <current year>, you are <age> year old and so you were born in <birth year>.
>
> <name>, it is now <current year>, you are <age> years old and so you were born in <birth year>.
>
> ```
>
> The message used above must be grammatically correct for the age.
>
>
> * <name> is the user's first name.
> * <current year> is the current year.
> * <age> is the age in years of the user.
> * <birth year> is the year the user was born in.
>
>
>
and some examples they give:
>
>
> ```
> Please enter your first name: Jane
> Hi Jane, please enter your age in years: 1
> Jane, it is now 2018, you are 1 year old and so you were born in 2017.
>
> ```
>
>
and
>
>
> ```
> Please enter your first name: Elaine
> Hi Elaine, please enter your age in years: 22
> Elaine, it is now 2018, you are 22 years old and so you were born in 1996.
>
> ```
>
>
I must use the following strings:
>
>
> ```
> "{}, it is now {}, you are {} {} old and so you were born in {}."
> "year"
> "s"
>
> ```
>
><issue_comment>username_1: Your post contains a lot of information and implicit questions, so I will constrain my answer to the explicit one:
>
> I'm struggling to comprehend how I can have two variables ("year" and "s") being able to go into one {}.
>
>
>
The string that is going to contain either `1 year` or `[more than one] years` is this one: `"{}, it is now {}, you are {} {} old and so you were born in {}."`, and it conveniently contains two format placeholders for the number and "year" or "years" each. Picking a minimal example, this might give you an idea how you can solve your problem:
```
>>> s = "you are {} {} old"
... options = [0, 1, 2, 3]
... for op in options:
... year_label = "year"
... if op != 1:
... year_label += "s"
... print(s.format(op, year_label))
```
Prints:
```
you are 0 years old
you are 1 year old
you are 2 years old
you are 3 years old
```
There might be more descriptive ways to write this, and I don't know if it will sound right for every possible input. But the important point is that not every placeholder in your output maps to one in- or out-put variable; in this case you need to derive one value based on the input.
Upvotes: 1 <issue_comment>username_2: Based on the placeholders that you need, here is some code:
```
year = 2018
name = input("Please enter your first name: ")
age = input("Hi " + name.title() + ", please enter your age in years: ")
age = int(age)
age_minus_year = (year - age)
if age != 1:
print("{}, it is now {}, you are {}{} old and so you were born in {}.".format(name.title(), year, age, ' years', str(age_minus_year)))
else:
print("{}, it is now {}, you are {}{} old and so you were born in {}.".format(name.title(), year, age, ' year', str(age_minus_year)))
```
Here is your output:
```
Please enter your first name: jane
Hi Jane, please enter your age in years: 1
Jane, it is now 2018, you are 1 year old and so you were born in 2017.
```
Based on what you mentioned your code required, this code should give you the format that you need. You simply need to use the .format method here. The if statement will simply determine if the print statement should state years or year depending on what the user enters.
Upvotes: 0
|
2018/03/15
| 1,318 | 4,509 |
<issue_start>username_0: I'm looking for a way to extract the git commit message directly from the committed source file without invoking the editor or similar.
Our department has just started working with git, and for legacy reasons the changes made are written at the top of the source file:
```
#!/usr/local/bin/php
<?php
//
// @(#)insert_AX_serialno.php Arrow/ECS/EMEA/EA/nb 1.6 2018-03-14, 13:41:20 CET
//
// V 1.6: Now also check the Warehouse Code of an item before inserting the serial
// 2018-03-07 number. The Warehouse Code must not be equal to any of three values (see below).
//
// V 1.5: Now also check the Storage Dimensiaon of an item before inserting the serial
// 2018-03-07 number. The Storage Dimension must be equal to the constant "PHYSICAL".
//
// V 1.4: introduced an "Environment" variable which determines the target of the GetAXPO...
// 2018-02-21 functions (DEV, UAT or PROD). The variable can either be set explicitly or gets
// its value from the $_ENV array.
//
// V 1.3: stop processing if a line does not have the necessary Approval Status and
// 2018-02-20 PO Status
//
// V 1.2: Every insert requires now a RECID; either for the line or for the header.
// 2017-12-20 So we're selecting the RECID from the AX table if it's not provided as
```
Now I would like to take the commit message directly from the source code instead of typing it again, e.g. the commit message should (in this example) read as "V 1.6 - 2018-03-07
Now also check the Warehouse Code of an item before inserting the serial number. The Warehouse Code must not be equal to any of three values (see below)."
I'm new to git, and all I could extract from the githooks man page was that I can *prepare* the message with a hook, but not *replace* it.
My idea is that I can commit a file with `git commit` and git fetches the relevant message from the source file ...
The question is:
1) Does a hook know which file(s) is/are being committed? If yes, is it a parameter to the hook or an environment variable?
2) Can a hook prepare a message file out of the source file and make git use that file instead of opening the editor (of course without using the "-m" parameter)?
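For reference, a minimal sketch of this idea: a `prepare-commit-msg` hook receives the path of the commit-message file as its first argument, and the staged files can be listed with `git diff --cached --name-only`. The extraction step below is demonstrated on a generated sample file so it runs standalone; the header format is assumed from the question, and this is not a complete, production-ready hook:

```shell
#!/bin/sh
# Sketch of the extraction a .git/hooks/prepare-commit-msg hook could do.
# In a real hook, git passes $1 = path of the commit-message file and
# $2 = commit source (empty for a plain `git commit`), and the staged
# files come from: git diff --cached --name-only
# Here we demo only the extraction step on a sample file.

cat > sample_source.php <<'EOF'
#!/usr/local/bin/php
<?php
//
// V 1.6: Now also check the Warehouse Code of an item before inserting the serial
// 2018-03-07 number. The Warehouse Code must not be equal to any of three values.
//
// V 1.5: older entry
//
EOF

# Copy the topmost "// V x.y: ..." block, minus the comment markers, into
# the commit-message file; stop at the first bare "//" line after it starts.
awk '
  found && /^\/\/[[:space:]]*$/ { exit }
  /^\/\/ V [0-9]/               { found = 1 }
  found                         { sub(/^\/\/ ?/, ""); print }
' sample_source.php > commit_msg.txt

cat commit_msg.txt
```

In a real hook you would replace `sample_source.php` with the first staged file and `commit_msg.txt` with `"$1"`; because `prepare-commit-msg` runs before the editor opens, whatever it writes becomes the pre-filled message.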
|
2018/03/15
| 189 | 666 |
<issue_start>username_0: I want to encircle react-native-vector icons. I have added a border radius in the style but it not helpful for all devices and also with every icon it behaves different.
```
```
Link to react native vector icons:
<https://oblador.github.io/react-native-vector-icons/>
[](https://i.stack.imgur.com/JUaga.png)<issue_comment>username_1: Try to wrap it inside a `View` as a container
```
```
Change the width and height to your own preference of course.
Upvotes: 2 <issue_comment>username_2: Try adding the `overflow:"hidden"` option to your style
```
```
Upvotes: 3
|
2018/03/15
| 618 | 2,070 |
<issue_start>username_0: I have 2 tables:
Table called "Join\_of\_LIN\_and\_LI2" having 2 columns as follows:
```
Fcode | Description
103 |
4301 |
5200 |
```
and the second table called "Diam"having 2 columns as follows:
```
Fcode | Description
5200 | S Force Line
```
I want to fill the field "Description" of the table "Join\_of\_LIN\_and\_LI2" with the values of the field "Description" of the table "Diam"
when the field "Fcode" of table "Join\_of\_LIN\_and\_LI2" is equal to the field "Fcode" of table "Diam".
What SQL statement I should use?
I tried the below but doesn't work
```
UPDATE Join_of_LIN_and_LI2
SET Description = (SELECT Diam.Description
FROM Join_of_LIN_and_LI2
LEFT JOIN Diam ON Join_of_LIN_and_LI2.Fcode = Diam.Fcode;)
```
It gives an error "Operation must use an updateable query" although I am sure that the user has the permissions to write on that file.
Thanks<issue_comment>username_1: You don't need to use a subquery for this. You can just use a join in the `UPDATE` query:
```
UPDATE Join_of_LIN_and_LI2
INNER JOIN Diam
ON Join_of_LIN_and_LI2.Fcode = Diam.Fcode
SET Join_of_LIN_and_LI2.Description = Diam.Description;
```
The *Operation must use an updateable query* refers to the fact that the query is not updateable. That can happen for many reasons, but in this case, it's because you're using a subquery.
Upvotes: 3 [selected_answer]<issue_comment>username_2: You can use a subquery, but it should be a *correlated* subquery:
```
UPDATE Join_of_LIN_and_LI2
SET Description = (SELECT Diam.Description
FROM Diam
WHERE Join_of_LIN_and_LI2.Fcode = Diam.Fcode
);
```
Note: This is different from the `JOIN` in two ways. First, this will set `Description` to `NULL` when there are no matches. The `JOIN` version will not change the existing value. Second, if there are multiple matches in `Diam`, then this returns a "subquery returns more than one row" type of error.
Upvotes: 1
|
2018/03/15
| 394 | 1,345 |
<issue_start>username_0: I have the following bat file:
```
echo on
copy "camera uploads\*.pdf" "C:\Users\user\OneDrive\תיקיה 2018"
pause
```
it fails due to "file system cannot find the path specified"
what do I miss?
Thanks!
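One common cause of "The system cannot find the path specified" here is that the batch file is not run with its own folder as the working directory (for example when started from a shortcut or the Task Scheduler), so the relative path `camera uploads\*.pdf` resolves somewhere else. A sketch that anchors the path to the script's own location; this assumes the folder lives next to the `.bat` file and is a guess, not a confirmed diagnosis:

```
echo on
cd /d "%~dp0"
copy "camera uploads\*.pdf" "C:\Users\user\OneDrive\תיקיה 2018"
pause
```

`%~dp0` expands to the drive and directory of the running script, so the `copy` now looks for `camera uploads` next to the batch file.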
|
2018/03/15
| 1,543 | 6,609 |
<issue_start>username_0: I have applied Passport auth in Laravel. I have done this on my local machine and an AWS server too. Now I am trying to apply the same to shared hosting, where I won't be able to access the terminal. So I am curious to know: is it possible to set up Passport without `php artisan passport:install`?<issue_comment>username_1: Try to make a controller with a connected HTTP route, and put
```
Artisan::call('passport:install');
```
there. Then go to the url to run the command.
Upvotes: -1 [selected_answer]<issue_comment>username_2: Put the command in your composer.json post install script:
```
"post-install-cmd": [
"php artisan passport:install"
]
```
There are many different events you can hook into other than post install as well. [Composer events](https://getcomposer.org/doc/articles/scripts.md#installer-events)
Upvotes: 0 <issue_comment>username_3: Usually you would use the following code in your controller to execute an Artisan call:
```php
Artisan::call('passport:install');
```
However, this doesn't work on passport:install and you will get the error:
>
> There are no commands defined in the "passport" namespace
>
>
>
To fix this you must add the following code to boot method at AppServiceProvider.php :
```
<?php

namespace App\Providers;

use Laravel\Passport\Console\ClientCommand;
use Laravel\Passport\Console\InstallCommand;
use Laravel\Passport\Console\KeysCommand;
use Laravel\Passport\Passport;
use Illuminate\Support\Facades\Schema;
use Illuminate\Support\ServiceProvider;

class AppServiceProvider extends ServiceProvider
{
    /**
     * Bootstrap any application services.
     *
     * @return void
     */
    public function boot()
    {
        Schema::defaultStringLength(191);
        Passport::routes();

        /* ADD THESE LINES */
        $this->commands([
            InstallCommand::class,
            ClientCommand::class,
            KeysCommand::class,
        ]);
    }
}
```
Upvotes: 4 <issue_comment>username_4: This code works with no error
```
shell_exec('php ../artisan passport:install');
```
Upvotes: 3 <issue_comment>username_5: If you want to run commands in a controller, this is the complete code that you need. Two commands will not run with `Artisan::call`, so you should run them with `shell_exec`:
```
public function getCommand($command)
{
echo '
php artisan ' . $command . ' is running...';
$output = new BufferedOutput;
if(strpos($command, 'api') === false && strpos($command, 'passport') === false){
Artisan::call($command, [], $output);
}else{
shell_exec('php ../artisan ' . $command);
dump('php ../artisan ' . $command);
}
dump($output->fetch());
echo 'php artisan ' . $command . ' completed.';
echo '
[Go back](/admin/setting/advance)';
}
```
This is a list of all the commands:
```
$commands = [
[
'id' => 1,
'description' => 'recompile classes',
'command' => 'clear-compiled',
],
[
'id' => 2,
'description' => 'recompile packages',
'command' => 'package:discover',
],
[
'id' => 3,
'description' => 'run backup',
'command' => 'backup:run',
],
[
'id' => 4,
'description' => 'create password for passport',
'command' => 'passport:client --password',
],
[
'id' => 5,
'description' => 'install passport',
'command' => 'passport:install',
],
[
'id' => 6,
'description' => 'create a document for api',
'command' => 'apidoc:generate',
],
[
'id' => 7,
'description' => 'show list of routes',
'command' => 'route:list',
],
[
'id' => 8,
'description' => 'recompile config cache',
'command' => 'config:cache',
],
[
'id' => 9,
'description' => 'clear config cache',
'command' => 'config:clear',
],
[
'id' => 10,
'description' => 'run lastest migrations',
'command' => 'migrate',
],
[
'id' => 11,
'description' => 'run seeders',
'command' => 'db:seed',
],
[
'id' => 12,
'description' => 'recompile route cache',
'command' => 'route:cache',
],
[
'id' => 13,
'description' => 'clear route cache',
'command' => 'route:clear',
],
[
'id' => 14,
'description' => 'recompile view cache',
'command' => 'view:cache',
],
[
'id' => 15,
'description' => 'clear view cache',
'command' => 'view:clear',
],
[
'id' => 16,
'description' => 'optimize all configurations',
'command' => 'optimize',
],
];
```
Upvotes: 0 <issue_comment>username_6: Your Passport migrations won't run when not calling a command from the console, as Passport only registers its commands when the application is running in console mode.
To get around this, we need to register the migrations and commands.
Do the following in your `AuthServiceProvider`:
```php
<?php
namespace App\Providers;
use Illuminate\Foundation\Support\Providers\AuthServiceProvider as ServiceProvider;
use Laravel\Passport\Console\ClientCommand;
use Laravel\Passport\Console\InstallCommand;
use Laravel\Passport\Console\KeysCommand;
use Laravel\Passport\Passport;
class AuthServiceProvider extends ServiceProvider
{
/**
* @var array
*/
protected $policies = [
// 'App\Model' => 'App\Policies\ModelPolicy',
];
public function boot(): void
{
Passport::routes();
if ($this->app->environment() !== 'production') {
$this->loadMigrationsFrom(base_path('vendor/laravel/passport/database/migrations'));
$this->commands([
InstallCommand::class,
ClientCommand::class,
KeysCommand::class,
]);
}
}
}
```
Upvotes: 2
|
2018/03/15
| 1,254 | 4,905 |
<issue_start>username_0: I am trying to setup PnP Partner pack on azure following this [youtube video](https://www.youtube.com/watch?v=ezWYorZClTI&t=1796s). As my trial azure storage subscription has expired, I am trying to use Azure storage of my company, but their Azure account is not connected to an office 365 tenant. So, I created a trial office 365 account and now I am trying to connect office 365 to Azure storage.
These are not under the same account. Can someone help me set this up?
**Edit 1**
suppose you have Azure Tenant called <EMAIL>. This account is not coupled to an office 365 tenant. It, however, has a valid storage subscription.
So, I deploy my app in the app service in my Azure tenant. This app has to communicate with SharePoint online. But as this azure tenant does not have an office 365 tenant coupled to it, I thought of creating a trial office 365 tenant, for example: <EMAIL>. Now, the question is, how can I configure Azure tenant (storage) to communicate with office 365, that is another tenant account?
**Edit 2**
OK, I have deployed the PnP Partner Pack to my company's Azure storage account. How does it work? The application is an MVC application. Before I deploy it I have to do the following:
1. Create a storage account
2. Create a web app in this storage account
3. Register the web app in AAD
4. Assign the following permissions to the web app: SharePOint, Graph
5. Insert the application ID and secret key in the web config of the solution
6. Assign URL that needs to access the sharepoint Site collection online inside web.config
7. Deploy the solution to Azure Web Application
Once it is deployed then I can open the web application which now has access to SharePoint online.
The problem?
As long as the Office 365 and Azure Tenant account are the same there is no problem. But now that I don't have anymore the same account for Azure Tenant and office 365, I cannot access sharepoint from my Azure web application. I don't know how to set up the application registration in Azure AD so that it can access sharepoint in another office 365 tenant.
Eg.: Azure Tenant name "<EMAIL>" needs to access SharePoint, Graph and AAD in office 365 which has the following tenant account "<EMAIL>".
How can I set it up so from my web application in Azure Web application "<EMAIL>") I can access the following SharePoint, Graph and AAD in another office 365 tenant account ("<EMAIL>")?
Edit 3
Web app that lives in Azure "<EMAIL>" account needs to access **users** (AAD), SharePoint and Graph of the other office 365 account, i.e: "<EMAIL>".
Hope it is clear.<issue_comment>username_1: Updated answer:
===============
---
Well, you have finally made your scenario clear.
First, **if your subscription trial is expired, you shouldn't use it to run anything in that subscription any more**. You'd better contact Azure support to move resources to a new subscription, or back up your data to your local machine.
Second, about how to access APIs in another AAD tenant:
If your subscription is not expired, you can achieve that. Actually, it does not matter where your MVC application is. It can be in any tenant, just not one with an expired subscription. The only difference is that the AAD tenant is changed.
It should use the client credentials flow. What you need to change is:
1. Register AAD application in your company AD tenant
2. change the AAD endpoint in your `web.config` to `https://login.microsoftonline.com/somecompany.onmicrosoft.com/oauth2/v2.0/token`
Then you can use the token to access the resources. Completing this configuration may need Global admin permissions to grant admin consent, so you'd better be the admin of your company tenant.
I'm not 100% sure whether this can be configured in your web.config file. There is [a document](https://learn.microsoft.com/en-us/azure/active-directory/develop/active-directory-protocols-oauth-service-to-service) that may be helpful for understanding this authentication flow.
Hope this helps!
Upvotes: -1 <issue_comment>username_2: >
> but this account does not have an office 365 tenant associated to it.
>
>
>
You could refer this [article](https://social.technet.microsoft.com/wiki/contents/articles/24728.using-an-office-365-tenant-in-the-azure-management-portal.aspx).
>
> But what if my subscription has already been created with a Microsoft account not associated with new Office 365 directory, or has an organizational account associated with another Office 365 directory that I want to associate with this Office 365 directory?
>
>
> In this walkthrough, we will:
>
>
>
And you know the trial version has time and feature limits. For further development, I suggest you buy an Office 365 account to test more features.
Upvotes: 0
|
2018/03/15
| 302 | 1,079 |
<issue_start>username_0: [](https://i.stack.imgur.com/SEdL6.png)
I want to change all these 'int' to 'bool',
in VS Code, the shortcut is `CTRL`+`D`.
What is the shortcut in Visual Studio 2017?<issue_comment>username_1: Cmd + Shift + L when selecting the element of which you want to select all occurrences.
Upvotes: 0 <issue_comment>username_2: Search and replace in a given file `CTRL + H`
Search and replace in all files for a solution `CTRL + SHIFT + H`
Not entirely sure this is what you need, but this should be able to help you :)
Upvotes: 1 <issue_comment>username_3: I think what you want is `CTRL`+`R`, `CTRL`+`R`. What would suit the case best is refactor-renaming, which makes all the usages of your selection change with it too, so you don't need to change them all by hand or overwrite content which is spelled identically but should not have been targeted. If ever in need of a binding, keep this [Visual Studio Binding Cheat Sheet](http://visualstudioshortcuts.com/2017/)
Upvotes: 2 [selected_answer]
|
2018/03/15
| 660 | 1,944 |
<issue_start>username_0: I would like to write a function that accepts a dictionary of legend parameters before outputting a plot. I've included a small example below.
**Imports**
```
import numpy as np
import matplotlib.pyplot as plt
```
**Data**
```
x = np.linspace(0, 100, 501)
y = np.sin(x)
```
**Legend Parameters**
```
legend_dict = dict(ncol=1, loc='best', fancybox=True, shadow=True)
label = 'xy data sample'
# label = None
```
**Plot**
```
if label is not None:
plt.plot(x, y, label=label, **legend_dict)
else:
plt.plot(x, y)
plt.show()
```
This gives me the following error (which can be avoided by uncommenting `label=None`).
```
plt.plot(x, y, label=label, **legend_dict) # this line
AttributeError: Unknown property shadow # this error
```
Why doesn't this approach work?<issue_comment>username_1: You should specify the properties of the legend in a call to [`plt.legend()`](https://matplotlib.org/api/_as_gen/matplotlib.pyplot.legend.html), not in [`plt.plot()`](https://matplotlib.org/api/_as_gen/matplotlib.pyplot.plot.html):
```
x = np.linspace(0, 100, 501)
y = np.sin(x)
legend_dict = dict(ncol=1, loc='best', fancybox=True, shadow=True)
label = 'xy data sample'
plt.plot(x, y, label=label)
plt.legend(**legend_dict)
plt.show()
```
Which gives:
[](https://i.stack.imgur.com/ZBata.png)
Upvotes: 2 <issue_comment>username_2: You are trying to pass the legend kwargs to the plot function. Need to call `.legend()` seperately.
```
import numpy as np
import matplotlib.pyplot as plt
x = np.linspace(0, 100, 501)
y = np.sin(x)
legend_dict = dict(ncol=1, loc='best', fancybox=True, shadow=True)
label = 'xy data sample'
#label = None
plt.plot(x, y, label=label)
plt.legend(**legend_dict)
plt.show()
```
Note also that you do not need the if statement: `label` being None is fine, as that is the default!
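If you want the helper function the question describes, the fix folds in naturally. A minimal sketch (the `plot_with_legend` name and signature are made up for illustration):

```python
import matplotlib
matplotlib.use("Agg")  # headless backend so the sketch runs without a display
import numpy as np
import matplotlib.pyplot as plt

def plot_with_legend(x, y, label=None, legend_kwargs=None):
    # `label` goes to plot(); the legend options go to legend()
    plt.plot(x, y, label=label)
    if label is not None:
        plt.legend(**(legend_kwargs or {}))

x = np.linspace(0, 100, 501)
legend_dict = dict(ncol=1, loc='best', fancybox=True, shadow=True)
plot_with_legend(x, np.sin(x), label='xy data sample', legend_kwargs=legend_dict)
```

Calling it with `label=None` simply skips the legend, matching the behaviour of the original if/else.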
Upvotes: 4 [selected_answer]
|
2018/03/15
| 589 | 2,208 |
<issue_start>username_0: I have a variable named `inScreenshotMode` in AppDelegate:
```
#if DEBUG
var inScreenshotMode: Bool {
return UserDefaults.standard.bool(forKey: "abc")
}
#else // Release
let inScreenshotMode = false
#endif
```
So how can I optimise the below code?
```
let totalValue = appDelegate?.inScreenshotMode == true ? basicInfo.value : configuration.value
```
If I do
```
let totalValue = appDelegate?.inScreenshotMode ? basicInfo.value : configuration.value
```
I getting error:
>
> **Value of optional type 'Bool?' not unwrapped; did you mean to use '!' or '?'? Replace 'appDelegate?.inScreenshotMode' with
> '(appDelegate?.inScreenshotMode)!'**
>
>
>
What is the best solution?<issue_comment>username_1: The problem is that `appDelegate?.inScreenshotMode` is indeed an optional which means that it can return `nil`. Since `nil` is **Equatable**, the part `appDelegate?.inScreenshotMode == true` will always return either `true` or `false`. But using `appDelegate?.inScreenshotMode` on its own can return `true`, `false`, or `nil`.
Upvotes: 2 [selected_answer]<issue_comment>username_2: Based on your declaration for `appDelegate` it seems that you are declaring it as an optional (not sure what's the reason of this), what are you facing is called *[Optional Chaining](https://developer.apple.com/library/content/documentation/Swift/Conceptual/Swift_Programming_Language/OptionalChaining.html)*:
>
> Optional chaining is a process for querying and calling properties,
> methods, and subscripts on an optional that might currently be nil. If
> the optional contains a value, the property, method, or subscript call
> succeeds; if the optional is nil, the property, method, or subscript
> call returns nil. Multiple queries can be chained together, and the
> entire chain fails gracefully if any link in the chain is nil.
>
>
>
Which means that you have to make sure that `appDelegate` is not `nil` instead of `(appDelegate?.inScreenshotMode)!`, I would recommend to do optional binding:
```
if let unwrappedAppDelegate = appDelegate {
let totalValue = unwrappedAppDelegate.inScreenshotMode ? basicInfo.value : configuration.value
}
```
Upvotes: 2
|
2018/03/15
| 389 | 1,312 |
<issue_start>username_0: I have this basic script:
HTML:
```
Button
```
TS:
```
buttonclick()
{
this.test = true;
}
```
**What I expected to happen:**
clicking the button shows the image.
**What happens:**
clicking the button makes a small grey rectangle appear where the image should be, as if the img source couldn't be found. Leaving the view and returning to it does make the image appear.
What am I doing wrong or how can I fix this issue?
Any help is much appreciated!
**Update:**
After inspection of the source code, I notice a class `img-unloaded` is assigned to the image the first time. When I leave the view and return, the class changes to `img-loaded`, making the image appear. I guess this is an Ionic thing...? How can I avoid this behaviour? Using the `img` tag instead of `ion-img` resolves the issue, but I'd rather use the `ion-img` tag.<issue_comment>username_1: Try using the img tag.
```

```
Upvotes: 0 <issue_comment>username_2: You should use `![]()`, as the [doc](https://ionicframework.com/docs/api/components/img/Img/#img) states:
>
> Note: ion-img is only meant to be used inside of virtual-scroll
>
>
>
Here is an [example](https://stackblitz.com/edit/ionic-tdpdth?file=pages/home/home.html) that works
Upvotes: 3 [selected_answer]
|
2018/03/15
| 1,339 | 3,448 |
<issue_start>username_0: ```
#include <iostream>
using namespace std;
int main()
{
cout << -0.152454345 << " " << -0.7545 << endl;
cout << 0.15243 << " " << 0.9154878774 << endl;
}
```
Outputs:
```
-0.152454 -0.7545
0.15243 0.915488
```
I want the output to look like this:
```
-0.152454 -0.754500
0.152430 0.915488
```
My solution:
```
#include <iostream>
#include <iomanip>
using namespace std;
int main()
{
cout << fixed << setprecision(6) << setw(9) << setfill(' ') << -0.152454345 << " ";
cout << fixed << setprecision(6) << setw(9) << setfill(' ') << -0.7545 << endl;
cout << fixed << setprecision(6) << setw(9) << setfill(' ') << 0.15243 << " ";
cout << fixed << setprecision(6) << setw(9) << setfill(' ') << 0.9154878774 << endl;
}
```
The output is good, but the code is terrible. What can be done?
Here is my code <https://ideone.com/6MKd31><issue_comment>username_1: Specifying an output format is always terrible. Anyway, you can just avoid repeating the stream modifiers that are preserved across input/output and repeat only those that are transient (`setw`):
```
// change state of the stream
cout << fixed << setprecision(6) << setfill(' ');
// output data
cout << setw(9) << -0.152454345 << " ";
cout << setw(9) << -0.7545 << endl;
cout << setw(9) << 0.15243 << " ";
cout << setw(9) << 0.9154878774 << endl;
```
Upvotes: 3 <issue_comment>username_2: This might not quite be what you want, but I thought I'd throw it out there anyway as it's very simple.
If you can tolerate a leading `+` for non-negative numbers then you can use
```
std::cout << std::showpos << /*ToDo - the rest of your output here*/
```
At least then everything lines up, with minimal effort.
Upvotes: 2 <issue_comment>username_3: Had a similar problem with streams once (but more complex, though), you could use a separate formatter object to avoid code repeating:
```
class F
{
double v;
public:
F(double v) : v(v) { };
friend ostream& operator<<(ostream& s, F f)
{
return s << setw(9) << f.v;
}
};
```
Usage:
```
std::cout << fixed << setprecision(6) << setfill(' '); // retained, see Jean's answer
std::cout << F(12.10) << ' ' << F(-10.12) << std::endl;
```
Depending on your needs and how frequently you use it, it might be overkill or not - decide yourself...
Upvotes: 2 <issue_comment>username_4: You can write a function for it:
```
#include <iostream>
#include <iomanip>

std::ostream& format_out(std::ostream& os, int precision, int width, char fill, double value)
{
return os << std::fixed << std::setprecision(precision) << std::setw(width) << std::setfill(fill)
<< value;
}
int main()
{
format_out(std::cout, 6, 9, ' ', -0.152454345) << '\n';
format_out(std::cout, 6, 9, ' ', -0.7545) << '\n';
format_out(std::cout, 6, 9, ' ', 0.15243) << '\n';
format_out(std::cout, 6, 9, ' ', 0.9154878774) << '\n';
}
```
Upvotes: 2 <issue_comment>username_5: A bit of necroposting, but it might still be useful to someone in the future. We can use a lamda expression (or the equivalent function) to make the formatting less painful.
```
#include <iostream>
#include <iomanip>
int main()
{
using namespace std;
auto format = [](std::ostream &os) -> std::ostream& {
return os << fixed << setprecision(6) << setw(9) << setfill(' ');
};
cout << format << -0.152454345 << " " << format << -0.7545 << "\n";
cout << format << 0.15243 << " " << format << 0.9154878774 << endl;
}
```
Output:
```
-0.152454 -0.754500
0.152430 0.915488
```
Upvotes: 2
|
2018/03/15
| 1,047 | 2,983 |
<issue_start>username_0: I add the code:
```
JUnitSampler junitSampler=new JUnitSampler();
String UserId=junitSampler.getThreadContext().getVariables().get("username");
```
to my JUnit code, I see red squiggly error lines in Eclipse at `import org.apache.jmeter.protocol.java.sampler.JUnitSampler;`.
How to clear the following error:
|
2018/03/15
| 353 | 1,127 |
<issue_start>username_0: I implemented a react-native app, it runs ok.
But 'Subscription is not defined' error happened when enabling 'Debug js remotely' in the simulator.
[](https://i.stack.imgur.com/7Wk9N.png)<issue_comment>username_1: Something's wrong with realm. This workaround allows you to run the debugger normally.
`/node\_modules/realm/lib/browser/index.js:150` :
```
...
const Sync = {
User,
Session,
//Subscription, <- Comment this
};
```
And in `/node\_modules/realm/lib/extensions.js:132` :
```
//Object.defineProperties(realmConstructor.Sync.User.prototype, getOwnPropertyDescriptors(userMethods.instance));
```
Upvotes: 3 <issue_comment>username_2: This is a [bug with Realm](https://github.com/realm/realm-js/issues/1711). Try using version 2.2.15 rather than 2.3.0:
`yarn add realm@2.2.15`
Upvotes: 3 [selected_answer]<issue_comment>username_3: I solved the same problem:
```
npm install --save realm@2.2.8
```
and:
```
react-native link
```
The system re-downloaded cocoa and re-compiled the project
Upvotes: -1
|
2018/03/15
| 359 | 1,391 |
<issue_start>username_0: I'm kinda new to Cordova and cross-platform programming. I've followed a tutorial on the internet but I'm stuck with installing a plugin with plugman. I'd like to use the camera on my app so I tried to run this command line:
```
plugman install --platform android --project hello --plugin cordova-plugin-camera
```
And I get this error message:
```
cordova-android version not detected (lacks script "/Users/XXX/Documents/cordova/hello/hello/cordova/version" ), continuing.
Unable to load PlatformApi from platform. Error: Cannot find module '/Users/XXX/Documents/cordova/hello/hello/cordova/Api.js'
The platform "android" does not appear to be a valid cordova platform. It is missing API.js. android not supported.
Failed to install 'cordova-plugin-camera': Error: Your android platform does not have Api.js
```
I've tried a couple of things I saw on forums but I can't find any solution.
I'm running Cordova on version 8.0.0 and have all the requirements needed
Thank you!<issue_comment>username_1: If the project you are working on is Ionic, the command should be:
```
ionic cordova plugin add cordova-plugin-camera
```
Upvotes: 0 <issue_comment>username_2: Don't use plugman to install the plugins, use Cordova CLI to install the plugin.
From your project folder, run
```
cordova plugin add cordova-plugin-camera
```
Upvotes: 3 [selected_answer]
|
2018/03/15
| 802 | 2,861 |
<issue_start>username_0: I just want to ask how to access a variable in a single file component of Vue.
When I try to access my app in the browser, Vue.js reports an error.
>
> Error in mounted hook: "ReferenceError: EventBus is not defined"
>
>
>
My code below.
I implement this on the app.js
```
window.Vue = require('vue');
var EventBus = new Vue();
Vue.component('transactionslist', require('./components/TransactionList.vue'));
const app = new Vue({
el: '#transaction',
methods: {
getTransactions: function () {
$.getJSON("/guard/station/transactions?search=" + this.Search, function (_transactions) {
this.Transactions = JSON.parse(JSON.stringify(_transactions));
}.bind(this));
}
},
data() {
return {
Search: '',
Transactions: [],
}
},
mounted: function () {
this.getTransactions();
}
});
```
and this is my vue component (TransactionList.vue)
```
| Transfer Code | Orgin | Destination | Date Created |
| --- | --- | --- | --- |
export default {
props: ['data'],
created() {
this.transactions = this.$children;
},
data() {
return {
transactions: [],
Transaction: []
}
},
methods: {
isSelectedRow: function (payload) {
EventBus.$emit('showSelected', payload);
this.transactions.forEach(transaction => {
transaction.isActive = (transaction.tscode == payload.tscode);
});
},
}
}
```
I developed the application using Laravel, and I use one of Laravel's features, Laravel Mix.<issue_comment>username_1: Just use this:
```
window.EventBus = new Vue();
```
Upvotes: 3 [selected_answer]<issue_comment>username_2: Create the event bus in it's own file:
```
// bus.js
import Vue from 'vue'
const bus = new Vue()
export default bus
```
Then import it into the components you want to use it in:
```
import bus from 'bus'
export default {
...
methods: {
myMethod () {
bus.$emit('my-event', 'test')
}
}
...
}
```
Upvotes: 1 <issue_comment>username_3: You just defined the `EventBus` in the app.js file.
It is not accessible in the TransactionList.vue component as it has no reference to the `EventBus`
So export the `EventBus` in your app.js as
```
export const EventBus = new Vue();
```
And then import the `EventBus` in your components script as
```
import {EventBus} from './path/to/app.js'
export default {
}
```
See es6 [import](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Statements/import) and [export](https://www.google.co.in/url?q=https://developer.mozilla.org/en/docs/web/javascript/reference/statements/export&sa=U&ved=0ahUKEwjB6fnMgO7ZAhUBOY8KHWqcDAoQFggSMAE&usg=AOvVaw3vdMfU_sbr_YJcJPstSAUZ)
Upvotes: 2
|
2018/03/15
| 853 | 2,682 |
<issue_start>username_0: Good day,
I need to run a job in **Jenkins monthly, two days before the end of the month**.
The question is similar to question: [run cron job on end of every month](https://stackoverflow.com/questions/8352036/run-cron-job-on-end-of-every-month), but accepted response is not acceptable in my case. (Because modifying production code is out of scope and people would still like to execute it outside set hours.)
I found similar issues and potential solution in generic crontab environment:
```
0 23 27-30 * * [ $(date +\%d -d "2 days") == 01 ]
```
But Jenkins does not support this sort of syntax, giving me error message:
>
> Invalid input: "0 23 27-30 \* \* [ $(date +\%d -d "2 days") == 01 ]": line 1:15: expecting EOF, found ' '
>
>
>
Any Jenkins gurus to give pointers?
**Edit**:
With help of answer below, I came up with following solution:
* Add string parameter Autorun
* Set Crontab to `H 4 26-30 * *`
* Modified Execute script:
```
if (($Autorun == 0)) || (( [ $(date +\%d -d "2 days") == 01 ] && $Autorun == 1 )); then
My_execute stuff
else
echo "Dummy run, autorun only 2 days before end of month."
exit 1
fi
```<issue_comment>username_1: Unfortunately Jenkins Build triggers do not support writing bash script inside them. There are still options though, depending on how strict your requirement of "2 days before end of month" is.
**Option 1:**
Only run my job once per month:
```
@monthly
```
**Option 2:**
Run the job on the second-to-last day of each month:
`0 23 27-30 * *`
**Option 3**:
Run it as 3 separate jobs as in [this](https://stackoverflow.com/questions/6139189/cron-job-to-run-on-the-last-day-of-the-month)
This would then require you to add the date check at the start of the script portion of the job and exiting if it is not the second to last day of the month.
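For clarity, that date check can be sketched outside of cron syntax. The helper below is hypothetical Python, not part of Jenkins (the OP's shell one-liner `[ $(date +%d -d "2 days") == 01 ]` does the same thing):

```python
from datetime import date, timedelta

def is_two_days_before_month_end(today):
    # Adding two days rolls over to the 1st of the next month exactly
    # when `today` is the second-to-last day of its month.
    return (today + timedelta(days=2)).day == 1

# January has 31 days, so the 30th is two days before the month end.
print(is_two_days_before_month_end(date(2018, 1, 30)))  # True
```

The same predicate handles February and leap years without any per-month table.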
For more information you can "configure" a job and click on the blue question mark symbol, which shows lots of information on how Jenkins interprets that cron syntax.
Upvotes: 3 [selected_answer]<issue_comment>username_2: It's not necessary to set up three jobs. One job can have multiple cron expressions in Jenkins:
[](https://i.stack.imgur.com/p80rN.png)
The line on the bottom shows this should work as desired.
So you could use the following:
```
0 23 30 1,3,5,7,8,10,12 *
0 23 29 4,6,9,11 *
0 23 27-28 2 *
```
This still has the drawback for February that it runs twice in leap years, which is harder to avoid. If that's a problem, but it's not an issue to run one day early in leap years, you could just use `27` instead of `27-28`.
Upvotes: 2
|
2018/03/15
| 627 | 2,041 |
<issue_start>username_0: I have a problem with this script:
[](https://i.stack.imgur.com/aEvcq.jpg)
```
UPDATE Orders AS o
SET o.menuId = 14259
WHERE o.menuId = 14422
AND o.userId =
(SELECT Id FROM users where Group =
(SELECT Id FROM groups where eatGroupId = 4))
```
error: Subquery returns more than 1 row // Because there are many groups and users. Does anyone know if it's somehow possible to make this query work? Or other alternatives?
|
2018/03/15
| 646 | 2,247 |
<issue_start>username_0: Is it possible to change the id in the URL in Yii2 basic to something else? My actual URL is
```
http://localhost:8585/yii40/wfp/web/post/view?id=368
```
I want to change it to
```
http://localhost:8585/yii40/wfp/web/post/view?post=368
```
My View is
```
public function actionView($id)
{
return $this->render('view', [
'model' => $this->findModel($id),
]);
}
```<issue_comment>username_1: It seems that you only want to change the name of `GET` parameter (id->post).
Assuming you have default app config, you need to find the view method (called actionView) in appropriate controller (PostController.php).
The method either takes $id parameter as its argument (like `public function actionView($id)`) or retrieves 'id' from $\_GET superglobal array later (like `$modelId = $_GET['id'];` or `$modelId = Yii::$app->request->get('id');`)
This is the place where you change it.
To get a better idea of Yii2 app structure and ways to handle requests please see <http://www.yiiframework.com/doc-2.0/guide-index.html#application-structure>
Upvotes: 1 <issue_comment>username_2: It is related to the link which, on clicking, lands on this action. It could either be
* inside your `GridView`, **`/your_project_root/views/post/index.php`** file from where you are clicking to view the post detail by submitting the id.
* Or a normal link in your view somewhere
**1)** For GridView go to your action column and change the `['class' => 'yii\grid\ActionColumn']` to the following
```
[ 'class' => 'yii\grid\ActionColumn' ,
'header' => 'Actions' ,
'template'=>'{view}{update}{delete}',
'buttons'=>[
'view'=>function($url,$model){
$html = '';
return Html::a($html,["post/view",'post'=>$model->id]);
}
],
] ,
```
and change the `actionView($id)` in your `PostController` to `actionView($post)` and replace all occurrences of `$id` with `$post` inside the action code.
**2)** if it is a normal link then you have to just change the `url` for that link like below
```
Html::a($html,["post/view",'post'=>$model->id]);
```
where `$model` should be replaced by the appropriate variable.
Upvotes: 3 [selected_answer]
|
2018/03/15
| 499 | 1,653 |
<issue_start>username_0: In our BizTalk Server, administrators have installed a proxy.
This proxy is only for a few URLs, and most URLs have to bypass it.
We set the following property in BTSNTSvc64.exe.config:
```
```
How can we set the bypasslist to include most URLs that don't need a proxy, excluding only the few that do?<issue_comment>username_1: So, this is not your problem and it's not a BizTalk issue. It's the Administrator's issue.
You need to ask them how to configure apps to bypass their proxy. Consider, it wouldn't be very useful if any person could just bypass it with a config file.
Meaning, they need to tell you how to exempt specific apps from whatever proxy process they're using. If they refuse, it's still *not your problem*, it's for your manager or the business owner to address.
Upvotes: 0 <issue_comment>username_2: How about adding everything to the bypass list and then removing the URLs that you want to go through the proxy?
```
```
Upvotes: 2 <issue_comment>username_3: Add the following regex to the bypass list:
```
^((?!domain1)(?!domain2\.com).)*$
```
It will match all domains except those defined above:
```
www.contoso.com
api.contoso.com
www.domain1.com
sub.domain1.net
http2://www.domain2.com
sub.domain2.com
sub.domain2.edu
http://sub.domain2.net
sub.domain1.edu
http://sub.domain1.net
10.10.10.10
192.168.1.1
www.microsoft.com
api.google.com
```
I have created a test: <https://regexr.com/52o30>
Eg:
```
```
Here, `domain1` will be completely ignored with any TLD, but `domain2` will be ignored only when the full domain name is present, like `domain2.com` (*with any preceding characters or pre-domain for both domains*).
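To sanity-check the pattern without regexr, here is a quick sketch using Python's `re` module (an illustration only; the lookahead construct used here behaves the same way in .NET and Python regex):

```python
import re

# Match (i.e. bypass the proxy) only when the host contains neither
# "domain1" nor the exact string "domain2.com".
pattern = re.compile(r'^((?!domain1)(?!domain2\.com).)*$')

hosts = [
    "www.contoso.com",   # no blocked domain -> matches (bypasses the proxy)
    "www.domain1.com",   # contains "domain1" -> no match (uses the proxy)
    "sub.domain2.com",   # contains "domain2.com" -> no match
    "sub.domain2.edu",   # "domain2" without ".com" -> still matches
]
for host in hosts:
    print(host, "bypass" if pattern.match(host) else "proxy")
```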
Upvotes: 2
|
2018/03/15
| 543 | 1,818 |
<issue_start>username_0: descBox class have top value of main Box and thats OK . here i want to div with formBox Class place exactly bottom of the div with class descBox but my problem is it seems the descBox class doesn't occupy its own container and my formBox class go inside to it not the bottom of it.what is the best way to resolve?
```css
.descBox {
width: 95%;
margin: auto;
position: relative;
top: 20%;
text-align: center;
}
```
```html
### this is description
```
|
2018/03/15
| 567 | 1,856 |
<issue_start>username_0: There are many implementations of how to check whether a String value is null, and I'm really confused about which to use!
These are my searches so far:
1. `if(myString == null){}`
2. `Objects.equals(null, myString);`
3. `ObjectUtils.isEmpty(myString)`
4. `myString.equals(null)`
5. `myString.compareTo(null);`
---
**NOTE:** This is all about performance, reliable coding, fail-safe and other security purposes!
**Updated:** myString is dynamic; what if it were null? Some of the implementations above would throw a NullPointerException!<issue_comment>username_1: Just use a simple `==`. There's no need to overcomplicate this.
Upvotes: 3 <issue_comment>username_2: Another option: if you are using Java 8 you can use [Optional::ofNullable](https://docs.oracle.com/javase/8/docs/api/java/util/Optional.html#ofNullable-T-)
```
Optional.ofNullable(myString).isPresent()// true if not null, false if null
```
You can even use :
```
Optional.ofNullable(myString)
.orElseThrow(() -> new Exception("If null throw an exception"));
```
There are many options; just read the documentation.
---
But as [username_1](https://stackoverflow.com/a/49295321/5558072) mention in his answer `==` is enough in your case.
Upvotes: 3 <issue_comment>username_3: 1. `if(myString == null)`
Easiest and right way.
2. `Objects.equals(null, myString)`
A right way; its implementation is based on 1.
3. `ObjectUtils.isEmpty(myString)`
Not sure which `ObjectUtils` you are working with, but it seems to check `myString` is empty, not null.
4. `myString.equals(null)`
This does not work when `myString` is null, NPE will be thrown.
5. `myString.compareTo(null)`
This does not work when `myString` is null, NPE will be thrown.
Upvotes: 3 <issue_comment>username_4:
```
String results;
if (results == null || results.length() == 0) {
}
```
Upvotes: -1
|
2018/03/15
| 536 | 1,839 |
<issue_start>username_0: According to this pull request <https://github.com/imazen/resizer/pull/178>
you fixed the issue with AWS IAM roles (EC2 Profiles), but these changes were not merged to master.
Are you going to release this feature? It affects the security of the application and looks important to fix.<issue_comment>username_1: This was merged two years ago, and has been shipped.
Upvotes: 0 <issue_comment>username_2: Actually it was not merged into master. Could you please check this <https://github.com/imazen/resizer/blob/master/Plugins/S3Reader2/S3Reader.cs> ?
Also, could you please clarify from which branch you ship release versions? I expected that you don't ship official releases from the develop branch, and that's why that pull request is not available in our version of the S3Reader2 plugin.
We're using S3Reader2 with the following header:
```
// Type: ImageResizer.Plugins.S3Reader2.S3Reader2
// Assembly: ImageResizer.Plugins.S3Reader2, Version=4.0.0.0, Culture=neutral, PublicKeyToken=null
// MVID: FBCB569C-2711-4F60-A416-B504F816CEA7
```
It does not contain an additional if else statement in the constructor:
```
public S3Reader2(NameValueCollection args)
: this()
{
this.LoadConfiguration(args);
this.UseHttps = args.Get("useHttps", args.Get("useSsl", this.UseHttps));
this.Region = args.GetAsString("region", "us-east-1");
this.SetAllowedBuckets((IEnumerable) args.GetAsString("buckets", "").Split(new char[1]
{
','
}, StringSplitOptions.RemoveEmptyEntries));
if (!string.IsNullOrEmpty(args["accessKeyId"]) && !string.IsNullOrEmpty(args["secretAccessKey"]))
this.S3Client = new AmazonS3Client(args["accessKeyId"], args["secretAccessKey"], this.s3config);
else
this.S3Client = new AmazonS3Client((AWSCredentials) null, this.s3config);
}
```
Thank you in advance,
Alex.
Upvotes: 1
|
2018/03/15
| 1,346 | 4,758 |
<issue_start>username_0: I have installed opencv-python and dlib. My area of interest is working on facial recognition, so basically I am trying to extract faces, where I have an IP cam or a webcam. I am not able to access the cam however I try. The piece of code I am using is
```
import numpy as np
import cv2
cap = cv2.VideoCapture('http://192.168.43.1:8080/video')
#cap = cv2.VideoCapture('http://192.168.43.1:8080/onvif/device_service')
#cap = cv2.VideoCapture('rtsp://192.168.43.1:8080/h264_ulaw.sdp')
#cap = cv2.VideoCapture('rtsp://192.168.43.1:8080/h264_pcm.sdp')
print(cap.isOpened())
while(True):
# Capture frame-by-frame
ret, frame = cap.read()
# Our operations on the frame come here
if ret is True:
gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
# Display the resulting frame
cv2.imshow('frame',cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY))
if cv2.waitKey(1) & 0xFF == ord('q'):
break
# When everything done, release the capture
cap.release()
cv2.destroyAllWindows()
```
The print statement is always False, however I try.
On my local Windows machine it is working fine. The main purpose of using colab.research.google.com is that my machine doesn't support cmake properly for dlib and face\_recognition<issue_comment>username_1: The python code you're executing via Colab is being executed on a VM; there's no way for that Python process to (safely) communicate directly with a webcam on your local machine.
Upvotes: 2 <issue_comment>username_2: Maybe using a public IP address helps open and capture video from the camera using CV.
Upvotes: 0 <issue_comment>username_3: >
> Because whenever you run your code on Colab, it runs on their server, not on your local
> machine. And whatever you have pasted is your local IP, which can be resolved only within your network.
>
>
>
1. First, you have to get your public IP by using the `ipconfig` command.
2. Actually, when you paste the URL, Google's server tries to
render the video for you. But your workstation, on which the webcam
feed is available, is behind your gateway (your private network).
3. Which means that it is not accessible outside of your local network.
That is the default behaviour, to avoid IP exhaustion and
to provide security for your machine.
Still, if you want to expose your machine to the outer world (which is risky), you can look into NAT (Network Address Translation).
* You have to check with your ISP whether you have a static IP, because
usually whenever you switch your network on and off, a new dynamic IP
is assigned to your machine. So whenever you switch the network, you have to change the IP in the code.
* Take a look here on how to set it up: [Link](https://medium.com/botfuel/how-to-expose-a-local-development-server-to-the-internet-c31532d741cc)
Upvotes: 0 <issue_comment>username_4: As others have said, you cannot access your IP webcam directly, since Google Colab runs on a virtual machine. Here is what I did to access the webcam of my laptop.
```
from IPython.display import display, Javascript
from google.colab.output import eval_js
import urllib.request
import numpy as np
import cv2

def take_photo(filename='photo.jpg', quality=0.8):
js = Javascript('''
async function takePhoto(quality) {
const div = document.createElement('div');
const video = document.createElement('video');
video.style.display = 'block';
const stream = await navigator.mediaDevices.getUserMedia({video: true});
document.body.appendChild(div);
div.appendChild(video);
video.srcObject = stream;
await video.play();
// Resize the output to fit the video element.
google.colab.output.setIframeHeight(document.documentElement.scrollHeight, true);
const canvas = document.createElement('canvas');
canvas.width = video.videoWidth;
canvas.height = video.videoHeight;
canvas.getContext('2d').drawImage(video, 0, 0);
stream.getVideoTracks()[0].stop();
div.remove();
return canvas.toDataURL('image/jpeg', quality);
}
''')
display(js)
data = eval_js('takePhoto({})'.format(quality))
resp = urllib.request.urlopen(data)
image = np.asarray(bytearray(resp.read()), dtype="uint8")
image = cv2.imdecode(image, cv2.IMREAD_COLOR)
# return the image
return image
# to capture an image
img = take_photo()
```
The take\_photo() function uses javascript to access your webcam connected to your system. Hopefully it helps. :)
Upvotes: 0 <issue_comment>username_5: You can use ngrok to expose your IP publicly; then you can access your camera through the ngrok URL.
1. Install ngrok - <https://ngrok.com/download>
2. Run it with: `./ngrok http http://192.168.43.1:8080/video`
It will generate an ngrok URL which will be available for 12 hours.
You can use it by simply passing that URL in as **cv2.VideoCapture(ngrok\_ip)**.
Upvotes: 0
|
2018/03/15
| 339 | 1,094 |
<issue_start>username_0: How can I clear a form in HTML/JS without loading external libraries?
Here's what I've tried so far and how they fail:
```
```
This doesn't clear the form, only resets it to its original value.
```
form.getElementsByTagName('textarea').innerHTML = ''
```
This only clears the form if it has not been edited. If the form has been edited, it does nothing. Here's a [JSfiddle](http://jsfiddle.net/m6bcs/) where these 2 have been combined, but if you edit the textarea and click clear, you can see it doesn't work ([source of JSfiddle](https://stackoverflow.com/a/6029442/4490400)).<issue_comment>username_1: Use `text[i].value` instead of `text[i].innerHTML`:
```js
function resetForm(form) {
  // clearing inputs
  var inputs = form.getElementsByTagName('input');
  for (var i = 0; i < inputs.length; i++) {
    var type = inputs[i].type;
    if (type === 'checkbox' || type === 'radio') inputs[i].checked = false;
    else inputs[i].value = '';
  }
  // clearing textareas: set .value, not .innerHTML
  var texts = form.getElementsByTagName('textarea');
  for (var i = 0; i < texts.length; i++) texts[i].value = '';
  // clearing selects
  var selects = form.getElementsByTagName('select');
  for (var i = 0; i < selects.length; i++) selects[i].selectedIndex = -1;
}
```
```html
<form>
  <input type="checkbox"> Check Me
  <textarea>some text</textarea>
  <select>
    <option>Volvo</option>
    <option>Saab</option>
    <option>Mercedes</option>
    <option>Audi</option>
  </select>
</form>
```
Upvotes: 3 [selected_answer]<issue_comment>username_2: Try the following edited version; it clears the textarea: <https://gist.github.com/anonymous/76b3c08a2d38ddafc15791501173a6b8>
Upvotes: -1
|
2018/03/15
| 1,729 | 5,495 |
<issue_start>username_0: I want to pass the output of ConvLSTM and Conv2D to a Dense layer in Keras. What is the difference between using global average pooling and flatten?
Both are working in my case.
```python
model.add(ConvLSTM2D(filters=256,kernel_size=(3,3)))
model.add(Flatten())
# or model.add(GlobalAveragePooling2D())
model.add(Dense(256,activation='relu'))
```<issue_comment>username_1: The fact that both seem to work doesn't mean they do the same thing.
Flatten will take a tensor of any shape and transform it into a one dimensional tensor (plus the samples dimension) but keeping all values in the tensor. For example a tensor (samples, 10, 20, 1) will be flattened to (samples, 10 \* 20 \* 1).
GlobalAveragePooling2D does something different. It applies average pooling on the spatial dimensions until each spatial dimension is one, and leaves other dimensions unchanged. In this case values are not kept as they are averaged. For example a tensor (samples, 10, 20, 1) would be output as (samples, 1, 1, 1), assuming the 2nd and 3rd dimensions were spatial (channels last).
Upvotes: 7 [selected_answer]<issue_comment>username_2: Flattening is a no-brainer: it simply converts a multi-dimensional object to a one-dimensional one by re-arranging the elements.
GlobalAveragePooling, on the other hand, is a methodology used for a more compact representation of your vector. It can be 1D/2D/3D. Ordinary pooling uses a window which moves across the object and pools the data by averaging it (AveragePooling) or picking the max value (MaxPooling), and padding is needed to handle the corner cases; the global variants instead collapse each whole spatial dimension to a single value, so no window size or padding has to be chosen.
Both are simpler ways of taking the spatial structure into account.
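A library-free sketch of the two transformations described above (assuming a channels-last input given as nested lists of shape (H, W, C); this is illustrative only, not the Keras implementation):

```python
def flatten(t):
    """[H][W][C] nested lists -> one flat list of H*W*C values (all kept)."""
    return [v for row in t for pix in row for v in pix]

def global_avg_pool(t):
    """[H][W][C] -> C per-channel means (individual values are averaged away)."""
    H, W, C = len(t), len(t[0]), len(t[0][0])
    return [sum(t[i][j][c] for i in range(H) for j in range(W)) / (H * W)
            for c in range(C)]

x = [[[1.0, 2.0] for _ in range(4)] for _ in range(3)]  # shape (3, 4, 2)
print(len(flatten(x)))     # 24 = 3*4*2 values, all preserved
print(global_avg_pool(x))  # [1.0, 2.0]: one mean per channel
```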
Upvotes: 3 <issue_comment>username_3: You can test **the difference between Flatten and GlobalPooling** on your own by making a comparison with numpy, if you want to be more confident.
We make a demonstration using, as input, a batch of images with this shape `(batch_dim, height, width, n_channel)`
```
import numpy as np
from tensorflow.keras.layers import *
batch_dim, H, W, n_channels = 32, 5, 5, 3
X = np.random.uniform(0,1, (batch_dim,H,W,n_channels)).astype('float32')
```
* `Flatten` accepts as input tensor of at least 3D. It operates a reshape of the input in 2D with this format `(batch_dim, all the rest)`. In our case of 4D, it operates a reshape in this format `(batch_dim, H*W*n_channels)`.
```
np_flatten = X.reshape(batch_dim, -1) # (batch_dim, H*W*n_channels)
tf_flatten = Flatten()(X).numpy() # (batch_dim, H*W*n_channels)
(tf_flatten == np_flatten).all() # True
```
* `GlobalAveragePooling2D` accepts as input 4D tensor. It operates the mean on the height and width dimensionalities for all the channels. The resulting dimensionality is 2D `(batch_dim, n_channels)`. `GlobalMaxPooling2D` makes the same but with max operation.
```
np_GlobalAvgPool2D = X.mean(axis=(1,2)) # (batch_dim, n_channels)
tf_GlobalAvgPool2D = GlobalAveragePooling2D()(X).numpy() # (batch_dim, n_channels)
(tf_GlobalAvgPool2D == np_GlobalAvgPool2D).all() # True
```
Upvotes: 3 <issue_comment>username_4: What a Flatten layer does
-------------------------
After convolutional operations, [`tf.keras.layers.Flatten`](https://www.tensorflow.org/api_docs/python/tf/keras/layers/Flatten) will reshape a tensor into `(n_samples, height*width*channels)`, for example turning `(16, 28, 28, 3)` into `(16, 2352)`. Let's try it:
```
import tensorflow as tf
x = tf.random.uniform(shape=(100, 28, 28, 3), minval=0, maxval=256, dtype=tf.int32)
flat = tf.keras.layers.Flatten()
flat(x).shape
```
```
TensorShape([100, 2352])
```
What a GlobalAveragePooling layer does
--------------------------------------
After convolutional operations, what the [`tf.keras.layers.GlobalAveragePooling`](https://www.tensorflow.org/api_docs/python/tf/keras/layers/GlobalAveragePooling2D) layer does is average all the values *over the spatial axes, keeping only the last axis*. This means that the resulting shape will be `(n_samples, last_axis)`. For instance, if your last convolutional layer had 64 filters, it would turn `(16, 7, 7, 64)` into `(16, 64)`. Let's run the test, after a few convolutional operations:
```
import tensorflow as tf
x = tf.cast(
tf.random.uniform(shape=(16, 28, 28, 3), minval=0, maxval=256, dtype=tf.int32),
tf.float32)
gap = tf.keras.layers.GlobalAveragePooling2D()
for i in range(5):
conv = tf.keras.layers.Conv2D(64, 3)
x = conv(x)
print(x.shape)
print(gap(x).shape)
```
```
(16, 24, 24, 64)
(16, 22, 22, 64)
(16, 20, 20, 64)
(16, 18, 18, 64)
(16, 16, 16, 64)
(16, 64)
```
Which should you use?
---------------------
The `Flatten` layer will always have at least as much parameters as the `GlobalAveragePooling2D` layer. If the final tensor shape before flattening is still large, for instance `(16, 240, 240, 128)`, using `Flatten` will make an insane amount of parameters: `240*240*128 = 7,372,800`. This huge number will be multiplied by the number of units in your next dense layer! At that moment, `GlobalAveragePooling2D` might be preferred in most cases. If you used `MaxPooling2D` and `Conv2D` so much that your tensor shape before flattening is like `(16, 1, 1, 128)`, it won't make a difference. If you're overfitting, you might want to try `GlobalAveragePooling2D`.
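The parameter arithmetic from the paragraph above, spelled out in plain Python (the 240x240x128 shape and a Dense(256) head are just the example numbers used there):

```python
h, w, c, units = 240, 240, 128, 256           # example shape and Dense size

flatten_features = h * w * c                  # inputs to Dense after Flatten
gap_features = c                              # inputs to Dense after GAP

dense_after_flatten = flatten_features * units + units   # weights + biases
dense_after_gap = gap_features * units + units

print(flatten_features)                        # 7372800, the figure quoted above
print(dense_after_flatten // dense_after_gap)  # roughly 57000x more parameters
```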
Upvotes: 5 <issue_comment>username_5: The compression ratio of parameters is much higher with **Global Average Pooling**; **Flatten** just reshapes the matrix to one dimension. Both can be fed to fully connected networks.
Thanks
Upvotes: 0
|
2018/03/15
| 543 | 1,583 |
<issue_start>username_0: Normally in DB2, if I want to execute a db script and output the execution result to a log file, I will do something as follow:
```
db2 -tvf x.sql > x.log
```
Hence, I can read the x.log to check whether my script executed correctly or not.
How about Oracle db?
I know that the script is run as follow:
```
SQL>@x.sql
```
But how can I output the execution result like what I do in DB2?
Kindly advise.<issue_comment>username_1: Using [SPOOL](https://docs.oracle.com/cd/B19306_01/server.102/b14357/ch12043.htm)
```
SPOOL filename.txt
@x.sql
SPOOL OFF
```
>
> Stores query results in a file, or optionally sends the file to a printer
>
>
>
Other options you can use:
>
> CRE[ATE]
>
>
> Creates a new file with the name specified.
>
>
> REP[LACE]
>
>
> Replaces the contents of an existing file. If the file does not exist,
> REPLACE creates the file. This is the default behavior.
>
>
> APP[END]
>
>
> Adds the contents of the buffer to the end of the file you specify.
>
>
> OFF
>
>
> Stops spooling.
>
>
> OUT
>
>
> Stops spooling and sends the file to your computer's standard
> (default) printer. This option is not available on some operating
> systems.
>
>
>
Upvotes: 2 <issue_comment>username_2: Seems like a FAQ; try using shell redirection with sqlplus:
```
sqlplus your_connect_string_here @x.sql 1>x.out 2>&1
```
Consider also using inside the script: `set termout on` and the `spool` command to further adjust the behaviour. Refer to the Oracle sqlplus documentation for all the details.
Upvotes: 1
|
2018/03/15
| 677 | 1,909 |
<issue_start>username_0: I am trying to install `rebar3` using `linuxbrew` on Ubuntu 16.04.
After I execute `brew install rebar3`
>
> distutils.errors.CompileError: command 'gcc-5' failed with exit status
> 1
> /home/linuxbrew/.linuxbrew/Cellar/gobject-introspection/1.56.0/share/gobject-introspection-1.0/Makefile.introspection:159:
> recipe for target 'Pango-1.0.gir' failed
>
>
>
This error occurs when trying to install **pango** dependency.
My `$PATH` has `/home/linuxbrew/.linuxbrew/bin:/home/linuxbrew/.linuxbrew/sbin:/home/linuxbrew/.linuxbrew/bin` in it.
After I get this error, I manually installed `pango` using `sudo apt-get install libghc-pango-dev`
GCC version - 5.4.0
Kernel - 4.13.0-37
But still I get the same error again and again.
|
2018/03/15
| 1,393 | 5,207 |
<issue_start>username_0: I want to make a button that, when clicked, resets every position to what it was originally. For example: you enter VR and see some images in front of you, but sometimes, because of motion, the front scene drifts to some other position. To see it again you have to reload. So I want a button that resets every position to the original. How can I do that?
```
```
This is my HTML code. Here is my JavaScript:
```
AFRAME.registerComponent('resetOrientation', {
init: function () {
var button = document.querySelector("#resetOrientation");
var cameraPosition = document.querySelector("#listener");
var resetRotation = { x: 0, y: 0, z: 0 };
button.addEventListener('click', function () {
var old = cameraPosition.getAttribute('rotation');
console.log(old);
var adjustedRotation = {
x: resetRotation.x - old.x,
y: resetRotation.y - old.y,
z: resetRotation.z - old.z
}
cameraPosition.setAttribute("rotation", adjustedRotation);
});
}
});
```
How can I solve this?<issue_comment>username_1: If you try to adjust the camera itself then it will be instantly overwritten when the user moves the phone.
Instead what you can do is wrap all entities except the camera in an entity then apply a rotation that is the inverse of the current rotation of the camera. You probably also want to apply this solely on the y axis but that's up to you.
```
var invRot = document.querySelector("#listener").getAttribute('rotation');
invRot.y = -invRot.y;
document.querySelector("#world").setAttribute("rotation", invRot);
```
Here in this example the `world` entity would contain all your content and not the camera.
Upvotes: 0 <issue_comment>username_2: We have a `recenter` component for Supermedium.
Rather than un-rotating the camera, we wrap the whole scene in an entity, and rotate that. Because there was some messiness when trying to read camera rotation in VR, and trying to unset that, with the whole matrix stuff, since three.js handles the VR camera pose. There are also crazy inconsistencies with SteamVR...so there's some special things we do there. Diego spent days on this issue:
```
/**
* Pivot the scene when user enters VR to face the links.
*/
AFRAME.registerComponent('recenter', {
schema: {
target: {default: ''}
},
init: function () {
var sceneEl = this.el.sceneEl;
this.matrix = new THREE.Matrix4();
this.frustum = new THREE.Frustum();
this.rotationOffset = 0;
this.euler = new THREE.Euler();
this.euler.order = 'YXZ';
this.menuPosition = new THREE.Vector3();
this.recenter = this.recenter.bind(this);
this.checkInViewAfterRecenter = this.checkInViewAfterRecenter.bind(this);
this.target = document.querySelector(this.data.target);
// Delay to make sure we have a valid pose.
sceneEl.addEventListener('enter-vr', () => setTimeout(this.recenter, 100));
// User can also recenter the menu manually.
sceneEl.addEventListener('menudown', this.recenter);
sceneEl.addEventListener('thumbstickdown', this.recenter);
window.addEventListener('vrdisplaypresentchange', this.recenter);
},
recenter: function () {
var euler = this.euler;
euler.setFromRotationMatrix(this.el.sceneEl.camera.el.object3D.matrixWorld, 'YXZ');
this.el.object3D.rotation.y = euler.y + this.rotationOffset;
// Check if the menu is in camera frustum in next tick after a frame has rendered.
setTimeout(this.checkInViewAfterRecenter, 0);
},
/*
* Sometimes the quaternion returns the yaw in the [-180, 180] range.
* Check if the menu is in the camera frustum after recenter it to
* decide if we apply an offset or not.
*/
checkInViewAfterRecenter: (function () {
var bottomVec3 = new THREE.Vector3();
var topVec3 = new THREE.Vector3();
return function () {
var camera = this.el.sceneEl.camera;
var frustum = this.frustum;
var menuPosition = this.menuPosition;
camera.updateMatrix();
camera.updateMatrixWorld();
frustum.setFromMatrix(this.matrix.multiplyMatrices(camera.projectionMatrix, camera.matrixWorldInverse));
// Check if menu position (and its bounds) are within the frustum.
// Check bounds in case looking angled up or down, rather than menu central.
menuPosition.setFromMatrixPosition(this.target.object3D.matrixWorld);
bottomVec3.copy(menuPosition).y -= 3;
topVec3.copy(menuPosition).y += 3;
if (frustum.containsPoint(menuPosition) ||
frustum.containsPoint(bottomVec3) ||
frustum.containsPoint(topVec3)) { return; }
this.rotationOffset = this.rotationOffset === 0 ? Math.PI : 0;
// Recenter again with the new offset.
this.recenter();
};
})(),
remove: function () {
this.el.sceneEl.removeEventListener('enter-vr', this.recenter);
}
});
```
In HTML:
```
```
Upvotes: 3 [selected_answer]<issue_comment>username_3: I don't know enough if it's good practice, but for me this works:
```
var camera = document.querySelector("a-camera");
camera.components["look-controls"].pitchObject.rotation.x = newX;
camera.components["look-controls"].yawObject.rotation.y = newY;
```
Upvotes: 2
|
2018/03/15
| 1,989 | 7,667 |
<issue_start>username_0: I have an app with a local db (room) and a service that `POSTs` all the "events" from the database using `retrofit 2` and `rxjava`. When I send a high volume of `POSTs` (i.e. 1500+), the app throws an `OutOfMemoryException`. I presume this happens because it starts a new thread every time the client sends a new POST. Is there a way I could prevent `retrofit`/`rxjava` from creating so many threads? Or is it better to wait for the server to respond? Here is my code:
Class that retrieves all the events from the local db
```
public class RetreiveDbContent {
private final EventDatabase eventDatabase;
public RetreiveDbContent(EventDatabase eventDatabase) {
this.eventDatabase = eventDatabase;
}
@Override
public Maybe<List<Event>> eventsList() {
return eventDatabase.eventDao().getAllEvents()
.subscribeOn(Schedulers.io())
.observeOn(AndroidSchedulers.mainThread());
}
}
```
Next, I have a service that iterates through the list of db events and posts all of them. If the backend sends back success, that event is deleted from the local db.
```
private void sendDbContent() {
mRetreiveDbContent.eventsList()
.subscribe(new MaybeObserver<List<Event>>() {
@Override
public void onSubscribe(Disposable d) {
}
@Override
public void onSuccess(final List<Event> events) {
Timber.e("Size of list from db " + events.size());
final CompositeDisposable disposable = new CompositeDisposable();
Observable<Event> eventObservable = Observable.fromIterable(events);
eventObservable.subscribe(new Observer<Event>() {
@Override
public void onSubscribe(Disposable d) {
disposable.add(d);
}
@Override
public void onNext(Event event) {
Timber.d("sending event from db " + event.getAction());
mPresenter.postEvent(event);
}
@Override
public void onError(Throwable e) {
Timber.e("error while emitting db content " + e.getMessage());
}
@Override
public void onComplete() {
Timber.d("Finished looping through db list");
disposable.dispose();
}
});
}
@Override
public void onError(Throwable e) {
Timber.e("Error occurred while attempting to get db content " + e.getMessage());
}
@Override
public void onComplete() {
Timber.d("Finished getting the db content");
}
});
}
```
These are my `postEvent()` & `deleteEvent()` methods that live in a presenter:
```
public void postEvent(final Event event) {
mSendtEvent.sendEvent(event)
.subscribeOn(Schedulers.io())
.observeOn(AndroidSchedulers.mainThread())
.subscribe(new DisposableObserver<Response<ResponseBody>>() {
@Override
public void onNext(Response<ResponseBody> responseBodyResponse) {
switch (responseBodyResponse.code()) {
case CREATED\_RESPONSE:
Timber.d("Event posted successfully " + responseBodyResponse.code());
deleteEventFromRoom(event);
break;
case BAD\_REQUEST:
Timber.e("Client sent a bad request! We need to discard it!");
break;
}
}
@Override
public void onError(Throwable e) {
Timber.e("Error " + e.getMessage());
mView.onErrorOccurred();
}
@Override
public void onComplete() {
}
});
}
public void deleteEventFromRoom(final Event event) {
final CompositeDisposable disposable = new CompositeDisposable();
mRemoveEvent.removeEvent(event)
.subscribeOn(Schedulers.io())
.observeOn(AndroidSchedulers.mainThread())
.subscribe(new Observer<Object>() {
@Override
public void onSubscribe(Disposable d) {
disposable.add(d);
}
@Override
public void onNext(Object o) {
Timber.d("Successfully deleted event from database " + event.getAction());
}
@Override
public void onError(Throwable e) {
}
@Override
public void onComplete() {
disposable.dispose();
}
});
}
```
and finally `mRemoveEvent` interactor
```
public class RemoveEvent {
private final EventDatabase eventDatabase;
public RemoveEvent(EventDatabase eventDatabase) {
this.eventDatabase = eventDatabase;
}
@Override
public Observable<Object> removeEvent(final Event event) {
return Observable.fromCallable(new Callable<Object>() {
@Override
public Object call() throws Exception {
return eventDatabase.eventDao().delete(event);
}
});
}
}
```
Note: I'm a newbie in the `RXJava` world.
Thank you in advance<issue_comment>username_1: You haven't added any error log here, so I don't know the exact cause of the problem, but according to your code you're fetching all events from your local db,
then iterating over the list and sending each event to the server, and then handling the response.
Now, you say that with 400-500 entries the code works fine and with ~1500 events it crashes. You need to understand that in both cases your network is sending one event at a time to the server, so maybe the problem lies with your approach of fetching all the data at once.
So instead of fetching all data from the localdb at once, you should take one event at a time and then upload it to the server.
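A language-agnostic sketch of that idea (written in Python for brevity; `fetch_page`, `post`, and `delete` are hypothetical stand-ins for the Room DAO query and the Retrofit call):

```python
def drain(fetch_page, post, delete, page_size=100):
    """Upload events page by page so the whole table is never held in memory."""
    sent = 0
    while True:
        page = fetch_page(page_size)      # e.g. a LIMIT query on the local db
        if not page:
            return sent                   # nothing left to upload
        progressed = False
        for event in page:
            if post(event):               # server answered 201 Created
                delete(event)             # only then remove it locally
                sent += 1
                progressed = True
        if not progressed:                # every POST failed; stop retrying
            return sent
```

The same page-at-a-time shape can be expressed with RxJava operators as well, as the other answer to this question shows.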
Upvotes: 0 <issue_comment>username_2: You are using `Observable` which does not support backpressure.
Fom RxJava github page:
>
> Backpressure
>
>
> When the dataflow runs through asynchronous steps, each step may
> perform different things with different speed. To avoid overwhelming
> such steps, which usually would manifest itself as **increased memory
> usage due to temporary buffering** or the need for skipping/dropping
> data, a so-called backpressure is applied, which is a form of flow
> control where the steps can express how many items are they ready to
> process. This allows constraining the memory usage of the dataflows in
> situations where there is generally no way for a step to know how many
> items the upstream will send to it.
>
>
> In RxJava, the dedicated **Flowable** class is designated to support
> backpressure and Observable is dedicated for the non-backpressured
> operations (short sequences, GUI interactions, etc.). The other types,
> Single, Maybe and Completable don't support backpressure nor should
> they; there is always room to store one item temporarily.
>
>
>
You should use `Flowable`. You are sending all the events downstream to be processed with all the resources available.
Here is a simple example:
```
Flowable.range(1, 1000)
.buffer(10)//Optional you can process single event
.flatMap(buf -> {
System.out.println(String.format("100ms for sending events to server: %s ", buf));
Thread.sleep(100);
return Flowable.fromIterable(buf);
}, 1)// <-- How many concurrent task should be executed
.map(x -> x + 1)
.doOnNext(i -> System.out.println(String.format("doOnNext: %d", i)))
.subscribeOn(Schedulers.io())
.observeOn(Schedulers.single(), false, 1)//Overrides the 128 default buffer size
.subscribe(new DefaultSubscriber() {
@Override
public void onStart() {
request(1);
}
@Override
public void onNext(Integer t) {
System.out.println(String.format("Received response from server for event : %d", t));
System.out.println("Processing value would take some time");
try {
Thread.sleep(2000);
} catch (InterruptedException e) {
e.printStackTrace();
}
//You can request for more data here
request(1);
}
@Override
public void onError(Throwable t) {
t.printStackTrace();
}
@Override
public void onComplete() {
System.out.println("ExampleUnitTest.onComplete");
}
});
```
And a last tip: you should not fetch all the events into memory at once; basically you are holding every "Database Event" in memory. Consider paging or something like `Cursor`: fetch 100 rows per operation and, after processing them, request the next 100. I'm also hoping you're doing this with the JobScheduler or WorkManager API.
Upvotes: 4 [selected_answer]
|
2018/03/15
| 381 | 1,307 |
<issue_start>username_0: I am just starting and implementing template in codeigniter 3.1
after login when i click on any side menu then it doesn't work. if i tried to open in new window then its automatically logout & redirect to login page.
my view is following.
```
<li><a href="<?php echo base_url().'user/admin'; ?>">Dashboard</a></li>
```
My `user` controller has a function named `admin` with the following code.
```
function admin(){
$user_login=array(
'user_email'=>$this->input->post('user_email'),
'user_password'=>md5($this->input->post('user_password'))
);
$data=$this->user_model->login_user($user_login['user_email'],$user_login['user_password']);
if($data)
{
$this->session->set_userdata('user_id',$data['user_id']);
$this->load->view('template/header');
$this->load->view('template/sidemenu');
$this->load->view('body');
$this->load->view('template/footer');
}
else{
$this->session->set_flashdata('error_msg', 'Wrong Credentials...');
$this->load->view("login.php");
}
}
```<issue_comment>username_1: You can try this:
Open application > config > autoload.php
```
$autoload['helper'] = array('url');
```
Upvotes: 0 <issue_comment>username_2: Is this `base_url()."user/admin"` your dashboard link? Because in your `user` controller that URL runs the login/sign-in code: opening it from the menu re-runs `admin()` with no POSTed credentials, so the login fails and you are sent back to the login page.
Upvotes: 1
|
2018/03/15
| 770 | 3,101 |
<issue_start>username_0: I am trying to display multiple different modals on my dashboard, but I've noticed that most of the time the modal screen will open twice. (I am not even able to replicate this issue consistently: sometimes it does NOT open two modals after I cancel ng serve and run it again for the first time. Ng serve auto-recompiling after changes will 100% always make the two modals appear.)
I call the modal on a (click) function in my dashboard that looks like this
```
openConfigureModal(soort: string){
this.dashboardService.openConfigureModal.next(soort);
}
```
The entirety of the service is simply this
```
@Injectable()
export class DashboardService {
constructor() { }
openConfigureModal = new Subject<string>();
}
```
In the modal component I am opening two different templates (at first I did this in two entirely separate components; the issue arose then too, and I figured maybe this would solve it, but it did not).
```
export class ConfigureAppModalComponent implements OnInit {
constructor(private dashboardService: DashboardService, private modalService: BsModalService) { }
modalRef: BsModalRef;
@ViewChild("templateConfigure") template;
@ViewChild("templateSync") template2;
ngOnInit() {
const config = {
class: 'modal-dialog-centered modal-lg',
};
const config2 = {
class: 'modal-dialog-centered roundedborder modal-sm',
};
this.dashboardService.openConfigureModal.subscribe( data => {
if(data == "google"){
//set some values
this.modalRef = this.modalService.show(this.template, config);
}else if(data == "outlook"){
//set some values
this.modalRef = this.modalService.show(this.template, config);
}else{
this.modalRef = this.modalService.show(this.template2, config2);
}
})
}
```
With the templates just looking like this
```
sth here
sth here
sth else here
sth else here
```
I also only show the component once in my dashboard.component.html like this
```
```
Ngx-bootstrap version 2.0.2
Bootstrap version 4.0.0
If anything is unclear, feel free to ask. Thank you for reading!<issue_comment>username_1: Fixed my own issue by simply using takeUntil and OnDestroy and properly destroying each modal in its respective .ts file :)
I'll leave the question up in case anyone runs into the same issue because I hadn't found it anywhere else before I posted this.
Upvotes: 2 [selected_answer]<issue_comment>username_2: The modal service will remove a subscription which has to be disposed when the component is removed from the DOM.
At the top level of your component you can create a subscription.
```
private subscription = new Subscription();
```
Now you can collect the subscription generated on the openConfigurationModal call.
```
const modalSubscription = this.dashboardService.openConfigureModal.subscribe( data => { });
this.subscription.add(modalSubscription);
```
In onDestroy() you can dispose this subscription.
```
public ngOnDestroy() {
this.subscription.unsubscribe();
}
```
Hope it helps :)
Upvotes: 0
|
2018/03/15
| 1,432 | 4,650 |
<issue_start>username_0: Error:Execution failed for task ':app:preDebugBuild'.
>
> Android dependency 'com.google.android.gms:play-services-ads' has different version for the compile (11.8.0) and runtime (11.0.4) classpath. You should manually set the same version via DependencyResolution
>
>
>
My project gradle:
```
buildscript {
repositories {
jcenter()
google()
}
dependencies {
classpath 'com.android.tools.build:gradle:3.0.1'
classpath 'com.google.gms:google-services:3.0.0'
// NOTE: Do not place your application dependencies here; they belong
// in the individual module build.gradle files
}
}
allprojects {
repositories {
jcenter()
google()
}
}
task clean(type: Delete) {
delete rootProject.buildDir
}
```
My module gradle:
```
apply plugin: 'com.android.application'
android {
compileSdkVersion 25
buildToolsVersion '26.0.2'
defaultConfig {
applicationId 'com.bezets.cityappar'
minSdkVersion 16
targetSdkVersion 25
versionCode 4
versionName '1.3.0'
multiDexEnabled true
testInstrumentationRunner "android.support.test.runner.AndroidJUnitRunner"
}
buildTypes {
release {
minifyEnabled false
proguardFiles getDefaultProguardFile('proguard-android.txt'), 'proguard-rules.pro'
}
}
dexOptions {
javaMaxHeapSize "2g"
}
packagingOptions {
exclude 'META-INF/LICENSE'
exclude 'META-INF/LICENSE.txt'
exclude 'META-INF/LICENSE-FIREBASE.txt'
}
productFlavors {
}
lintOptions {
disable 'InvalidPackage'
abortOnError false
}
}
repositories {
mavenCentral()
}
dependencies {
implementation 'com.google.android.gms:play-services-ads:11.8.0'
implementation 'com.google.firebase:firebase-messaging:11.8.0'
compile fileTree(include: ['*.jar'], dir: 'libs')
androidTestCompile('com.android.support.test.espresso:espresso-core:2.2.2', {
exclude group: 'com.android.support', module: 'support-annotations'
})
compile 'com.android.support:appcompat-v7:25.3.1'
compile 'com.android.support:design:25.3.1'
compile 'com.android.support.constraint:constraint-layout:1.0.2'
compile 'com.android.support:recyclerview-v7:25.3.1'
compile 'com.android.support:cardview-v7:25.3.1'
compile 'com.android.support:palette-v7:25.3.1'
compile 'com.jakewharton:butterknife:7.0.1'
compile files('libs/volley.jar')
compile 'com.android.support:multidex:1.0.1'
compile 'com.google.firebase:firebase-core:11.0.4'
compile 'com.google.firebase:firebase-database:11.0.4'
compile 'com.google.firebase:firebase-storage:11.0.4'
compile 'com.google.android.gms:play-services-auth:11.0.4'
compile 'com.google.android.gms:play-services-maps:11.0.4'
compile 'com.google.android.gms:play-services-location:11.0.4'
compile 'com.google.maps.android:android-maps-utils:0.4'
compile 'com.google.firebase:firebase-auth:11.0.4'
compile 'com.google.firebase:firebase-crash:11.0.4'
compile 'com.google.firebase:firebase-ads:11.0.4'
compile 'com.squareup.retrofit:retrofit:1.9.0'
compile 'com.squareup.okhttp:okhttp:2.3.0'
compile 'com.squareup:otto:1.3.6'
compile 'com.github.bumptech.glide:glide:3.7.0'
compile 'uk.co.chrisjenx:calligraphy:2.2.0'
compile 'com.nineoldandroids:library:2.4.0'
compile 'com.github.paolorotolo:appintro:3.3.0'
compile 'com.facebook.android:facebook-android-sdk:[4,5)'
testCompile 'junit:junit:4.12'
}
apply plugin: 'com.google.gms.google-services'
```
How can I fix this error?<issue_comment>username_1: Set all **Google/Firebase** dependencies to the same version, **11.8.0**.
Upvotes: 0 <issue_comment>username_2: Please see this [answer](https://stackoverflow.com/a/47383261/9660285)
You can solve it in one of two ways: Define a resolution strategy or include the offending version in your dependencies.
Hope this helps!
Upvotes: 1 <issue_comment>username_3: My error is similar with yours, solved with these way.
The error is shown below:
>
> Android dependency 'com.google.android.gms:play-services-tasks' has different version for thecompile (11.4.2) and runtime (15.0.1) classpath. You should manually set the same version via DependencyResolution
>
>
>
**Then, in the file android/build.gradle, I added this script:**
```
allprojects {
...
configurations.all {
    resolutionStrategy.force "com.google.android.gms:play-services-tasks:15.0.1"
}
}
```
Upvotes: 4
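The `resolutionStrategy.force` fix above can be generalized so that every Google/Firebase artifact is pinned at once, which also covers the original question's mix of `11.0.4` libraries. A hedged sketch for the root `build.gradle` (the version `11.8.0` is illustrative; use whichever Google/Firebase version you standardize on):

```groovy
allprojects {
    configurations.all {
        resolutionStrategy.eachDependency { details ->
            def group = details.requested.group
            // Pin every play-services and firebase artifact to one version
            if (group == 'com.google.android.gms' || group == 'com.google.firebase') {
                details.useVersion '11.8.0'
            }
        }
    }
}
```

This avoids listing each offending artifact by hand, at the cost of silently overriding whatever versions transitive dependencies request.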
|
2018/03/15
| 545 | 1,750 |
<issue_start>username_0: I added a schema extension to users in my org, to keep track of the training a user has taken. Since lists are not supported, I am trying to store this as a comma-separated string, as follows:
```
{
"id": "voctestextension",
"description": "voc test extension",
"targetTypes": ["User"],
"properties": [
{
"name": "trainings",
"type": "String"
}
]
}
```
Now, while trying to fetch the users who have taken training 'X' I am making the below call:
`https://graph.microsoft.com/v1.0/users?$filter=contains(extrw7rtbc9_voctestextension/trainings, 'Azure'), $select=extrw7rtbc9_voctestextension,displayName`
This doesn't give the correct response, but throws this error:
```
{
"error": {
"code": "Request_UnsupportedQuery",
"message": "Unsupported Query.",
"innerError": {
"request-id": "dc3fda19-6464-43d9-95ce-54a0567bf5a9",
"date": "2018-03-15T09:14:30"
}
}
}
```
From different forum answers, I understand that `contains` is not supported. Can you suggest a better way to track this info in the user's profile?<issue_comment>username_1: Contains is not supported. You need to use startswith or add multiple properties like training1, training2, training3.. and then use filter with ORs and EQs.
```
https://graph.microsoft.com/v1.0/users?$filter(extrw7rtbc9_voctestextension/trainign1 eq 'Azure' or extrw7rtbc9_voctestextension/trainign2 eq 'Azure')
```
Upvotes: 1 <issue_comment>username_2: username_1's solution works (almost) perfectly, but the "=" is missing after "$filter":
```
https://graph.microsoft.com/v1.0/users?$filter=(extrw7rtbc9_voctestextension/trainign1 eq 'Azure' or extrw7rtbc9_voctestextension/trainign2 eq 'Azure')
```
Upvotes: 0
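Since the fix is essentially assembling the `$filter` string correctly, here is a hedged Python sketch that builds the query URL. The property names `training1`..`training3` follow the multi-property suggestion above and are assumptions about your schema; actually sending the request (e.g. with `requests` and an access token) is left out:

```python
def graph_filter_url(base_url, ext_name, props, value):
    """Build an OData $filter that ORs equality checks over several
    single-valued extension properties, since contains() is not
    supported when filtering users on Microsoft Graph v1.0."""
    clauses = " or ".join(
        "{0}/{1} eq '{2}'".format(ext_name, p, value) for p in props
    )
    return "{0}/users?$filter=({1})".format(base_url, clauses)

url = graph_filter_url(
    "https://graph.microsoft.com/v1.0",
    "extrw7rtbc9_voctestextension",
    ["training1", "training2", "training3"],
    "Azure",
)
print(url)
```

Note the `=` after `$filter`, which is exactly the detail username_2 points out.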
|
2018/03/15
| 337 | 1,022 |
<issue_start>username_0: I want to simply subtract 2 matrices with size 784×1
with this code
```
w2 = G.w - alpha *temp
print(w2.size)
```
but `w2` is a 784×784 matrix. why doesn't element-wise subtraction work properly?
both temp and `G.w` are 784×1 matrices and `alpha` is a scalar (`alpha = 0.1`)
I'm using pycharm on windows 10.
rethink about creating G.w and temp
Upvotes: 0
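The 784×784 result is classic NumPy broadcasting: if one operand has shape (784, 1) and the other has shape (784,), the subtraction broadcasts to (784, 784), which is why "rethink about creating G.w and temp" is the right hint. A minimal sketch reproducing and fixing this (the exact shapes of `G.w` and `temp` are assumptions, since the question doesn't show how they were created):

```python
import numpy as np

alpha = 0.1
w = np.zeros((784, 1))   # column vector, shape (784, 1), like G.w
temp = np.ones(784)      # 1-D array, shape (784,)

# (784, 1) - (784,) broadcasts to (784, 784): NOT element-wise
bad = w - alpha * temp

# reshape so both operands are (784, 1), then subtraction is element-wise
good = w - alpha * temp.reshape(-1, 1)

print(bad.shape, good.shape)  # (784, 784) (784, 1)
```

Printing `w.shape` and `temp.shape` before the subtraction would have exposed the mismatch immediately.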
|
2018/03/15
| 2,643 | 11,522 |
<issue_start>username_0: 1] 4 digit input boxes without input:
[](https://i.stack.imgur.com/4Dbrb.png)
2] 4 digit input boxes with input :
[](https://i.stack.imgur.com/ilMWz.png)
As the user enters the 4-digit PIN, each filled input box turns into a dot. Pressing the back button deletes the last digit, and its dot turns back into an input box. If a middle digit is wrong, the user cannot turn that dot back into an input box by touching it; they have to traverse back, deleting from right to left.
I see a possible solution in [a custom widget](https://developer.android.com/guide/topics/ui/custom-components.html) that sets views visible/gone based on user input detected via a [TextWatcher](https://developer.android.com/reference/android/text/TextWatcher.html). But does the existing [EditText widget](https://developer.android.com/reference/android/widget/EditText.html) in the Android platform have any style configuration I can use in XML to get the above-mentioned UI change?
So far, for the manual approach, I have made the following changes (code is given for one input box; this is repeated to build the group of 4 input boxes):
XML layout for single input box :
```
```
Text change listener :
```
pin_edit_box.addTextChangedListener(new TextWatcher() {
@Override
public void beforeTextChanged(CharSequence charSequence, int i, int i1, int i2) {
}
@Override
public void onTextChanged(CharSequence charSequence, int i, int i1, int i2) {
}
@Override
public void afterTextChanged(Editable editable) {
int length = editable.toString().length();
if (length == 1) {
pin_edit_box_this.setVisibility(View.INVISIBLE);
pin_input_dot_this.setVisibility(View.VISIBLE);
pin_edit_box_next.requestFocus();
} else if (length == 0) {
// This part doesn't work as EditText is invisible at this time
// dot image view is visible
pin_edit_box_this.setVisibility(View.VISIBLE);
pin_input_dot_this.setVisibility(View.GONE);
pin_edit_box_this_previous.requestFocus();
}
}
});
```
When trying to detect the back or delete button event from
```
boolean onKeyDown (int keyCode,
KeyEvent event) {
}
```
The keyboard's `KeyEvent.KEYCODE_DEL` event never gets detected, so the views cannot be made visible/invisible after the back press. The official documentation says this will not always work: soft-keyboard events may not always get delivered.
>
> As soft input methods can use multiple and inventive ways of inputting
> text, there is no guarantee that any key press on a soft keyboard will
> generate a key event: this is left to the IME's discretion, and in
> fact sending such events is discouraged.
>
>
>
[More details here.](https://developer.android.com/reference/android/view/KeyEvent.html)
Note: Screenshots above are taken from the [BHIM application](https://play.google.com/store/apps/details?id=in.org.npci.upiapp&hl=en), which is for Indian money transactions.<issue_comment>username_1: See this [PinView Library](https://github.com/ChaosLeong/PinView); it does almost exactly what you want.
It's easy to use :
Add the dependency:
```
dependencies {
compile 'com.chaos.view:pinview:1.3.0'
}
```
Instead of an `EditText`, create a `PinView`:
```
```
You should see something like this
[](https://i.stack.imgur.com/9ES63.gif)
Another workaround is to create your own `EditText` class and update it as the user types.
Upvotes: 4 [selected_answer]<issue_comment>username_2: Try this
```
<?xml version="1.0" encoding="utf-8"?>
```
**@drawable/normal**
```
```
**@drawable/filled**
```
```
**ACTIVITY CODE**
```
import android.os.Handler;
import android.support.v4.content.ContextCompat;
import android.support.v7.app.AppCompatActivity;
import android.os.Bundle;
import android.text.Editable;
import android.text.InputType;
import android.text.TextUtils;
import android.text.TextWatcher;
import android.widget.EditText;
public class MainActivity extends AppCompatActivity {
EditText edt1, edt2, edt3, edt4;
@Override
protected void onCreate(Bundle savedInstanceState) {
super.onCreate(savedInstanceState);
setContentView(R.layout.activity_main);
edt1 = findViewById(R.id.edt1);
edt2 = findViewById(R.id.edt2);
edt3 = findViewById(R.id.edt3);
edt4 = findViewById(R.id.edt4);
edt1.addTextChangedListener(new TextWatcher() {
@Override
public void beforeTextChanged(CharSequence charSequence, int i, int i1, int i2) {
}
@Override
public void onTextChanged(CharSequence charSequence, int i, int i1, int i2) {
if (TextUtils.isEmpty(edt1.getText().toString().trim())) {
edt1.setBackgroundResource(R.drawable.normal);
edt1.requestFocus();
edt1.setTextColor(ContextCompat.getColor(MainActivity.this, R.color.black));
new Handler().postDelayed(new Runnable() {
@Override
public void run() {
edt1.setInputType(InputType.TYPE_CLASS_TEXT );
}
},500);
} else {
edt1.setBackgroundResource(R.drawable.filled);
edt1.setTextColor(ContextCompat.getColor(MainActivity.this, R.color.red));
edt2.requestFocus();
new Handler().postDelayed(new Runnable() {
@Override
public void run() {
edt1.setInputType(InputType.TYPE_CLASS_TEXT |
InputType.TYPE_TEXT_VARIATION_PASSWORD);
}
},500);
}
}
@Override
public void afterTextChanged(Editable editable) {
}
});
edt2.addTextChangedListener(new TextWatcher() {
@Override
public void beforeTextChanged(CharSequence charSequence, int i, int i1, int i2) {
}
@Override
public void onTextChanged(CharSequence charSequence, int i, int i1, int i2) {
if (TextUtils.isEmpty(edt2.getText().toString().trim())) {
edt2.setBackgroundResource(R.drawable.normal);
edt2.requestFocus();
edt2.setTextColor(ContextCompat.getColor(MainActivity.this, R.color.black));
new Handler().postDelayed(new Runnable() {
@Override
public void run() {
edt2.setInputType(InputType.TYPE_CLASS_TEXT );
}
},500);
} else {
edt2.setBackgroundResource(R.drawable.filled);
edt3.requestFocus();
edt2.setTextColor(ContextCompat.getColor(MainActivity.this, R.color.red));
new Handler().postDelayed(new Runnable() {
@Override
public void run() {
edt2.setInputType(InputType.TYPE_CLASS_TEXT |
InputType.TYPE_TEXT_VARIATION_PASSWORD);
}
},500);
}
}
@Override
public void afterTextChanged(Editable editable) {
}
});
edt3.addTextChangedListener(new TextWatcher() {
@Override
public void beforeTextChanged(CharSequence charSequence, int i, int i1, int i2) {
}
@Override
public void onTextChanged(CharSequence charSequence, int i, int i1, int i2) {
if (TextUtils.isEmpty(edt3.getText().toString().trim())) {
edt3.setBackgroundResource(R.drawable.normal);
edt3.requestFocus();
edt3.setTextColor(ContextCompat.getColor(MainActivity.this, R.color.black));
new Handler().postDelayed(new Runnable() {
@Override
public void run() {
edt3.setInputType(InputType.TYPE_CLASS_TEXT );
}
},500);
} else {
edt3.setBackgroundResource(R.drawable.filled);
edt3.setTextColor(ContextCompat.getColor(MainActivity.this, R.color.red));
edt4.requestFocus();
new Handler().postDelayed(new Runnable() {
@Override
public void run() {
edt3.setInputType(InputType.TYPE_CLASS_TEXT |
InputType.TYPE_TEXT_VARIATION_PASSWORD);
}
},500);
}
}
@Override
public void afterTextChanged(Editable editable) {
}
});
edt4.addTextChangedListener(new TextWatcher() {
@Override
public void beforeTextChanged(CharSequence charSequence, int i, int i1, int i2) {
}
@Override
public void onTextChanged(CharSequence charSequence, int i, int i1, int i2) {
if (TextUtils.isEmpty(edt4.getText().toString().trim())) {
edt4.setBackgroundResource(R.drawable.normal);
edt4.setTextColor(ContextCompat.getColor(MainActivity.this, R.color.black));
new Handler().postDelayed(new Runnable() {
@Override
public void run() {
edt4.setInputType(InputType.TYPE_CLASS_TEXT );
}
},500);
} else {
edt4.setBackgroundResource(R.drawable.filled);
edt4.setTextColor(ContextCompat.getColor(MainActivity.this, R.color.red));
edt4.clearFocus();
new Handler().postDelayed(new Runnable() {
@Override
public void run() {
edt4.setInputType(InputType.TYPE_CLASS_TEXT |
InputType.TYPE_TEXT_VARIATION_PASSWORD);
}
},500);
}
}
@Override
public void afterTextChanged(Editable editable) {
}
});
}
}
```
**OUTPUT**
**NORMAL** [](https://i.stack.imgur.com/9Irdm.png)
**WHEN EDIT** [](https://i.stack.imgur.com/qMGlD.png)
**FINAL** [](https://i.stack.imgur.com/xmlip.png)
Upvotes: 2 <issue_comment>username_3: Change your EditText input type to textPassword; it will show each character as a dot.
Upvotes: 0
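The `textPassword` suggestion amounts to a single attribute per box, with no visibility toggling between an `EditText` and a dot image. A minimal layout sketch for one digit box (the id and sizes are illustrative; `numberPassword` restricts input to digits and masks them as dots, attributes are standard Android framework ones):

```xml
<EditText
    android:id="@+id/pin_digit"
    android:layout_width="48dp"
    android:layout_height="wrap_content"
    android:gravity="center"
    android:maxLength="1"
    android:inputType="numberPassword" />
```

The existing `TextWatcher` would then only need to move focus between boxes, not swap views.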
|
2018/03/15
| 9,615 | 23,300 |
<issue_start>username_0: I'm looking to use `Spark` on a multi-node `Hadoop` cluster, and I have a question about my Python script in cluster mode.
### My Configuration :
My Hadoop cluster contains:
* 1 Namenode (master)
* 2 Datanodes (slaves)
So I would like to execute my Python script on this cluster. I know that Spark can run in standalone mode, but I would like to use my nodes.
### My python script :
It's a very simple script that counts the words in my text.
```
import sys
from pyspark import SparkContext
sc = SparkContext()
lines = sc.textFile(sys.argv[1])
words = lines.flatMap(lambda line: line.split(' '))
words_with_1 = words.map(lambda word: (word, 1))
word_counts = words_with_1.reduceByKey(lambda count1, count2: count1 + count2)
result = word_counts.collect()
for (word, count) in result:
print word.encode("utf8"), count
```
### My Spark command :
In order to use Spark, I do :
```
time ./bin/spark-submit --master spark://master:7077 /home/hduser/count.py /data.txt
```
But this command executes Spark in standalone mode, right?
How can I execute Spark using my Hadoop cluster (e.g., YARN) and do parallel, distributed computing on my cluster?
I tried :
```
time ./bin/spark-submit --master yarn /home/hduser/count.py /data.txt
```
And I get these errors:
```
2018-03-15 10:13:14 WARN NativeCodeLoader:62 - Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
2018-03-15 10:13:15 INFO SparkContext:54 - Running Spark version 2.3.0
2018-03-15 10:13:15 INFO SparkContext:54 - Submitted application: count.py
2018-03-15 10:13:15 INFO SecurityManager:54 - Changing view acls to: hduser
2018-03-15 10:13:15 INFO SecurityManager:54 - Changing modify acls to: hduser
2018-03-15 10:13:15 INFO SecurityManager:54 - Changing view acls groups to:
2018-03-15 10:13:15 INFO SecurityManager:54 - Changing modify acls groups to:
2018-03-15 10:13:15 INFO SecurityManager:54 - SecurityManager: authentication disabled; ui acls disabled; users with view permissions: Set(hduser); groups with view permissions: Set(); users with modify permissions: Set(hduser)$
2018-03-15 10:13:16 INFO Utils:54 - Successfully started service 'sparkDriver' on port 40388.
2018-03-15 10:13:16 INFO SparkEnv:54 - Registering MapOutputTracker
2018-03-15 10:13:16 INFO SparkEnv:54 - Registering BlockManagerMaster
2018-03-15 10:13:16 INFO BlockManagerMasterEndpoint:54 - Using org.apache.spark.storage.DefaultTopologyMapper for getting topology information
2018-03-15 10:13:16 INFO BlockManagerMasterEndpoint:54 - BlockManagerMasterEndpoint up
2018-03-15 10:13:16 INFO DiskBlockManager:54 - Created local directory at /tmp/blockmgr-b131528e-849e-4ba7-94fe-c552572f12fc
2018-03-15 10:13:16 INFO MemoryStore:54 - MemoryStore started with capacity 413.9 MB
2018-03-15 10:13:16 INFO SparkEnv:54 - Registering OutputCommitCoordinator
2018-03-15 10:13:17 INFO log:192 - Logging initialized @5400ms
2018-03-15 10:13:17 INFO Server:346 - jetty-9.3.z-SNAPSHOT
2018-03-15 10:13:17 INFO Server:414 - Started @5667ms
2018-03-15 10:13:17 INFO AbstractConnector:278 - Started ServerConnector@4f835332{HTTP/1.1,[http/1.1]}{0.0.0.0:4040}
2018-03-15 10:13:17 INFO Utils:54 - Successfully started service 'SparkUI' on port 4040.
2018-03-15 10:13:17 INFO ContextHandler:781 - Started o.s.j.s.ServletContextHandler@2f867b0c{/jobs,null,AVAILABLE,@Spark}
2018-03-15 10:13:17 INFO ContextHandler:781 - Started o.s.j.s.ServletContextHandler@2a0105b7{/jobs/json,null,AVAILABLE,@Spark}
2018-03-15 10:13:17 INFO ContextHandler:781 - Started o.s.j.s.ServletContextHandler@3fd04590{/jobs/job,null,AVAILABLE,@Spark}
2018-03-15 10:13:17 INFO ContextHandler:781 - Started o.s.j.s.ServletContextHandler@2637750b{/jobs/job/json,null,AVAILABLE,@Spark}
2018-03-15 10:13:17 INFO ContextHandler:781 - Started o.s.j.s.ServletContextHandler@439f0c7{/stages,null,AVAILABLE,@Spark}
2018-03-15 10:13:17 INFO ContextHandler:781 - Started o.s.j.s.ServletContextHandler@3978d915{/stages/json,null,AVAILABLE,@Spark}
2018-03-15 10:13:17 INFO ContextHandler:781 - Started o.s.j.s.ServletContextHandler@596dc76d{/stages/stage,null,AVAILABLE,@Spark}
2018-03-15 10:13:17 INFO ContextHandler:781 - Started o.s.j.s.ServletContextHandler@7054d173{/stages/stage/json,null,AVAILABLE,@Spark}
2018-03-15 10:13:17 INFO ContextHandler:781 - Started o.s.j.s.ServletContextHandler@47b526bb{/stages/pool,null,AVAILABLE,@Spark}
2018-03-15 10:13:17 INFO ContextHandler:781 - Started o.s.j.s.ServletContextHandler@7896fc75{/stages/pool/json,null,AVAILABLE,@Spark}
2018-03-15 10:13:17 INFO ContextHandler:781 - Started o.s.j.s.ServletContextHandler@2fd54632{/storage,null,AVAILABLE,@Spark}
2018-03-15 10:13:17 INFO ContextHandler:781 - Started o.s.j.s.ServletContextHandler@79dcd5f2{/storage/json,null,AVAILABLE,@Spark}
2018-03-15 10:13:17 INFO ContextHandler:781 - Started o.s.j.s.ServletContextHandler@1732b48c{/storage/rdd,null,AVAILABLE,@Spark}
2018-03-15 10:13:17 INFO ContextHandler:781 - Started o.s.j.s.ServletContextHandler@5888874b{/storage/rdd/json,null,AVAILABLE,@Spark}
2018-03-15 10:13:17 INFO ContextHandler:781 - Started o.s.j.s.ServletContextHandler@5de9bebe{/environment,null,AVAILABLE,@Spark}
2018-03-15 10:13:17 INFO ContextHandler:781 - Started o.s.j.s.ServletContextHandler@428593b4{/environment/json,null,AVAILABLE,@Spark}
2018-03-15 10:13:17 INFO ContextHandler:781 - Started o.s.j.s.ServletContextHandler@4011c9bc{/executors,null,AVAILABLE,@Spark}
2018-03-15 10:13:17 INFO ContextHandler:781 - Started o.s.j.s.ServletContextHandler@5cbfbc2a{/executors/json,null,AVAILABLE,@Spark}
2018-03-15 10:13:17 INFO ContextHandler:781 - Started o.s.j.s.ServletContextHandler@4c33f54d{/executors/threadDump,null,AVAILABLE,@Spark}
2018-03-15 10:13:17 INFO ContextHandler:781 - Started o.s.j.s.ServletContextHandler@22c5d74c{/executors/threadDump/json,null,AVAILABLE,@Spark}
2018-03-15 10:13:17 INFO ContextHandler:781 - Started o.s.j.s.ServletContextHandler@6cd7b681{/static,null,AVAILABLE,@Spark}
2018-03-15 10:13:17 INFO ContextHandler:781 - Started o.s.j.s.ServletContextHandler@5ee342f2{/,null,AVAILABLE,@Spark}
2018-03-15 10:13:17 INFO ContextHandler:781 - Started o.s.j.s.ServletContextHandler@4d68a347{/api,null,AVAILABLE,@Spark}
2018-03-15 10:13:17 INFO ContextHandler:781 - Started o.s.j.s.ServletContextHandler@1e878af1{/jobs/job/kill,null,AVAILABLE,@Spark}
2018-03-15 10:13:17 INFO ContextHandler:781 - Started o.s.j.s.ServletContextHandler@590aa379{/stages/stage/kill,null,AVAILABLE,@Spark}
2018-03-15 10:13:17 INFO SparkUI:54 - Bound SparkUI to 0.0.0.0, and started at http://master:4040
2018-03-15 10:13:19 INFO RMProxy:98 - Connecting to ResourceManager at master/172.30.10.64:8050
2018-03-15 10:13:20 INFO Client:54 - Requesting a new application from cluster with 3 NodeManagers
2018-03-15 10:13:20 INFO Client:54 - Verifying our application has not requested more than the maximum memory capability of the cluster (8192 MB per container)
2018-03-15 10:13:20 INFO Client:54 - Will allocate AM container, with 896 MB memory including 384 MB overhead
2018-03-15 10:13:20 INFO Client:54 - Setting up container launch context for our AM
2018-03-15 10:13:20 INFO Client:54 - Setting up the launch environment for our AM container
2018-03-15 10:13:20 INFO Client:54 - Preparing resources for our AM container
2018-03-15 10:13:24 WARN Client:66 - Neither spark.yarn.jars nor spark.yarn.archive is set, falling back to uploading libraries under SPARK_HOME.
2018-03-15 10:13:29 INFO Client:54 - Uploading resource file:/tmp/spark-bbfad5cb-3d29-4f45-a1a9-2e37f2c76606/__spark_libs__580552500091841387.zip -> hdfs://master:54310/user/hduser/.sparkStaging/application_1521023754917_0007/__s$
2018-03-15 10:13:33 INFO Client:54 - Uploading resource file:/usr/local/spark/python/lib/pyspark.zip -> hdfs://master:54310/user/hduser/.sparkStaging/application_1521023754917_0007/pyspark.zip
2018-03-15 10:13:33 INFO Client:54 - Uploading resource file:/usr/local/spark/python/lib/py4j-0.10.6-src.zip -> hdfs://master:54310/user/hduser/.sparkStaging/application_1521023754917_0007/py4j-0.10.6-src.zip
2018-03-15 10:13:34 INFO Client:54 - Uploading resource file:/tmp/spark-bbfad5cb-3d29-4f45-a1a9-2e37f2c76606/__spark_conf__7840630163677580304.zip -> hdfs://master:54310/user/hduser/.sparkStaging/application_1521023754917_0007/__$
2018-03-15 10:13:34 INFO SecurityManager:54 - Changing view acls to: hduser
2018-03-15 10:13:34 INFO SecurityManager:54 - Changing modify acls to: hduser
2018-03-15 10:13:34 INFO SecurityManager:54 - Changing view acls groups to:
2018-03-15 10:13:34 INFO SecurityManager:54 - Changing modify acls groups to:
2018-03-15 10:13:34 INFO SecurityManager:54 - SecurityManager: authentication disabled; ui acls disabled; users with view permissions: Set(hduser); groups with view permissions: Set(); users with modify permissions: Set(hduser)$
2018-03-15 10:13:34 INFO Client:54 - Submitting application application_1521023754917_0007 to ResourceManager
2018-03-15 10:13:34 INFO YarnClientImpl:251 - Submitted application application_1521023754917_0007
2018-03-15 10:13:34 INFO SchedulerExtensionServices:54 - Starting Yarn extension services with app application_1521023754917_0007 and attemptId None
2018-03-15 10:13:35 INFO Client:54 - Application report for application_1521023754917_0007 (state: ACCEPTED)
2018-03-15 10:13:35 INFO Client:54 -
client token: N/A
diagnostics: N/A
ApplicationMaster host: N/A
ApplicationMaster RPC port: -1
queue: default
start time: 1521105214408
final status: UNDEFINED
tracking URL: http://master:8088/proxy/application_1521023754917_0007/
user: hduser
2018-03-15 10:13:36 INFO Client:54 - Application report for application_1521023754917_0007 (state: ACCEPTED)
2018-03-15 10:13:37 INFO Client:54 - Application report for application_1521023754917_0007 (state: ACCEPTED)
2018-03-15 10:13:38 INFO Client:54 - Application report for application_1521023754917_0007 (state: ACCEPTED)
2018-03-15 10:13:39 INFO Client:54 - Application report for application_1521023754917_0007 (state: ACCEPTED)
2018-03-15 10:13:40 INFO Client:54 - Application report for application_1521023754917_0007 (state: ACCEPTED)
2018-03-15 10:13:41 INFO Client:54 - Application report for application_1521023754917_0007 (state: ACCEPTED)
2018-03-15 10:13:42 INFO Client:54 - Application report for application_1521023754917_0007 (state: ACCEPTED)
2018-03-15 10:13:43 INFO Client:54 - Application report for application_1521023754917_0007 (state: ACCEPTED)
2018-03-15 10:13:44 INFO Client:54 - Application report for application_1521023754917_0007 (state: ACCEPTED)
2018-03-15 10:13:45 INFO Client:54 - Application report for application_1521023754917_0007 (state: ACCEPTED)
2018-03-15 10:13:46 INFO Client:54 - Application report for application_1521023754917_0007 (state: ACCEPTED)
2018-03-15 10:13:47 INFO Client:54 - Application report for application_1521023754917_0007 (state: ACCEPTED)
2018-03-15 10:13:48 INFO Client:54 - Application report for application_1521023754917_0007 (state: ACCEPTED)
2018-03-15 10:13:49 INFO Client:54 - Application report for application_1521023754917_0007 (state: ACCEPTED)
2018-03-15 10:13:50 INFO Client:54 - Application report for application_1521023754917_0007 (state: ACCEPTED)
2018-03-15 10:13:51 INFO Client:54 - Application report for application_1521023754917_0007 (state: FAILED)
2018-03-15 10:13:51 INFO Client:54 -
client token: N/A
diagnostics: Application application_1521023754917_0007 failed 2 times due to AM Container for appattempt_1521023754917_0007_000002 exited with exitCode: -103
For more detailed output, check application tracking page:http://master:8088/cluster/app/application_1521023754917_0007Then, click on links to logs of each attempt.
Diagnostics: Container [pid=9363,containerID=container_1521023754917_0007_02_000001] is running beyond virtual memory limits. Current usage: 147.7 MB of 1 GB physical memory used; 2.1 GB of 2.1 GB virtual memory used. Killing cont$
Dump of the process-tree for container_1521023754917_0007_02_000001 :
|- PID PPID PGRPID SESSID CMD_NAME USER_MODE_TIME(MILLIS) SYSTEM_TIME(MILLIS) VMEM_USAGE(BYTES) RSSMEM_USAGE(PAGES) FULL_CMD_LINE
|- 9369 9363 9363 9363 (java) 454 16 2250776576 37073 /usr/lib/jvm/java-8-openjdk-amd64/bin/java -server -Xmx512m -Djava.io.tmpdir=/tmp/hadoop-hduser/nm-local-dir/usercache/hduser/appcache/application_1521023754917_0007/co$
|- 9363 9361 9363 9363 (bash) 0 0 12869632 742 /bin/bash -c /usr/lib/jvm/java-8-openjdk-amd64/bin/java -server -Xmx512m -Djava.io.tmpdir=/tmp/hadoop-hduser/nm-local-dir/usercache/hduser/appcache/application_1521023754917_0$
Container killed on request. Exit code is 143
Container exited with a non-zero exit code 143
Failing this attempt. Failing the application.
ApplicationMaster host: N/A
ApplicationMaster RPC port: -1
queue: default
start time: 1521105214408
final status: FAILED
tracking URL: http://master:8088/cluster/app/application_1521023754917_0007
user: hduser
2018-03-15 10:13:51 INFO Client:54 - Deleted staging directory hdfs://master:54310/user/hduser/.sparkStaging/application_1521023754917_0007
2018-03-15 10:13:51 ERROR SparkContext:91 - Error initializing SparkContext.
org.apache.spark.SparkException: Yarn application has already ended! It might have been killed or unable to launch application master.
at org.apache.spark.scheduler.cluster.YarnClientSchedulerBackend.waitForApplication(YarnClientSchedulerBackend.scala:89)
at org.apache.spark.scheduler.cluster.YarnClientSchedulerBackend.start(YarnClientSchedulerBackend.scala:63)
at org.apache.spark.scheduler.TaskSchedulerImpl.start(TaskSchedulerImpl.scala:164)
at org.apache.spark.SparkContext.(SparkContext.scala:500)
at org.apache.spark.api.java.JavaSparkContext.(JavaSparkContext.scala:58)
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
at py4j.reflection.MethodInvoker.invoke(MethodInvoker.java:247)
at py4j.reflection.ReflectionEngine.invoke(ReflectionEngine.java:357)
at py4j.Gateway.invoke(Gateway.java:238)
at py4j.commands.ConstructorCommand.invokeConstructor(ConstructorCommand.java:80)
at py4j.commands.ConstructorCommand.execute(ConstructorCommand.java:69)
at py4j.GatewayConnection.run(GatewayConnection.java:214)
at java.lang.Thread.run(Thread.java:748)
2018-03-15 10:13:51 INFO AbstractConnector:318 - Stopped Spark@4f835332{HTTP/1.1,[http/1.1]}{0.0.0.0:4040}
2018-03-15 10:13:51 INFO SparkUI:54 - Stopped Spark web UI at http://master:4040
2018-03-15 10:13:51 WARN YarnSchedulerBackend$YarnSchedulerEndpoint:66 - Attempted to request executors before the AM has registered!
2018-03-15 10:13:51 INFO YarnClientSchedulerBackend:54 - Shutting down all executors
2018-03-15 10:13:51 INFO YarnSchedulerBackend$YarnDriverEndpoint:54 - Asking each executor to shut down
2018-03-15 10:13:51 INFO SchedulerExtensionServices:54 - Stopping SchedulerExtensionServices
(serviceOption=None,
services=List(),
started=false)
2018-03-15 10:13:51 INFO YarnClientSchedulerBackend:54 - Stopped
2018-03-15 10:13:51 INFO MapOutputTrackerMasterEndpoint:54 - MapOutputTrackerMasterEndpoint stopped!
2018-03-15 10:13:51 INFO MemoryStore:54 - MemoryStore cleared
2018-03-15 10:13:51 INFO BlockManager:54 - BlockManager stopped
2018-03-15 10:13:51 INFO BlockManagerMaster:54 - BlockManagerMaster stopped
2018-03-15 10:13:51 WARN MetricsSystem:66 - Stopping a MetricsSystem that is not running
2018-03-15 10:13:51 INFO OutputCommitCoordinator$OutputCommitCoordinatorEndpoint:54 - OutputCommitCoordinator stopped!
2018-03-15 10:13:52 INFO SparkContext:54 - Successfully stopped SparkContext
Traceback (most recent call last):
File "/home/hduser/count.py", line 4, in
sc = SparkContext()
File "/usr/local/spark/python/lib/pyspark.zip/pyspark/context.py", line 118, in \_\_init\_\_
File "/usr/local/spark/python/lib/pyspark.zip/pyspark/context.py", line 180, in \_do\_init
File "/usr/local/spark/python/lib/pyspark.zip/pyspark/context.py", line 270, in \_initialize\_context
File "/usr/local/spark/python/lib/py4j-0.10.6-src.zip/py4j/java\_gateway.py", line 1428, in \_\_call\_\_
File "/usr/local/spark/python/lib/py4j-0.10.6-src.zip/py4j/protocol.py", line 320, in get\_return\_value
py4j.protocol.Py4JJavaError: An error occurred while calling None.org.apache.spark.api.java.JavaSparkContext.
: org.apache.spark.SparkException: Yarn application has already ended! It might have been killed or unable to launch application master.
at org.apache.spark.scheduler.cluster.YarnClientSchedulerBackend.waitForApplication(YarnClientSchedulerBackend.scala:89)
at org.apache.spark.scheduler.cluster.YarnClientSchedulerBackend.start(YarnClientSchedulerBackend.scala:63)
at org.apache.spark.scheduler.TaskSchedulerImpl.start(TaskSchedulerImpl.scala:164)
at org.apache.spark.SparkContext.(SparkContext.scala:500)
at org.apache.spark.api.java.JavaSparkContext.(JavaSparkContext.scala:58)
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
at py4j.reflection.MethodInvoker.invoke(MethodInvoker.java:247)
at py4j.reflection.ReflectionEngine.invoke(ReflectionEngine.java:357)
at py4j.Gateway.invoke(Gateway.java:238)
at py4j.commands.ConstructorCommand.invokeConstructor(ConstructorCommand.java:80)
at py4j.commands.ConstructorCommand.execute(ConstructorCommand.java:69)
at py4j.GatewayConnection.run(GatewayConnection.java:214)
at java.lang.Thread.run(Thread.java:748)
2018-03-15 10:13:52 INFO ShutdownHookManager:54 - Shutdown hook called
2018-03-15 10:13:52 INFO ShutdownHookManager:54 - Deleting directory /tmp/spark-bbfad5cb-3d29-4f45-a1a9-2e37f2c76606
2018-03-15 10:13:52 INFO ShutdownHookManager:54 - Deleting directory /tmp/spark-f5d31d54-e456-4fcb-bf48-9f950233ad4b
```
I get `FAILED` every time I try to use my cluster with Spark.
[](https://i.stack.imgur.com/UdjGe.png)
Finally I tried :
```
time ./bin/spark-submit --master yarn --deploy-mode cluster /home/hduser/count.py /data.txt
```
But I get issues again.
Is there something I don't understand? I'm very new to Big Data, so it's possible. :/
**EDIT :**
This is what I obtain with `yarn application -status application_1521023754917_0007`:
```
18/03/15 10:52:07 INFO client.RMProxy: Connecting to ResourceManager at master/172.30.10.64:8050
18/03/15 10:52:07 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Application Report :
Application-Id : application_1521023754917_0007
Application-Name : count.py
Application-Type : SPARK
User : hduser
Queue : default
Start-Time : 1521105214408
Finish-Time : 1521105231067
Progress : 0%
State : FAILED
Final-State : FAILED
Tracking-URL : http://master:8088/cluster/app/application_1521023754917_0007
RPC Port : -1
AM Host : N/A
Aggregate Resource Allocation : 16329 MB-seconds, 15 vcore-seconds
Diagnostics : Application application_1521023754917_0007 failed 2 times due to AM Container for appattempt_1521023754917_0007_000002 exited with exitCode: -103
For more detailed output, check application tracking page:http://master:8088/cluster/app/application_1521023754917_0007Then, click on links to logs of each attempt.
Diagnostics: Container [pid=9363,containerID=container_1521023754917_0007_02_000001] is running beyond virtual memory limits. Current usage: 147.7 MB of 1 GB physical memory used; 2.1 GB of 2.1 GB virtual memory used. Killing container.
Dump of the process-tree for container_1521023754917_0007_02_000001 :
|- PID PPID PGRPID SESSID CMD_NAME USER_MODE_TIME(MILLIS) SYSTEM_TIME(MILLIS) VMEM_USAGE(BYTES) RSSMEM_USAGE(PAGES) FULL_CMD_LINE
|- 9369 9363 9363 9363 (java) 454 16 2250776576 37073 /usr/lib/jvm/java-8-openjdk-amd64/bin/java -server -Xmx512m -Djava.io.tmpdir=/tmp/hadoop-hduser/nm-local-dir/usercache/hduser/appcache/application_1521023754917_0007/container_1521023754917_0007_02_000001/tmp -Dspark.yarn.app.container.log.dir=/usr/local/hadoop-2.7.5/logs/userlogs/application_1521023754917_0007/container_1521023754917_0007_02_000001 org.apache.spark.deploy.yarn.ExecutorLauncher --arg master:40388 --properties-file /tmp/hadoop-hduser/nm-local-dir/usercache/hduser/appcache/application_1521023754917_0007/container_1521023754917_0007_02_000001/__spark_conf__/__spark_conf__.properties
|- 9363 9361 9363 9363 (bash) 0 0 12869632 742 /bin/bash -c /usr/lib/jvm/java-8-openjdk-amd64/bin/java -server -Xmx512m -Djava.io.tmpdir=/tmp/hadoop-hduser/nm-local-dir/usercache/hduser/appcache/application_1521023754917_0007/container_1521023754917_0007_02_000001/tmp -Dspark.yarn.app.container.log.dir=/usr/local/hadoop-2.7.5/logs/userlogs/application_1521023754917_0007/container_1521023754917_0007_02_000001 org.apache.spark.deploy.yarn.ExecutorLauncher --arg 'master:40388' --properties-file /tmp/hadoop-hduser/nm-local-dir/usercache/hduser/appcache/application_1521023754917_0007/container_1521023754917_0007_02_000001/__spark_conf__/__spark_conf__.properties 1> /usr/local/hadoop-2.7.5/logs/userlogs/application_1521023754917_0007/container_1521023754917_0007_02_000001/stdout 2> /usr/local/hadoop-2.7.5/logs/userlogs/application_1521023754917_0007/container_1521023754917_0007_02_000001/stderr
Container killed on request. Exit code is 143
Container exited with a non-zero exit code 143
Failing this attempt. Failing the application.
```<issue_comment>username_1: For me, this spark-submit runs the Python script on all Spark nodes:
```
spark-submit --master yarn \
 --deploy-mode cluster \
 --num-executors 1 \
 --driver-memory 2g \
 --executor-memory 1g \
 --executor-cores 1 \
 hdfs://:/home/hduser/count.py /data.txt
```
The Spark environment needs to be extended with:
export PYSPARK_PYTHON=/opt/bin/python
Furthermore, the py file needs to be located on HDFS so that all Spark nodes in the cluster can read it. The py file needs to be accessible for the Spark user.
Upvotes: 1 <issue_comment>username_2: you have to first locate the py scrip in the HDFS location. With the name node correct URL
like hdfs dfs -ls hdfs://hostname:1543/
If you see your file listed in the output, then this is the correct path.
Next execute
/bin/spark-submit --master yarn hdfs://COMPLETEHOSTNAME:1543/count.py /data.txt
it will surely work.
Upvotes: 0
|
2018/03/15
| 589 | 2,701 |
<issue_start>username_0: I have a Java servlet that sets a session variable and starts a thread class. I implement it as follows:
```
@WebServlet("/ExportLogs")
public class ExportLogs extends HttpServlet
{
public void doGet(HttpServletRequest request , HttpServletResponse response) throws ServletException,IOException
{
Integer completePercent = new Integer(10);
request.getSession().setAttribute("CompletionStatus" , completePercent);
LogExportingProcess export = new LogExportingProcess();
export.start();
}
}
```
and I have the thread class that performs a long-running process, as follows:
```
class LogExportingProcess extends Thread
{
public void run()
{
//i want to change the value of the percent complete variable in here.
}
}
```
Now I want to change the value of completePercent inside the LogExportingProcess class. How can I achieve it?<issue_comment>username_1: You will have to pass the `Session` object while creating `LogExportingProcess`:
```
class LogExportingProcess extends Thread
{
private HttpSession session;
public LogExportingProcess(HttpSession session) {
this.session = session;
}
public void run()
{
int completePercent = 50; // compute the current progress here
session.setAttribute("CompletionStatus", completePercent);
}
}
```
and one change in `ExportLogs` class
```
LogExportingProcess export = new LogExportingProcess(request.getSession());
```
Upvotes: 2 [selected_answer]<issue_comment>username_2: Integer is not mutable. AtomicInteger would be a nice substitute.
```
@WebServlet("/ExportLogs")
public class ExportLogs extends HttpServlet
{
public void doGet( final HttpServletRequest request , final HttpServletResponse response ) throws ServletException,IOException
{
final AtomicInteger completePercent = new AtomicInteger(10);
request.getSession().setAttribute("CompletionStatus" , completePercent);
final LogExportingProcess export = new LogExportingProcess( completePercent );
export.start();
}
}
class LogExportingProcess extends Thread
{
final AtomicInteger completePercent;
public LogExportingProcess( final AtomicInteger completePercent )
{
this.completePercent = completePercent;
}
public void run()
{
completePercent.set( 80 ); //80% complete, substitute with real code
}
}
```
This is preferable IMHO over holding a reference to the HttpSession object as suggested by username_1, since the HttpSession can be garbage collected normally upon timeout, and AtomicInteger increases encapsulation by only sharing a percentage instead of the entire session information.
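A minimal runnable sketch of this sharing pattern outside the servlet container (the class and method names here are illustrative, not from the question):

```java
import java.util.concurrent.atomic.AtomicInteger;

class ProgressSharingDemo {
    // The worker only receives the shared counter, not the whole session.
    static int runExport(int startPercent) {
        AtomicInteger completePercent = new AtomicInteger(startPercent);
        Thread export = new Thread(() -> {
            completePercent.set(50);  // halfway through the export
            completePercent.set(100); // done
        });
        export.start();
        try {
            export.join(); // the servlet would instead keep polling completePercent.get()
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
        return completePercent.get();
    }

    public static void main(String[] args) {
        System.out.println(runExport(10)); // 100 once the worker has finished
    }
}
```

Because `AtomicInteger` is mutable and thread-safe, the servlet can keep reading `completePercent.get()` from the session while the worker thread updates it.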
Upvotes: 0
|
2018/03/15
| 507 | 1,941 |
<issue_start>username_0: My model looks like below, wherein bookJson is a JSON object -
```
{
"name" : "somebook",
"author" : "someauthor"
}
public class Book{
private int id;
private JSONObject bookJson;
public int getId(){return this.id;}
public JSONObject getBookJson(){return this.bookJson;}
public void setId(int id){this.id = id;}
public void setBookJson(JSONObject json){this.bookJson = json;}
}
```
JSONObject belongs to org.json package
When my RestController returns a Book object in a ResponseEntity object, I get the error -
```
"errorDesc": "Type definition error: [simple type, class org.json.JSONObject]; nested exception is com.fasterxml.jackson.databind.exc.InvalidDefinitionException: No serializer found for class org.json.JSONObject and no properties discovered to create BeanSerializer (to avoid exception, disable SerializationFeature.FAIL_ON_EMPTY_BEANS)
```
What is the best way to achieve this?
Can we not achieve this without having to have a model class with all fields of bookJson?<issue_comment>username_1: Figured this out.
Added JSONObject as a Map instead and used ObjectMapper to do all the conversions.
```
public class Book{
private int id;
private Map<String, Object> bookJson;
public int getId(){return this.id;}
public Map<String, Object> getBookJson(){return this.bookJson;}
public void setId(int id){this.id = id;}
public void setBookJson(Map<String, Object> json){this.bookJson = json;}
}
ObjectMapper mapper = new ObjectMapper();
try {
map = mapper.readValue(json, new TypeReference<Map<String, Object>>(){});
} catch (IOException e) {
throw new JsonParsingException(e.getMessage(), e);
}
```
Upvotes: 2 [selected_answer]<issue_comment>username_2: You need to return the [ResponseEntity](https://docs.spring.io/spring-framework/docs/current/javadoc-api/org/springframework/http/ResponseEntity.html) of `List` type.
```
public ResponseEntity<List<Book>> doSomething() {
return new ResponseEntity<>(bookList, HttpStatus.OK);
}
```
Upvotes: 0
|
2018/03/15
| 717 | 2,507 |
<issue_start>username_0: ```
<?php include('includes/config.php');
if(isset($_POST["submit"])){
$empid=$_POST["empid"];
$pass=$_POST["<PASSWORD>"];
$query=mysqli_query($conn,"SELECT employee_id, fname,lname,empid,password,
status, role FROM employee where empid='$empid' and password='$<PASSWORD>'");
$row=mysqli_fetch_array($query);
if(is_array($row))
{
session_start();
$_SESSION["empid"]=$empid;
$_SESSION["role"]=$row["role"];
$_SESSION["eid"]=$row["empid"];
$_SESSION["status"]=$row['status'];
$_SESSION["employee_id"]=$row['employee_id'];
$_SESSION['uname']=$row['fname']." ".$row['lname'];
if($_SESSION["role"]=='admin' && $_SESSION["status"]>0){
$_SESSION['alogin']=$_POST['empid'];
header("Location:admin/home.php");
}
elseif($_SESSION["role"]=='TL' && $_SESSION["status"]>0){
$_SESSION['tlogin']=$_POST['empid'];
header("Location:TL/home.php");
}
else{
$_SESSION['emplogin']=$_POST['empid'];
header("Location:home.php");
}
}
else{
echo "<script>alert('Invalid login details');</script>"; // this is not working
header("Location:index.php");
}
}
?>
```
I have created a login form with the employee id and password to login. In case of wrong credentials I want to show the alert message "Invalid login credentials", but it's not working, since the script is included in between PHP code. Kindly help me with how I can get the alert error message.<issue_comment>username_1: Replace `if(is_array($row))` with `if(mysqli_num_rows($row) > 0)`
Upvotes: -1 <issue_comment>username_2: Echoing HTML *and* redirecting won't work. My suggestion would be to pass a parameter, like this:
```
header("Location:index.php?error=login");
```
You can read that in index.php and display an error accordingly:
```
$err = filter_input(INPUT_GET, 'error');
if ($err === "login") echo "<script>alert('Invalid login!');</script>";
```
Upvotes: 2 <issue_comment>username_3: You can do the alert thing on index.php by adding a GET parameter when you redirect to it...
Like:
```
index.php?invalid_login=true
```
This line:
```
header("Location:index.php");
```
Should be:
```
header("Location:index.php?invalid_login=true");
```
And in index.php add the following code inside it.
```
if( isset( $_GET['invalid_login'] ) AND $_GET['invalid_login'] == 'true' ) {
echo "<script>alert('Invalid login details');</script>";
}
```
Upvotes: 1
|
2018/03/15
| 454 | 1,454 |
<issue_start>username_0: I have a pdf link like www.xxx.org/content/a.pdf, and I know that there are many pdf files in www.xxx.org/content/ directory but I don't have the filename list. And When I access www.xxx.org/content/ using browser, it will redirect to www.xxx.org/home.html.
I tried to use wget like "wget -c -r -np -nd --accept=pdf -U NoSuchBrowser/1.0 www.xxx.org/content", but it returns nothing.
So does anyone know how to download or list all the files in the www.xxx.org/content/ directory?
|
2018/03/15
| 539 | 1,603 |
<issue_start>username_0: If i have dataframe with column x.
I want to make a new column x\_new but I want the first row of this new column to be set to a specific number (let say -2).
Then from 2nd row, use the previous row to iterate through the cx function
```
data = {'x':[1,2,3,4,5]}
df=pd.DataFrame(data)
def cx(x):
if df.loc[1,'x_new']==0:
df.loc[1,'x_new']= -2
else:
x_new = -10*x + 2
return x_new
df['x_new']=(cx(df['x']))
```
The final dataframe
[](https://i.stack.imgur.com/wodjZ.png)
I am not sure on how to do this.
Thank you for your help
|
2018/03/15
| 425 | 1,671 |
<issue_start>username_0: How can I distinguish in java graphQL if a parameter was explicitly set to null, or if it was not provided at all?
The use case that I try to achieve is the following: I have a mutation like this
```
updateUser(id:Int!, changes:UserChanges!)
#where UserChanges is defined as:
type UserChanges {
login:String
email:String
#etc
}
```
The idea here is that the user provides only the fields that he wants to change (like react setState for example).
So if email is ommited, I want to leave it unchanged.
But what if he/she wants to explicitly set email to null?
Is there any way figure this out from my resolver method and act accordingly?
(I use graphql-java-tools library)<issue_comment>username_1: I found the answer. In case somebody needs it:
An instance of graphql.schema.DataFetchingEnvironment is available to every resolver method.
This provides methods like getArguments(), hasArgument() etc.
Using those methods, we can find out if an argument was explicitly set to null, or if it was not provided at all.
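The decision logic can be sketched with a plain `Map` of arguments (in graphql-java, `getArguments()` hands the resolver such a map; the field names below are illustrative):

```java
import java.util.HashMap;
import java.util.Map;

class NullVsAbsentDemo {
    // Distinguishes "key not sent at all" from "key explicitly set to null".
    static String classify(Map<String, Object> arguments, String field) {
        if (!arguments.containsKey(field)) {
            return "ABSENT";        // leave the current value unchanged
        } else if (arguments.get(field) == null) {
            return "EXPLICIT_NULL"; // caller asked to clear the value
        }
        return "VALUE";             // caller supplied a new value
    }

    public static void main(String[] args) {
        Map<String, Object> changes = new HashMap<>();
        changes.put("email", null);        // email explicitly nulled
        changes.put("login", "new_login"); // login updated; other fields omitted
        System.out.println(classify(changes, "email")); // EXPLICIT_NULL
        System.out.println(classify(changes, "login")); // VALUE
        System.out.println(classify(changes, "phone")); // ABSENT
    }
}
```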
Upvotes: 3 [selected_answer]<issue_comment>username_2: Looks like deserialization from query/variables is handled by fasterxml Jackson, and that's proper place to deal with the issue, otherwise it becomes too complex: check every field? nested?
So: UserChanges.java should look like this:
```
class UserChanges {
// SHOULD NOT HAVE ALL ARGUMENT CONSTRUCTOR!
Optional<String> login;
Optional<String> email;
... getters & setters
}
```
in this case deserializer will use setters, ONLY FOR PROVIDED FIELDS!
And `{"login":null}` will become:
```
UserChanges.login = Optional.empty
UserChanges.email = null
```
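A runnable sketch of the resulting tri-state (the fields are assigned by hand here to stand in for what the Jackson deserializer would do):

```java
import java.util.Optional;

class UserChangesDemo {
    // null field       -> key was never provided in the JSON
    // Optional.empty() -> key was provided as explicit null
    // Optional.of(x)   -> key was provided with a value
    Optional<String> login;
    Optional<String> email;

    static String state(Optional<String> field) {
        if (field == null) return "NOT_PROVIDED";
        return field.isPresent() ? "VALUE:" + field.get() : "EXPLICIT_NULL";
    }

    public static void main(String[] args) {
        UserChangesDemo changes = new UserChangesDemo();
        changes.login = Optional.empty(); // as if deserialized from {"login": null}
        // changes.email left untouched: the key was missing from the JSON
        System.out.println(state(changes.login)); // EXPLICIT_NULL
        System.out.println(state(changes.email)); // NOT_PROVIDED
    }
}
```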
Upvotes: 0
|
2018/03/15
| 1,648 | 4,510 |
<issue_start>username_0: I have setup GitLab runner on winserver 2016.
Everything works fine, except output of runner on gitlab.
Locale of winserver is RU.
I'm trying to build projects with MSBUILD, which outputs russian characters:
`Checking out e5ec41d1 as release-2...
Skipping Git submodules setup
$ echo "начинается билд %PROJECT_NAME%"
"начинается билд PEPSolution"
$ echo "Релизная сборка... "
"Релизная сборка... "
$ "C:\Program Files ^(x86^)\MSBuild\14.0\Bin\amd64\MSBuild.exe" /consoleloggerparameters:ErrorsOnly /maxcpucount /nologo /property:Configuration=Release /verbosity:quiet "%PROJECT_NAME%.sln"
C:\Program Files (x86)\MSBuild\14.0\bin\amd64\Microsoft.Common.CurrentVersion.targets(2398,5): error MSB3091: ������ �� �믮�����, ⠪ ��� �� �����㦥�`
Russian symbols from the `yml` file are displayed correctly, but the output of MSBUILD is wrong.
So questions are:
1. How to make it to show in correctly?
2. May be I violate some best practices?
Regards<issue_comment>username_1: Found finaly solution!
just add
`- chcp 65001`
into yml file before calling msbuild
it tells change default codepage of cmd to utf8
Upvotes: 1 <issue_comment>username_2: Add before "stages:" command "- CHCP 65001" in "before\_script:"
```
before_script:
- CHCP 65001
stages:
- build
- test
- deploy
...
```
Upvotes: 4 [selected_answer]<issue_comment>username_3: I have same issue (Windows 10 Home for one language - russian, v.1903, build 18362.535).
Result of gitlab runner:
```
1 Running with gitlab-runner 12.6.0 (ac8e767a)
2 on gitlab-unity-runner vzC5L735
3 Using Shell executor...
4 Running on DESKTOP-LOSJ2JN...
5 Fetching changes with git depth set to 50...
6 & : ��� "git" �� ��ᯮ����� ��� ��� ����������, �㭪樨, 䠩�� �業���� ��� �믮��塞�� �ணࠬ��. ������ �ࠢ��쭮�
7 �� ����ᠭ�� �����, � ⠪�� ����稥 � �ࠢ��쭮��� ����, �� 祣� ��������� �������.
8 C:\WINDOWS\TEMP\build_script960183957\script.ps1:163 ����:3
9 + & "git" "config" "-f" "C:\\GitLab-Runner\builds\vzC5L735\0\ga ...
10 + ~~~~~
11 + CategoryInfo : ObjectNotFound: (git:String) [], CommandNotFoundException
12 + FullyQualifiedErrorId : CommandNotFoundException
13
14 cd : �� 㤠���� ����� ���� "C:\\GitLab-Runner\builds\vzC5L735\0\\test_unity_ci_project", ⠪ ��� �� ��
15 ����������.
16 C:\WINDOWS\TEMP\build_script063808752\script.ps1:159 ����:1
17 + cd "C:\\GitLab-Runner\builds\vzC5L735\0\\test_unity_c ...
18 + ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
19 + CategoryInfo : ObjectNotFound: (C:\\Git...nity_ci_project:String) [Set-Location], ItemNotFoundE
20 xception
21 + FullyQualifiedErrorId : PathNotFound,Microsoft.PowerShell.Commands.SetLocationCommand
22
23 ERROR: Job failed: exit status 1
```
I've set the language for non-Unicode programs:
Control panel -> Regional standards -> Additional -> Language for non-Unicode programs -> Change system language -> change to 'English (USA)' -> reboot.
**scr1**
[](https://i.stack.imgur.com/yORpg.png)
**scr2**
[](https://i.stack.imgur.com/XePuw.png)
After that I've got correct gitlab runner message:
```
1 Running with gitlab-runner 12.6.0 (ac8e767a)
2 on gitlab-unity-runner vzC5L735
3 Using Shell executor...
5 Running on DESKTOP-LOSJ2JN...
7 Fetching changes with git depth set to 50...
8 & : The term 'git' is not recognized as the name of a cmdlet, function, script file, or operable program. Check the spe
9 lling of the name, or if a path was included, verify that the path is correct and try again.
10 At C:\WINDOWS\TEMP\build_script179741393\script.ps1:163 char:3
11 + & "git" "config" "-f" "C:\\GitLab-Runner\builds\vzC5L735\0\ga ...
12 + ~~~~~
13 + CategoryInfo : ObjectNotFound: (git:String) [], CommandNotFoundException
14 + FullyQualifiedErrorId : CommandNotFoundException
15
17 cd : Cannot find path 'C:\\GitLab-Runner\builds\vzC5L735\0\\test_unity_ci_project' because it does not
18 exist.
19 At C:\WINDOWS\TEMP\build_script677283324\script.ps1:159 char:1
20 + cd "C:\\GitLab-Runner\builds\vzC5L735\0\\test_unity_c ...
21 + ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
22 + CategoryInfo : ObjectNotFound: (C:\\Git...nity_ci_project:String) [Set-Location], ItemNotFoundE
23 xception
24 + FullyQualifiedErrorId : PathNotFound,Microsoft.PowerShell.Commands.SetLocationCommand
25
27 ERROR: Job failed: exit status 1
```
Upvotes: 0
|
2018/03/15
| 841 | 2,697 |
<issue_start>username_0: How do I extract all the wanted words in a string and count them in C#?
here is the example
These are the words I want to extract: one, two, three
This is the given string: One times two plus one equals three.
The result should display one two one three and 4
Many thanks in advance<issue_comment>username_1: I don't think this is exactly what you ask for, but here is my code:
```
public static int ExtractWordsOutOfString(this string s, List<string> filter)
{
int ret = 0;
string[] s1 = s.Split(' ');
foreach (string eachWord in s1)
{
foreach (string eachFilter in filter)
{
if (eachWord == eachFilter)
ret++;
}
}
return ret;
}
```
and you can use it like :
```
string k = "one times two plus one equals three";
List<string> localfilter = new List<string>();
localfilter.Add("one");
localfilter.Add("two");
localfilter.Add("three");
localfilter.Add("four");
Console.WriteLine(k.ExtractWordsOutOfString(localfilter));
```
Upvotes: 0 <issue_comment>username_2: It's fairly easy. You can do it in many ways; however, I have chosen `Split` and LINQ.
[String.Split Method](https://msdn.microsoft.com/en-us/library/system.string.split(v=vs.110).aspx)
>
> Returns a string array that contains the substrings in this instance
> that are delimited by elements of a specified string or Unicode
> character array.
>
>
>
[Enumerable.Select Method (IEnumerable<TSource>, Func<TSource, TResult>)](https://msdn.microsoft.com/en-us/library/bb548891(v=vs.110).aspx)
>
> Projects each element of a sequence into a new form.
>
>
>
```
var myList = new List<string>
{
"one",
"two",
"three"
};
var input = "One times two plus one equals three";
var inputList = input.Split(new []{' '},StringSplitOptions.RemoveEmptyEntries)
.Select(x => x.ToLower());
var result = inputList.Where(x => myList.Contains(x.ToLower()))
.ToList();
Console.WriteLine(string.Join(", ", result));
Console.WriteLine(result.Count());
```
[**See the demo here**](https://dotnetfiddle.net/O0RMEz)
**Updated for mjwills' comment**
>
> Would that match One?
>
>
>
To make sure matching is truly case-insensitive:
```
myList.Contains(x.ToLower())
```
**Update 2**
Or as mjwills pointed out again
>
> You could even consider using a case insensitive HashSet to make
> Contains faster - removing the need for the ToLower
>
>
>
```
var set = new HashSet<string>(StringComparer.OrdinalIgnoreCase)
{
"one",
"two",
"three"
};
var input = "One times two plus one equals three";
var inputList = input.Split(new []{' '},StringSplitOptions.RemoveEmptyEntries).Select(x => x.ToLower());
var result = inputList.Where(x => set.Contains(x)).ToList();
Console.WriteLine(string.Join(", ", result));
Console.WriteLine(result.Count());
```
Upvotes: 1
|
2018/03/15
| 783 | 2,834 |
<issue_start>username_0: I have a List containing an object like this:
```
List<unit> unitlist = new List<unit>();
```
With unit beeing initialized like this:
```
public class unit
{
public string[] records;
}
```
I then use a variable to add into the list:
```
var temp = new unit();
temp.records = csv.GetFieldHeaders(); // using lumenworks framework to read a csv
unitlist.Add(temp);
```
When I now override temp with a new item line from the csv, the entry in the list unitlist is also changed:
```
while (csv.ReadNextRecord())
{
for (int i = 0; i < csv.FieldCount; i++)
{
// Read the csv entry in the temp variable
temp.records[i] = csv[i];
}
// check for specific field, and write to list
if (temp.records[8] == "Ja")
unitlist.Add(temp);
}
```
When I now check unitlist, all the entries are the last read line from the csv, because they all are changed when the temp-variable changes. Why is that the case? How can I separate the List unitlist from the variable temp?<issue_comment>username_1: Because you're using the same bucket to store things in.
If you create `temp` every time, this should fix your problem
```
var header = new unit();
header.records = csv.GetFieldHeaders();
unitlist.Add(header);
...
while (csv.ReadNextRecord())
{
var temp = new unit();
temp.records = new string[header.records.Length];
for (int i = 0; i < csv.FieldCount; i++)
{
// Read the csv entry in the temp variable
temp.records[i] = csv[i];
}
// check for specific field, and write to list
if (temp.records[8] == "Ja")
unitlist.Add(temp);
}
```
Upvotes: 3 [selected_answer]<issue_comment>username_2: You need to create temp object in each iteration like below. Please try this and check.
```
while (csv.ReadNextRecord())
{
var temp = new unit();
temp.records = csv.GetFieldHeaders();
for (int i = 0; i < csv.FieldCount; i++)
{
// Read the csv entry in the temp variable
temp.records[i] = csv[i];
}
// check for specific field, and write to list
if (temp.records[8] == "Ja")
unitlist.Add(temp);
}
```
Upvotes: 0 <issue_comment>username_3: When you create the `temp` variable, it is referencing a location in memory where your object data is allocated. When you add it to `unitlist`, a reference is added to the list that points to the same location in memory.
Now, when you change `temp.records[i]`, it is *updated* in that same memory location. So, you end up with a list of items, all pointing to the same object in memory, containing the last `records` in the CSV file.
Simply adding a `temp = new unit();` at the beginning of your `while` loop will cause each iteration to allocate a new object, with a new memory location, and have `temp` reference it.
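The same reference semantics can be shown in a runnable sketch (written in Java, whose object references behave like C#'s class references in this respect; the `Unit` class is a stand-in for the question's `unit`):

```java
import java.util.ArrayList;
import java.util.List;

class AliasingDemo {
    static class Unit { String[] records; }

    // Reusing one object: every list entry aliases the same memory, so all
    // entries end up holding the last record that was written.
    static boolean allEntriesShared() {
        List<Unit> list = new ArrayList<>();
        Unit temp = new Unit();
        temp.records = new String[1];
        for (String row : new String[] {"first", "second", "last"}) {
            temp.records[0] = row;
            list.add(temp);
        }
        return list.get(0).records[0].equals("last")
            && list.get(1).records[0].equals("last");
    }

    // Allocating a new object per iteration keeps each entry independent.
    static boolean entriesIndependent() {
        List<Unit> list = new ArrayList<>();
        for (String row : new String[] {"first", "second", "last"}) {
            Unit temp = new Unit();
            temp.records = new String[] { row };
            list.add(temp);
        }
        return list.get(0).records[0].equals("first")
            && list.get(2).records[0].equals("last");
    }

    public static void main(String[] args) {
        System.out.println(allEntriesShared());   // true: all entries alias one object
        System.out.println(entriesIndependent()); // true: one object per entry
    }
}
```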
Upvotes: 0
|
2018/03/15
| 1,019 | 3,691 |
<issue_start>username_0: I have a query which returns 0 rows but executing the same query using pgadmin or dbeaver returns a result set with rows.
**Ive noticed this because i have a postgresql function which should return rows but didnt. After that i started debugging.**
Other queries are not affected.
I tried it using knexjs (`knex.raw()`) and pg (`client.query()`).
Of course, I checked the connection a dozen times using different queries and by reading the connection string.
This is really strange.
The whole point here is, why does this work in dbeaver and not in my code. Is this a drivers thing?
---------------------------------------------------------------------------------------------------
### Queries
```
select id from (
select id, started_at from queue
where finished_at is null and started_at is not null order by id
) d
where date_part('minute',age(now(), started_at)) >= 5
```
I played around a lot and found that the following queries do work.
```
select id from queue
where date_part('minute',age(now(), started_at)) >= 5;
```
and
```
select id from (
select id, started_at from queue
where finished_at is null and started_at is not null order by id
) d;
```
Update
------
### not working
```
const test = await this.knexInstance.raw(`
select id from (
select id, started_at from queue
where finished_at is null and started_at is not null order by id
) d
where date_part('minute',age(now(), started_at)) >= 5
`);
console.log(test.rows); // => []
console.log(test.rows.length); // => 0
```
### working
```
const test = await this.knexInstance.raw(`
select id from queue
where date_part('minute',age(now(), started_at)) >= 5;
`);
console.log(test.rows); // => Array(48083) [Object, Object, Object, Object, Object, Object, Object, Object, …]
console.log(test.rows.length); // => 48083
```
|
2018/03/15
| 828 | 2,618 |
<issue_start>username_0: I am using kie-api 7.6.0.
When I am trying to get `KieServices.Factory.get()`, it's returning `null`.
My Java project is a Gradle project.
What can be the cause?
```
final ReleaseIdImpl releaseId = new ReleaseIdImpl(RULE_PACKAGE, RULE_NAME,
RULE_VERSION);
final KieServices ks = KieServices.Factory.get();
final KieContainer kContainer = ks.newKieContainer(releaseId);
```
I am getting ks as null and thus NullPointerException at ks.newKieContainer(releaseId);
I have added dependencies for the following jars
```
drools-compiler-6.5.0.Final.jar
drools-core-6.5.0.Final.jar
drools-decisiontables-6.5.0.Final.jar
drools-jsr94-6.5.0.Final.jar
drools-persistence-jpa-6.5.0.Final.jar
drools-templates-6.5.0.Final.jar
org.drools.eclipse-6.5.0.Final.jar
kie-maven-plugin-6.5.0.Final.jar
kie-api-6.5.0.Final.jar
kie-ci-6.5.0.Final.jar
kie-internal-6.5.0.Final.jar
```<issue_comment>username_1: I believe it's the same cause as here: [Drools 7.4.1 kieservices.factory.get() returns null](https://stackoverflow.com/questions/47556233/drools-7-4-1-kieservices-factory-get-returns-null)
Here is my answer.
---
We had the same issue when trying to use Drools in our webserver with
embedded Grizzly http server.
We also needed to add the drools-compiler dependency, but that alone does not fix it.
Because there are multiple kie.conf files on the class path from the different dependencies, the uber-jar ends up having just one, and then definitions for classes to load are missing.
Besides these entries from the drools-core kie.conf:
```
org.kie.api.io.KieResources = org.drools.core.io.impl.ResourceFactoryServiceImpl
org.kie.api.marshalling.KieMarshallers = org.drools.core.marshalling.impl.MarshallerProviderImpl
org.kie.api.concurrent.KieExecutors = org.drools.core.concurrent.ExecutorProviderImpl
```
we added these lines from drools-compiler to our uber-jar **kie.conf**:
```
org.kie.api.KieServices = org.drools.compiler.kie.builder.impl.KieServicesImpl
org.kie.internal.builder.KnowledgeBuilderFactoryService = org.drools.compiler.builder.impl.KnowledgeBuilderFactoryServiceImpl
```
Otherwise the KieServices were not loaded and KieServices.Factory.get() returned null.
We are modifying the built jar afterwards using
```
jar uf myjar.jar META-INF/kie.conf
```
to modify the contained kie.conf file. We couldn't find a clean integrated solution with Maven. Any suggestions welcome...
Upvotes: 1 <issue_comment>username_2: You're missing the drools-compiler and drools-core dependencies in pom.xml.
Upvotes: 0
|
2018/03/15
| 265 | 999 |
<issue_start>username_0: I've increased the "Right margin (columns)" in File->Settings->Editor->Code Style from default 100 to 140. Unfortunately the margin is reset after each time I restart Android Studio. I also tried to export and import my settings but this does not prevent the right margin to be reset.
Hopefully someone can tell me how to save the margin.<issue_comment>username_1: If you select the **Default** scheme in Code Style, the value will keep resetting to `100`.
Select **Project** from the Scheme dropdown and apply the setting; that works.
If you set it on **Default**, it is reset to its default value.
[](https://i.stack.imgur.com/LmXAh.png)
Upvotes: 4 [selected_answer]<issue_comment>username_2: Just remove the following line from `gradle.properties`:
```
kotlin.code.style=official
```
Don't forget to re-apply the code style settings in the IDE settings or in `/.idea/codeStyles/`.
Upvotes: 2
|
2018/03/15
| 879 | 3,743 |
<issue_start>username_0: I have a rental website, and when someone wants to make an offer he has 7 minutes to pay; if he doesn't pay, the offer will be deleted.
I have a timer on my form to check the time, and when the timer reaches 0:00 and the user hasn't paid, his offer is deleted.
My question is: how can I check if the user logged out? I mean the user can exit the site (by clicking X) and his session will end.
I want to delete his rent offer if the user quits the website.
Thanks for the helpers.<issue_comment>username_1: First, do not fulfill offers that are older than your timeout (7 mins); I'm assuming that you have an OfferCreatedDate timestamp. Second, create a job that will clean all unfulfilled and expired offers. Hope this helps.
Upvotes: 0 <issue_comment>username_2: You cannot. Not reliably. The user will not send you a nice message when he does not do something.
You can program your site to send you a signal if something *happens*, but you need to know when something *doesn't* happen. And it can "not happen" in multiple ways, many of them not allowing a signal to be transmitted.
Just imagine your user's train goes into a tunnel or he kills his browser, his computer crashes or cell phone loses battery power. All events that happen daily and all of them will *not* notify you nicely. They cannot.
So what you need to do is figure out a way to delete all obsolete orders. Either on a timer in an independent service, or maybe before a user places any order. But you need do that in a place independent of the user playing nice with your frontend app.
---
One way of handling this would be to save the date and time of creation with every offer you give out. Every time you check available resources and create a new offer for a user, delete all offers that are older than your limit before giving out new offers, thereby freeing up the blocked resources.
Upvotes: 0 <issue_comment>username_3: For this scenario, I don't think it's a good idea to rely on browser events, such as `onunload` & `onbeforeunload`. The user may have opened more than one tab, so closing one tab would remove the offer. Furthermore, these events will also fire if the user clicks the back button. So don't rely on browser events for this.
(But, if the user clicked on LogOut then you have enough information to delete the offer.)
Perhaps you can use following approach to handle your original problem:
1. When user create a new offer store these details in the database with two extra columns: `OfferCreatedUtcDateTime` and `PaymentCompleted`(which should be `false`).
2. If the user completed payment successfully, you can set `PaymentCompleted` to `true`.
3. Then you can use one of the following two options:
Option 1:
Create a [windows service](https://learn.microsoft.com/en-us/dotnet/framework/windows-services/) which will check the above database columns. If `PaymentCompleted == false` and `OfferCreatedUtcDateTime + offer valid period < CurrentUtcDateTime` (i.e. the offer has expired), then you can delete the offer.
Option 2:
As mentioned by @username_2 in his answer, every time a user searches for a resource you can ignore or delete the offers which satisfy the condition mentioned in Option 1.
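The expiry test used by both options can be sketched in plain Java; the class and method names here are hypothetical, and the 7-minute valid period comes from the question:

```java
import java.time.Duration;
import java.time.Instant;

public class OfferExpiry {
    static final Duration VALID_PERIOD = Duration.ofMinutes(7);

    // An offer is stale when payment never completed and its creation
    // time plus the valid period lies in the past.
    public static boolean shouldDelete(Instant createdUtc, boolean paymentCompleted, Instant nowUtc) {
        return !paymentCompleted && createdUtc.plus(VALID_PERIOD).isBefore(nowUtc);
    }

    public static void main(String[] args) {
        Instant now = Instant.parse("2018-03-15T12:00:00Z");
        // Created 10 minutes ago and unpaid: delete.
        System.out.println(shouldDelete(Instant.parse("2018-03-15T11:50:00Z"), false, now)); // true
        // Created 2 minutes ago: still within the valid period.
        System.out.println(shouldDelete(Instant.parse("2018-03-15T11:58:00Z"), false, now)); // false
    }
}
```

The same predicate works whether it runs inside a background service (Option 1) or inline before each search (Option 2).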
Hope this helps.
Upvotes: 1 <issue_comment>username_4: What about not focusing on how to react when the user's session ends, but instead checking other users' timers whenever someone creates a new reservation?
You can still run the countdown for the connected user and stop the offer when it reaches 0. For the case where the user closes the window or leaves, whenever another user creates a reservation you first check whether there are timers older than 7 minutes still alive; if so, release them, so the user currently making a reservation can take the resource that has just become available again.
Upvotes: 0
|
2018/03/15
| 254 | 973 |
<issue_start>username_0: Does anybody know how to reverse the layout in `GridLayoutManager` and arrange items like Instagram does? As you know, `setStackFromEnd` doesn't work, and `setReverseLayout` reverses the list, but not properly: the scrollbar goes down automatically. I want a reversed layout with the scrollbars up.<issue_comment>username_1: To keep your *scrollbars up*, scroll to the top (in the reverse arrangement).
```
recyclerView.smoothScrollToPosition(0);
```
**OR instead** of `reversing GridLayoutManager` you can try the below code to keep the scrollBars up:
```
List reverseList = Lists.reverse(yourList);
```
pass it to the adapter:
```
YourAdapter adapter = new YourAdapter(reverseList);
```
Upvotes: 1 <issue_comment>username_2: Use the below code.
```
Collections.reverse(yourList); // add your ArrayList in "yourList"
```
Don't do anything with GridLayoutManager because GridLayoutManager does not support stackFromEnd(true).
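As a plain-Java illustration (outside Android, helper name hypothetical): `Collections.reverse` mutates the list in place, so if the adapter must not see the original order change, reverse a copy, which is roughly what Guava's `Lists.reverse` in the first answer gives you as a view:

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.Collections;
import java.util.List;

public class ReverseDemo {
    // Reverses a copy, leaving the caller's list untouched.
    public static <T> List<T> reversedCopy(List<T> src) {
        List<T> copy = new ArrayList<>(src);
        Collections.reverse(copy); // in-place reversal, but on the copy
        return copy;
    }

    public static void main(String[] args) {
        List<String> original = new ArrayList<>(Arrays.asList("a", "b", "c"));
        List<String> reversed = reversedCopy(original);
        System.out.println(original); // [a, b, c] - unchanged
        System.out.println(reversed); // [c, b, a]
    }
}
```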
I hope it is helpful.
Upvotes: 0
|
2018/03/15
| 274 | 1,002 |
<issue_start>username_0: How can I get the Object ID for a dynamically created Label? I need it to translate the Label text into other languages.
[](https://i.stack.imgur.com/PRu4d.png)<issue_comment>username_1: The Object ID is created by Interface Builder and used internally; there is no `UIView` property that exposes it. Hence you can't access it via code (dynamically), regardless of whether the view is created in a storyboard or at run time.
Upvotes: 2 <issue_comment>username_2: We can translate text in the app using a strings file; in the strings file we store the strings we want to translate.
Ex:
Localizable.strings (Hindi)
```
"translate" = "अनुवाद करना";
```
Localizable.strings (English)
```
"translate" = "translate";
```
When the app runs, it automatically picks up the translations for the selected language from the file.
`Ex.`
`[[Localisator sharedInstance] setLanguage:language];`
Upvotes: 2 [selected_answer]
|
2018/03/15
| 449 | 1,495 |
<issue_start>username_0: I have six text boxes and I want to count the number of filled boxes
```
{{counter}}
counter: number = 0;
counterfunc(tb){
// need help here
if (tb.value != '') {
this.counter++;
}
}
```
I found this plunker [plunkr](https://plnkr.co/edit/H7IscAC60rSwrkVSSVnm?p=preview) but it is for checkboxes. How can I count the number of filled text boxes? The count should also decrease by one if the user empties a box. Thank you<issue_comment>username_1: I don't see the point of declaring several variables for a component (well, input) that behaves exactly the same in any case. You should declare a list of inputs, not a variable for every input.
Use the children decorator for that
```
{{filledCount}}
```
In your TS
```
filledCount: number = 0;
@ViewChildren('textboxes') textboxes: QueryList<ElementRef>;
input() { this.filledCount = this.textboxes.filter(t => t.nativeElement.value).length; }
```
Here is a **[working stackblitz](https://stackblitz.com/edit/angular-gq8a6i?file=app%2Fapp.component.ts)**.
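Language aside, the rule the accepted answer recomputes on every input event is simply counting the non-empty values; a plain-Java sketch (hypothetical names):

```java
import java.util.Arrays;

public class FilledCount {
    // Count the boxes whose current value is non-empty,
    // mirroring `filter(t => t.nativeElement.value).length` above.
    public static long filledCount(String[] values) {
        return Arrays.stream(values)
                .filter(v -> v != null && !v.isEmpty())
                .count();
    }

    public static void main(String[] args) {
        System.out.println(filledCount(new String[]{"a", "", "b", null, "c", ""})); // 3
    }
}
```

Recomputing the count from scratch on every change avoids the increment/decrement bookkeeping of the second answer.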
Upvotes: 2 [selected_answer]<issue_comment>username_2: try this and just take this as a key
```
counter: number = 0;
counterfunc(tb){
if (tb.value != '' && tb.value.length == 1) {
this.counter++;
}
if(tb.value == '')
{
this.counter--;
}
}
```
Upvotes: 0 <issue_comment>username_3: Create a form and add validation to the form. On every change count the valid or invalid fields
Upvotes: 0
|
2018/03/15
| 627 | 2,139 |
<issue_start>username_0: I want to fix some text of a string in the center. How can I do that?
For example, take a look on this photo
[](https://i.stack.imgur.com/EE5R1.jpg)
I have a string like that. Now I want some words (the heading) of my string to be shown in a `TextView` centered horizontally, but the other words should remain as usual. How can I do that?
I have tried the following code, but it doesn't work on devices of various dimensions.
```
String string = " SYRIA " +
        "\n\nSyria oh Syria why do you bleed?" +
        "\nBrother fights brother without thought or need" +
        "\nRuled by a tyrant for so many years" +
        "\nAnd now the split blood is washed away by tears\n";
```<issue_comment>username_1: You can use `Html.fromHtml`. Using this you only need a single `TextView`. Just do this; it should work:
```
// The HTML tags inside this string were lost in extraction;
// <b> and <br> are assumed here to restore the shown result.
your_textview.setText(Html.fromHtml("<b>SYRIA</b>" +
        "<br>Syria oh Syria why do you bleed?" +
        "<br>Brother fights brother without thought or need" +
        "<br>Ruled by a tyrant for so many years" +
        "<br>And now the split blood is washed away by tears"));
```
[](https://i.stack.imgur.com/Xikku.png)
Upvotes: 0 <issue_comment>username_2: ```
```
Upvotes: 0 <issue_comment>username_3: You can either use 2 different TextViews or, if there is a single TextView, you can use HTML or a SpannableString. Spannable can be used as follows:
```
TextView resultView = new TextView(this);
final String SyriaText = "Syria";
final String OtherText = "Other Text";
final String result = SyriaText + " " + OtherText;
final SpannableString finalResult = new SpannableString(result);
finalResult.setSpan(new AlignmentSpan.Standard(Layout.Alignment.ALIGN_CENTER), 0, SyriaText.length(), Spannable.SPAN_EXCLUSIVE_EXCLUSIVE);
resultView.setText(finalResult);
```
Also you can style it with Spannable. You can learn more from : <https://developer.android.com/reference/android/text/Spannable.html>
Upvotes: 2 [selected_answer]
|
2018/03/15
| 745 | 2,476 |
<issue_start>username_0: I have a `XIB` which is of this size while designing: (width: 300, height: 100). It has 3 labels which will expand based on their content. I am simply loading that `XIB` and adding it to `self.view`. Is there a way that I can easily increase the `XIB` view height as the `UILabel`s grow?
I have successfully been able to make my `UILabel`s grow with their content.
I am using Auto Layout for the `UILabel`s.
Constraints & settings:
XIB View:
[](https://i.stack.imgur.com/WX42X.png)
Top Label:
[](https://i.stack.imgur.com/KnBaq.png)
Center Label:
[](https://i.stack.imgur.com/FIUa0.png)
Bottom Label:
[](https://i.stack.imgur.com/5myLM.png)<issue_comment>username_1: * Select the bottom most label or the one you think will define the height of your super view.
* Add bottom constraint of that label with super view.
* If you see any `Missing Constraints` error in xib file, just change the bottom constraint relation to `Greater Than or Equal` from `Equal`.
You are good to go.
Upvotes: 1 <issue_comment>username_2: I found some tutorials
1. [Self-sizing table view cell](https://www.raywenderlich.com/129059/self-sizing-table-view-cells)
2. [Autolayout, dynamically resize UILabel](http://jayeshkawli.ghost.io/using-autolayout-to-dynamically-resize-uilabel/)
My summary
----------
A view's layout needs following constraints to decide his layout.
1. **Size**
width, and height
2. **Position**
x-axis, and y-axis position
And `UILabel`'s [intrinsic content size](https://developer.apple.com/library/content/documentation/UserExperience/Conceptual/AutolayoutPG/ViewswithIntrinsicContentSize.html) can decide its `Height`. So Auto Layout only needs the `Position` and `Width` assigned explicitly.
After assigning `text` to a `UILabel`, call its `invalidateIntrinsicContentSize` to re-decide the `Height` based on the content. Then, if its parent view (e.g. `UITableViewCell` or `UIView`) decides its own height from the child `UILabel`'s height, the parent's height will be dynamic as well.
The parent `UIView` may also need to call `setNeedsLayout` or `setNeedsUpdateConstraints` after calling `invalidateIntrinsicContentSize`.
Upvotes: 0
|
2018/03/15
| 541 | 2,000 |
<issue_start>username_0: According to the GitHub link below I have to generate Eclipse metadata (I'm currently using Eclipse), but I don't know what this means or how to do it. It just says `./gradlew eclipse` and I don't know how I can import it. Can anyone please share, step by step and in detail, how to generate it (what button should I click in Eclipse, or what software should I run)?
here is the link :
<https://github.com/joshlong/spring-integration-mqtt>
|
2018/03/15
| 1,059 | 3,767 |
<issue_start>username_0: I have made four check boxes. When I check them, they should be saved in an `ArrayList`, and when I deselect them they should be removed from the list. I have implemented the checkboxes, plus a Select all button to select all checkboxes and a Deselect button; on deselect all, the values of the checkboxes should be removed from the list.
---
>
> **My code is**
>
>
>
```
public class SearchFragment extends BaseFragment {
private static final String TAG = null;
String[] name={ "Chicago", "Los Angeles", "Japan", "Europe"};
Object arr;
ArrayList<String> arrayList = new ArrayList<>();
TextView tv2;
CheckBox abc,abc1,a,b,c;
public static SearchFragment newInstance () {
return new SearchFragment();
}
@Override
public View onCreateView (LayoutInflater inflater, ViewGroup container,
Bundle savedInstanceState) {
View view = inflater.inflate(R.layout.fragment_search, container, false);
tv2=(TextView) view.findViewById(R.id.tv2);
abc=(CheckBox) view.findViewById(R.id.abc);
a=(CheckBox) view.findViewById(R.id.a);
b=(CheckBox) view.findViewById(R.id.b);
c=(CheckBox) view.findViewById(R.id.c);
return view;
}
@Override
public void onViewCreated (View view, @Nullable Bundle savedInstanceState) {
super.onViewCreated(view, savedInstanceState);
ButterKnife.bind(this, view);
tv2.setOnClickListener(new View.OnClickListener() {
@Override
public void onClick(View v) {
String value=tv2.getText().toString().trim();
if (value.equals("Select all")){
abc.setChecked(true);
a.setChecked(true);
b.setChecked(true);
c.setChecked(true);
tv2.setText("UnSelect all");
arrayList.addAll(Arrays.asList(name));
Log.e("SELECTED","SELECTED "+arrayList.size());
}
else {
abc.setChecked(false);
b.setChecked(false);
a.setChecked(false);
c.setChecked(false);
tv2.setText("Select all");
if (arrayList.size() != 0) {
arrayList.clear();
}
Log.e("NOT_SELECTED","NOT_SELECTED");
}
}
});
abc.setOnCheckedChangeListener(new CompoundButton.OnCheckedChangeListener() {
@Override
public void onCheckedChanged(CompoundButton buttonView, boolean isChecked) {
if (isChecked){
arrayList.addAll(Arrays.asList("Chicago"));
}
}
});
|
2018/03/15
| 830 | 3,039 |
<issue_start>username_0: Consider the following buildscript where
* `addToMyConfig` adds a dependency to a [Configuration](https://docs.gradle.org/current/dsl/org.gradle.api.artifacts.Configuration.html) named `myConfig`
* `useMyConfig` consumes the `myConfig` [Configuration](https://docs.gradle.org/current/dsl/org.gradle.api.artifacts.Configuration.html) and forces it to [resolve()](https://docs.gradle.org/current/javadoc/org/gradle/api/artifacts/Configuration.html#resolve--)
```groovy
configurations {
myConfig
}
task addToMyConfig {
doLast {
println "Doing some work"
dependencies {
myConfig 'log4j:log4j:1.2.17'
}
}
}
task useMyConfig {
doLast {
println "myConfig = $configurations.myConfig.files"
}
}
```
Question: Is there a way to fire `addToMyConfig` every time `configurations.myConfig` is resolved without adding a task dependency where `useMyConfig` depends on `addToMyConfig`?
I would like to say:
```groovy
configurations.myConfig.builtBy addToMyConfig
```
**I do NOT want to say**
```groovy
useMyConfig.dependsOn addToMyConfig
```
I want to avoid `useMyConfig.dependsOn addToMyConfig` because there may be many tasks which consume `configurations.myConfig`
Note: There is a [ConfigurableFileCollection.builtBy(Object... tasks)](https://docs.gradle.org/current/javadoc/org/gradle/api/file/ConfigurableFileCollection.html#builtBy-java.lang.Object...-) method which would solve my problem, if only it existed on the [Configuration](https://docs.gradle.org/current/javadoc/org/gradle/api/artifacts/Configuration.html) interface (configuration extends [FileCollection](https://docs.gradle.org/current/javadoc/org/gradle/api/file/FileCollection.html))<issue_comment>username_1: Do you really need a `task` to populate the `configuration` with dependencies?
[`Configuration#withDependencies`](https://docs.gradle.org/4.6/javadoc/org/gradle/api/artifacts/Configuration.html#withDependencies-org.gradle.api.Action-) can be used to add dependencies during resolution. Like this:
```
configurations {
myConfig
}
configurations.myConfig.withDependencies {deps ->
println "Resolving dependencies"
dependencies {
myConfig "log4j:log4j:1.2.17"
}
}
task useMyConfig {
doLast {
println "myConfig = $configurations.myConfig.files"
}
}
```
Upvotes: 2 <issue_comment>username_2: Not possible out-of-the-box. A task runs at most once.
When you call configurations.myConfig, you are calling a dynamic property added to the [ConfigurationContainer](https://docs.gradle.org/current/javadoc/org/gradle/api/artifacts/ConfigurationContainer.html)
Using groovy, it should be possible to override the configurationContainers' behavior via the metaClass. You would be invoking a function here, not a task.
Upvotes: 1 <issue_comment>username_3: This is what you want, I think:
```
def files = project.files(your,produced,files)
files.builtBy addToMyConfig
myConfig.dependencies.add(project.dependencies.create(files))
```
Upvotes: 1
|
2018/03/15
| 571 | 1,850 |
<issue_start>username_0: I'm getting a syntax error when `if` is used before the `for` loop without an `else`, but no such error when `else` is present.
Here is my code:
```
data=[[45, 12],[55,21],[19, -2],[104, 20]]
retData= ['Close' if i>54 and j>7 for [i,j] in data]
# getting a syntax error here :(
return retData
```
The code below works, which has `if` and `else` prior to the `for` loop.
```
data=[[45, 12],[55,21],[19, -2],[104, 20]]
retData= ['Close' if i>54 and j>7 else 'Open' for [i,j] in data]
# No Syntax error here!!
return retData
|
2018/03/15
| 1,085 | 4,327 |
<issue_start>username_0: In my Angular4 Application, I call my Webservice which returns a response with ByteArrayContent. When I check this data in the network payload, it looks very much like a valid PDF-file. The html-string, from which the following data is generated, is valid for sure, because an online pdf converter (third party) is able to convert my html-string properly.
Generated data from the WebApi to prove it is a PDF-file
```
%PDF-1.5
4 0 obj
<>>>/Group
...
<>stream
... ... ...
startxref
23589
%%EOF
```
1. When I copy/paste this whole text into a txt-file and change the ending to .pdf, the file opens, but there is nothing displayed as it should
2. When I use the JavaScript file-saver, the file gets downloaded. But the header and footer of the pdf-file provided is full with nul-values and it can't even be opened anymore.
This is code for downloading the file
```
let myHeaders = new Headers({
    'Content-Type': 'application/json',
    'Cache-Control': 'no-cache',
    'Accept': 'application/pdf',
    'Access-Control-Allow-Origin': '*' });
let options = new RequestOptions({ headers: myHeaders });
return this.http.post(this.URL, {'htmlString': html}, options)
    .map((response: Response) => {
        let fileBlob: ArrayBuffer = response.arrayBuffer();
        let blob = new Blob([fileBlob], {type: 'application/pdf'});
        let filename = "testPdf.txt";
        FileSaver.saveAs(blob, filename);
        return response;
    });
```
In the tutorial, they use `response.blob()`, not `response.arrayBuffer()`, but when I use `.blob()`, I get the exception
>
> Error: The request body isn't either a blob or an array buffer
>
>
>
Is there any way in Angular2+, TypeScript or JavaScript, to convert the PDF-data the right way?
This is my C# code, this should usually be fine. The HtmlToPdfServiceClient-Class is generated by a WSDL-file, so this part is definitely right
```
HtmlToPdfServiceClient client = new HtmlToPdfServiceClient();
HtmlToPdfConversionRequest reqest = new HtmlToPdfConversionRequest();
reqest.Format = ResponseFormats.BASE64;
reqest.InputText = Convert.ToBase64String(System.Text.Encoding.Unicode.GetBytes((string)json["htmlString"]));
HtmlToPdfConversionResponse response = client.Conversion(reqest);
HttpResponseMessage result = new HttpResponseMessage(HttpStatusCode.OK);
result.Content = new ByteArrayContent(Convert.FromBase64String(response.ResultBase64));
result.Content.Headers.ContentType =
new MediaTypeHeaderValue("application/pdf");
client.Close();
return result;
```<issue_comment>username_1: Are you getting HTML and want to convert it to PDF? If so, try some other api for converting HTML to PDF. I recommend to have a look on "ByteScout PDF SDK". It has very simple and straight forward way to do so. you can see the code snippet.
```
// Create converter instance
using (HtmlToPdfConverter converter = new HtmlToPdfConverter())
{
// Perform conversion
converter.ConvertHtmlToPdf("sample.html", "result.pdf");
}
// Open result document in default PDF viewer
Process.Start("result.pdf");
```
Upvotes: 0 <issue_comment>username_2: I implemented this a long time ago in our Angular Application and it works perfectly for me.
You will need the following dependencies in your component:
```
import * as FileSaver from 'file-saver';
import { Http, RequestOptions, Headers, ResponseContentType } from '@angular/http';
```
And the constructor of your component:
```
constructor(private http: Http) {}
```
All the following code can be wrapped in a method and triggered with a button click:
1. Set up the url with an authorization header
```
const url = 'your/url/to/the/pdf';
const header = new Headers();
header.append('Authorization', 'your-authorization-token');
```
2. Create Request Options with the header
```
const requestOptions = new RequestOptions({
    headers: header
});
```
3. Add ResponseContentType to your Request Options
```
requestOptions.responseType = ResponseContentType.Blob;
```
4. Get your data from your API and save it with FileSaver
```
this.http.get(url, requestOptions).subscribe(response => FileSaver.saveAs(response.blob(), 'test.pdf'));
```
Upvotes: 2 [selected_answer]
|
2018/03/15
| 390 | 1,465 |
<issue_start>username_0: If I add a tap-event-listener to an instance of a Polymer component with some elements in its local DOM, the target (and source) of the event can be an element of the local DOM.
What is the best way to always get a reference to the host element, and not any of its children in the local DOM?
Example: component with local DOM
```
Foo
![]()
```
Use component and add a listener
```
...
_onTap: function(e) {
// here i need a reference to the element
// that listens to the tap (host / x-foo)
// but if i tap on the img, the target is a
// reference to the img element
}
```
Thank you
Greets,
Meisenmann<issue_comment>username_1: You can try to use
```
e.currentTarget
```
instead of
```
e.target
```
as described [here](https://developer.mozilla.org/en-US/docs/Web/API/Event/currentTarget)
You can see in the description that it "*Identifies the current target for the event, as the event traverses the DOM. It always refers to the **element to which the event handler has been attached**, as opposed to event.target which identifies the **element on which the event occurred**.*"
Upvotes: 2 <issue_comment>username_2: As suggested by Mishu in his comments, use `e.currentTarget`. However, there is another important point to note: if you wish to know how many places your event has traversed, check `composedPath()`. This method keeps track of all the places your event has traversed through!
Upvotes: 1
|
2018/03/15
| 275 | 1,123 |
<issue_start>username_0: I never got a chance to work on Impala. I have just started reading about Impala, but there is one basic point I am not clear about: Impala has its own daemons, so does it also have its own execution engine, or does it work on MapR or another execution engine?
Thanks in advance
|
2018/03/15
| 466 | 1,625 |
<issue_start>username_0: I've got a table like this
```
accountid (int)
emailaddress (varchar)
email(bit)
postal(bit)
sms(bit)
telephone(bit)
```
accountid is unique, but emailaddress could have duplicates,
```
accountid emailaddress email postal sms telephone
62626 <EMAIL> 0 1 0 1
76364 <EMAIL> 0 0 0 1
37374 <EMAIL> NULL NULL NULL NULL
```
I want to create a query that groups by the emailaddress, but which chooses the record where email is not null, if there is no not null record then any can be chosen.
In this example I want to end up with
```
accountid emailaddress email postal sms telephone
62626 <EMAIL> 0 1 0 1
76364 <EMAIL> 0 0 0 1
```
Is that possible?
Thanks
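Outside SQL, the selection rule the question describes (one row per email address, preferring a row whose `email` flag is not NULL) can be illustrated in plain Java; the class name and the example addresses are hypothetical, since the real addresses are redacted above:

```java
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class PreferNonNullEmail {
    static class Row {
        final int accountId;
        final String emailAddress;
        final Boolean emailFlag; // null models a NULL bit column

        Row(int accountId, String emailAddress, Boolean emailFlag) {
            this.accountId = accountId;
            this.emailAddress = emailAddress;
            this.emailFlag = emailFlag;
        }
    }

    // Keep one row per email address; a row with a non-null email flag
    // beats a row whose flag is NULL, otherwise the first row seen stays.
    public static Map<String, Row> pickPerEmail(List<Row> rows) {
        Map<String, Row> picked = new HashMap<>();
        for (Row r : rows) {
            Row prev = picked.get(r.emailAddress);
            if (prev == null || (prev.emailFlag == null && r.emailFlag != null)) {
                picked.put(r.emailAddress, r);
            }
        }
        return picked;
    }

    public static void main(String[] args) {
        List<Row> rows = List.of(
                new Row(62626, "a@example.com", Boolean.FALSE),
                new Row(37374, "a@example.com", null),
                new Row(76364, "b@example.com", Boolean.FALSE));
        Map<String, Row> picked = pickPerEmail(rows);
        System.out.println(picked.get("a@example.com").accountId); // 62626
        System.out.println(picked.size());                         // 2
    }
}
```

In SQL the same effect is usually achieved with a window function that orders each email group by whether `email` is NULL and keeps the first row.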
|