date | nb_tokens | text_size | content
stringlengths 10–10 | int64 60–629k | int64 234–1.02M | stringlengths 234–1.02M
---|---|---|---|
2018/03/15
| 2,111 | 6,500 |
<issue_start>username_0: Trying to set up CI for an Angular project with Jenkins.
Trying to get Jenkins to generate a build using the following (after cloning down the repo into the workspace)
```
ng build --prod && ng build --prod --app 1 --output-hashing=false && cpy ./server.js ./dist
```
Jenkins however gives me the following error
>
> wrapper script does not seem to be touching the log file in
> /var/lib/jenkins/workspace/Angular CI@tmp/durable-f993c6f2
> (JENKINS-48300: if on a laggy filesystem, consider
> -Dorg.jenkinsci.plugins.durabletask.BourneShellScript.HEARTBEAT\_CHECK\_INTERVAL=300)
>
>
>
I do not have much experience with Jenkins or setting up servers for CI in general and a Google of the message hasn't provided much clarity.
If anyone could point me in the right direction that would be great.
(can post more code if needed)
Thanks!<issue_comment>username_1: I have skimmed through all the documentation and unfortunately there seems to be no way to do this as of now. The only possible workaround is
[converting a struct to a json when querying athena](https://stackoverflow.com/questions/49081896/converting-a-struct-to-a-json-when-querying-athena)
```
SELECT
my_field,
my_field.a,
my_field.b,
my_field.c.d,
my_field.c.e
FROM
my_table
```
Or I would convert the data to json using post processing. Below script shows how
```
#!/usr/bin/env python
import io
import re
pattern1 = re.compile(r'(?<={)([a-z]+)=', re.I)
pattern2 = re.compile(r':([a-z][^,{}. [\]]+)', re.I)
pattern3 = re.compile(r'\\"', re.I)
with io.open("test.csv") as f:
headers = list(map(lambda f: f.strip(), f.readline().split(",")))
for line in f.readlines():
orig_line = line
data = []
for i, l in enumerate(line.split('","')):
data.append(headers[i] + ":" + re.sub('^"|"$', "", l))
line = "{" + ','.join(data) + "}"
line = pattern1.sub(r'"\1":', line)
line = pattern2.sub(r':"\1"', line)
print(line)
```
The output on your input data is
```
{"timestamp":1.520640777666096E9,"stats":[{"time":15.0, "mean":45.23, "var":0.31}, {"time":19.0, "mean":17.315, "var":2.612}],"dets":[{"coords":[2.4, 1.7, 0.3], "header":{"frame":1, "seq":1, "name":"hello"}}],"pos":{"x":5.0, "y":1.4, "theta":0.04}
}
```
Which is a valid JSON
[](https://i.stack.imgur.com/oMDQ9.png)
Upvotes: 4 [selected_answer]<issue_comment>username_2: The python code from @tarun almost got me there, but I had to modify it in several ways due to my data. In particular, I have:
* json structures saved in Athena as strings
* Strings that contain multiple words, and therefore need to be in between double quotes. Some of them contain "[]" and "{}" symbols.
Here is the code that worked for me, hopefully will be useful for others:
```
#!/usr/bin/env python
import io
import re, sys
pattern1 = re.compile(r'(?<={)([a-z]+)=', re.I)
pattern2 = re.compile(r':([a-z][^,{}. [\]]+)', re.I)
pattern3 = re.compile(r'\\"', re.I)
with io.open(sys.argv[1]) as f:
    headers = list(map(lambda f: f.strip(), f.readline().split(",")))
    print(headers)
    for line in f.readlines():
        orig_line = line
        # save the double quote cases, which mean there is a string with quotes inside
        line = re.sub('""', "#", orig_line)
        data = []
        for i, l in enumerate(line.split('","')):
            item = re.sub('^"|"$', "", l.rstrip())
            if (item[0] == "{" and item[-1] == "}") or (item[0] == "[" and item[-1] == "]"):
                data.append(headers[i] + ":" + item)
            else:  # we have a string
                data.append(headers[i] + ": \"" + item + "\"")
        line = "{" + ','.join(data) + "}"
        line = pattern1.sub(r'"\1":', line)
        line = pattern2.sub(r':"\1"', line)
        # restate the double quotes to single ones, once inside the json
        line = re.sub("#", '"', line)
        print(line)
```
Upvotes: 1 <issue_comment>username_3: This method is not by modifying the Query.
Its by Post Processing For Javascript/Nodejs we can use the npm package *athena-struct-parser*.
**Detailed Answer with Example**
<https://stackoverflow.com/a/67899845/6662952>
Reference - <https://www.npmjs.com/package/athena-struct-parser>
Upvotes: 0 <issue_comment>username_4: I used a simple approach to get around the struct -> json Athena limitation. I created a second table where the json columns were saved as raw strings. Using presto json and array functions I was able to query the data and return the valid json string to my program:
```
--Array transform functions too
select
json_extract_scalar(dd, '$.timestamp') as timestamp,
transform(cast(json_extract(json_parse(dd), '$.stats') as ARRAY(JSON)), x -> json_extract_scalar(x, '$.time')) as arr_stats_time,
transform(cast(json_extract(json_parse(dd), '$.stats') as ARRAY(JSON)), x -> json_extract_scalar(x, '$.mean')) as arr_stats_mean,
transform(cast(json_extract(json_parse(dd), '$.stats') as ARRAY(JSON)), x -> json_extract_scalar(x, '$.var')) as arr_stats_var
from
(select '{"timestamp":1520640777.666096,"stats":[{"time":15,"mean":45.23,"var":0.31},{"time":19,"mean":17.315,"var":2.612}],"dets":[{"coords":[2.4,1.7,0.3], "header":{"frame":1,"seq":1,"name":"hello"}}],"pos": {"x":5,"y":1.4,"theta":0.04}}' as dd);
```
I know the query will take longer to execute but there are ways to optimize.
Upvotes: 0 <issue_comment>username_5: I worked around this by creating a second table using the same S3 location, but changed the field's data type to string. The resulting CSV then had the string that Athena pulled from the object in the JSON file and I was able to parse the result.
Upvotes: 0 <issue_comment>username_6: I also had to adjust the @tarun code, because I had more complex data and nested structures. Here is the solution I've got, I hope it helps:
```
import re
import json
import numpy as np
import pandas as pd  # needed for pd.read_csv below

pattern1 = re.compile(r'(?<=[{,\[])\s*([^{}\[\],"=]+)=')
pattern2 = re.compile(r':([^{}\[\],"]+|()(?![{\[]))')
pattern3 = re.compile(r'"null"')

def convert_metadata_to_json(value):
    if type(value) is str:
        value = pattern1.sub('"\\1":', value)
        value = pattern2.sub(': "\\1"', value)
        value = pattern3.sub('null', value)
    elif np.isnan(value):
        return None
    return json.loads(value)

df = pd.read_csv('test.csv')
df['metadata_json'] = df.metadata.apply(convert_metadata_to_json)
```
Upvotes: 0
|
2018/03/15
| 1,408 | 4,633 |
<issue_start>username_0: I managed to finish a script to automate repetitive tasks. My first one on Python!So I am now in the process of automating the part where I have to retrieve the data and format it for the script to use.
Here are the relevant parts my code:
```
from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.common.keys import Keys
from selenium.webdriver.support.ui import Select
import csv
ie = 'C:\\Users\\dd\\Desktop\\IEDriverServer32.exe'
print(ie)
Iebrowswer = webdriver.Ie(ie)
Iebrowswer.get('https://ww3.example.com/')
Iebrowswer.find_element_by_class_name('gridrowselect').click()
print(len(Iebrowswer.find_elements_by_class_name('gridrow')))
Gridcells = Iebrowswer.find_elements_by_class_name('gridcell')
Gridinfo = [i.text for i in Gridcells]
print(Gridinfo)
csvfile = 'C:\\Users\\dd\\Desktop\\CSV1.csv'
with open(csvfile, "w") as output:
    writer = csv.writer(output, lineterminator='\n')
    for val in Gridinfo:
        writer.writerow(['val'])
```
I managed to get the information that I wanted. All of it. Right now, my biggest issue is what is happening to the data when I make my CSV. It's coming out all wrong. This is what I get when I print into the shell (a small example):
```
['5555', '1', 'Verified', '', '6666', '2', 'Verified', '']
```
My excel/csv file is being displayed vertically like this:
```
Columnl
[5555]
[1]
[Verified]
[ ]
[6666]
[2]
[Verified]
[ ]
```
What I want is for my data to displayed **horizontally** breaking after the empty space like this:
```
Column1 Column2 Column3 Column4
5555 1 Verified
6666 2 Verified
```
1. **How do I achieve this?**
I've looked over the documentation and a bunch of other questions on here, but I'm not closer to understanding the csv library and its arguments at all. It always seems that I get stuck on these really simple things. The only thing I succeeded in was adding even more columns to vertically display data taunting myself.<issue_comment>username_1: I'm not sure why you get all of your rows back as a single list. The `writerow()` method of the `csv` module expects a single list to represent a row.
```
for val in Gridinfo:
    writer.writerow(['val'])
```
Would therefore give each datapoint its own row (note however that `'val'` is a string literal, so your output from this code would just be rows of the string "val" and not your actual data).
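(As an aside, a sketch of the immediate fix for that particular symptom is simply to pass the variable instead of the literal:)
```
for val in Gridinfo:
    writer.writerow([val])  # the variable val: still one datapoint per row, but now the real data
```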
The first thing to do is to chunk your single list into multiple lists of length 4. I've borrowed the chunking function from [here](https://stackoverflow.com/a/1751478/4799172); you can see other methods in the answers there depending on your exact case.
This will give you a nested list. That's perfect for the `writerows()` method (note, plural).
Try:
```
def chunks(l, n):
    n = max(1, n)
    return [l[i:i+n] for i in range(0, len(l), n)]

with open(csvfile, "w") as output:
    writer = csv.writer(output, lineterminator='\n')
    writer.writerows(chunks(Gridinfo, 4))
```
EDIT:
The `chunk()` function:
1. Uses a *list comprehension*, with *list slicing* for the sublists
2. `n = max(1, n)` is defensive programming. It basically stops you specifying a chunk length of `0` or less (which doesn't make sense and will throw `ValueError: range() arg 3 must not be zero` exception). For all intents and purposes you can remove it and it will work fine; there's no harm keeping it in to avoid such an error.
It is equivalent to:
```
def chunks(my_list, chunk_size):
    new_list = []  # What we will return
    chunk = []     # Individual sublist chunk
    for item in my_list:
        if len(chunk) < chunk_size:
            chunk.append(item)
        else:
            new_list.append(chunk)  # Add the chunk to the output
            chunk = []              # Reset for the next chunk
            chunk.append(item)      # Make sure the current "item" gets added to the new chunk
    if len(chunk) >= 1:  # Catch any stragglers that don't make a complete chunk
        new_list.append(chunk)
    return new_list

SUBLIST_LENGTH = 3
list_to_be_chunked = [1, 2, 3, 4, 5, 6, 7]
result = chunks(list_to_be_chunked, SUBLIST_LENGTH)
print(result)
```
Upvotes: 2 [selected_answer]<issue_comment>username_2: ```
import numpy as np
import csv
csvfile = r'C:\temp\test.csv'
Gridinfo = ['5555', '1', 'Verified', '', '6666', '2', 'Verified', '']
arr = np.resize(Gridinfo, (len(Gridinfo) // 4, 4))  # integer division so the shape is an int
with open(csvfile, "w") as output:
    writer = csv.writer(output, lineterminator='\n')
    writer.writerows(arr)
#Output
5555 1 Verified
6666 2 Verified
```
Upvotes: 0
|
2018/03/15
| 774 | 2,159 |
<issue_start>username_0: I have a page with a banner and 3 columns, and I am trying to make it so that when the screen width gets too small, that the first 2 columns change from 1/3rd width to 50% width and the last column width change to 100% so that it's below the first two.
When I do this, the height of the columns does change (they change to 50%, considering the columns will now fit underneath each other inside a 100% row), but the width does not. How could I fix this? Thanks in advance!
[Codepen](https://codepen.io/SirExotic/pen/NYrjRP)
HTML
```
```
related CSS
```
.aboutBanner {
height: 30%;
border: 1px solid blue;
}
.row {
height: 70%;
}
.row .col {
height: 100%;
border: 1px solid red;
}
@media screen and (max-width: 768px){
.row .col:nth-of-type(1),
.row .col:nth-of-type(2) {
height: 50%;
width: 50%;
}
.row .col:nth-of-type(3) {
height: 50%;
width: 100%;
border-style: dashed;
}
}
```<issue_comment>username_1: Use bootstrap col-[size] class
==============================
[See documentation](https://getbootstrap.com/docs/4.0/layout/grid/)
The Bootstrap grid is based on 12 column widths, divided across 5 screen sizes:
* Extra small
+ `col-[1 to 12]`
* Small
+ `col-sm-[1 to 12]`
* Medium
+ `col-md-[1 to 12]`
* Large
+ `col-lg-[1 to 12]`
* Extra large
+ `col-xl-[1 to 12]`
Try this:
```
content of first
content of second
content of third
```
*Keep in mind that 12 is 100%, 6 is 50%, 4 is 33%, 3 is 25% and so on...*
Upvotes: 1 <issue_comment>username_2: Use the [Bootstrap responsive grid columns](https://getbootstrap.com/docs/4.0/layout/grid/#responsive-classes)...
<https://codepen.io/anon/pen/eMzRyb>
```
```
To visualize this in your [codepen](https://codepen.io/anon/pen/eMzRyb) I changed the CSS...
```
.row {
height: 70%;
}
.row > div {
border: 1px solid red;
}
```
Also see:
[What is the difference among col-lg-\*, col-md-\* and col-sm-\* in Bootstrap?](https://stackoverflow.com/questions/19865158/what-is-the-difference-among-col-lg-col-md-and-col-sm-in-bootstrap)
Upvotes: 3 [selected_answer]
|
2018/03/15
| 2,467 | 9,607 |
<issue_start>username_0: Given this structure, how would I find the object with the given id in this deeply nested object structure.
```
const menuItems = [
{
id: 1,
imageUrl: "http://placehold.it/65x65",
display: "Shop Women",
link: "#",
type: "image",
nextItems: [
{
id: 10,
display: "홈",
link: "#",
type: "menuitem"
},
{
id: 20,
display: "의류",
link: "#",
type: "menuitem-withmore",
nextItems: [
{
id: 100,
display: "I'm inside one nest",
link: "#",
type: "menuitem"
}
]
},
{
id: 30,
display: "가방",
link: "#",
type: "menuitem-withmore",
nextItems: []
},
{
id: 40,
display: "신발",
link: "#",
type: "menuitem-withmore",
nextItems: []
},
{
id: 50,
display: "악세서리",
link: "#",
type: "menuitem-withmore",
nextItems: []
},
{
id: 60,
display: "SALE",
link: "#",
type: "menuitem-withmore",
style: "bold",
nextItems: []
},
{
id: 70,
display: "브랜드",
link: "#",
type: "menuitem-withmore",
nextItems: []
},
{
type: "separator"
},
{
id: 80,
display: "위시리스트",
link: "#",
type: "menuitem"
},
{
id: 90,
display: "고객센터",
link: "#",
type: "menuitem"
},
{
id: 99,
display: "앱 다운로드",
link: "#",
type: "menuitem"
}
]
},
{
id: 2,
imageUrl: "http://placehold.it/65x65",
display: "Shop Men",
link: "#",
type: "image",
nextItems: [
{
id: 95,
display: "MEN's ITEMS.",
link: "#",
type: "menuitem"
}
]
}
];
```
Let's say I want to find the object with `id: 20` and return this:
```
{
id: 20,
display: "의류",
link: "#",
type: "menuitem-withmore",
nextItems: [
{
id: 100,
display: "I'm inside one nest",
link: "#",
type: "menuitem"
}
]
},
```
I can't seem to find how to use lodash for this, and there's this package that may have solved my issue but I couldn't understand how to make it work for my use case.
<https://github.com/dominik791/obj-traverse><issue_comment>username_1: Use DFS.
```js
const menuItems = [
{
id: 1,
imageUrl: "http://placehold.it/65x65",
display: "Shop Women",
link: "#",
type: "image",
nextItems: [
{
id: 10,
display: "홈",
link: "#",
type: "menuitem"
},
{
id: 20,
display: "의류",
link: "#",
type: "menuitem-withmore",
nextItems: [
{
id: 100,
display: "I'm inside one nest",
link: "#",
type: "menuitem"
}
]
},
{
id: 30,
display: "가방",
link: "#",
type: "menuitem-withmore",
nextItems: []
},
{
id: 40,
display: "신발",
link: "#",
type: "menuitem-withmore",
nextItems: []
},
{
id: 50,
display: "악세서리",
link: "#",
type: "menuitem-withmore",
nextItems: []
},
{
id: 60,
display: "SALE",
link: "#",
type: "menuitem-withmore",
style: "bold",
nextItems: []
},
{
id: 70,
display: "브랜드",
link: "#",
type: "menuitem-withmore",
nextItems: []
},
{
type: "separator"
},
{
id: 80,
display: "위시리스트",
link: "#",
type: "menuitem"
},
{
id: 90,
display: "고객센터",
link: "#",
type: "menuitem"
},
{
id: 99,
display: "앱 다운로드",
link: "#",
type: "menuitem"
}
]
},
{
id: 2,
imageUrl: "http://placehold.it/65x65",
display: "Shop Men",
link: "#",
type: "image",
nextItems: [
{
id: 95,
display: "MEN's ITEMS.",
link: "#",
type: "menuitem"
}
]
}
];
function dfs(obj, targetId) {
  if (obj.id === targetId) {
    return obj
  }
  if (obj.nextItems) {
    for (let item of obj.nextItems) {
      let check = dfs(item, targetId)
      if (check) {
        return check
      }
    }
  }
  return undefined
}

let result = undefined
for (let obj of menuItems) {
  result = dfs(obj, 100)
  if (result) {
    break
  }
}
console.dir(result)
```
Upvotes: 5 [selected_answer]<issue_comment>username_2: If you're using lodash, you just want the *.find(collection, [predicate=*.identity]).
So you'd want something like so:
```
_.find(menuItems, function(item) {
  return item.id === 20;
});
```
Upvotes: -1 <issue_comment>username_3: Maybe this helps
```
menuItems.map(item => {
  if (item.id === 10) return item;
});
```
BTW I didn't consider efficiency in this solution.
Upvotes: 1 <issue_comment>username_4: I would do it
```js
const menuItems = [
{
id: 1,
imageUrl: "http://placehold.it/65x65",
display: "Shop Women",
link: "#",
type: "image",
nextItems: [
{
id: 10,
display: "홈",
link: "#",
type: "menuitem"
},
{
id: 20,
display: "의류",
link: "#",
type: "menuitem-withmore",
nextItems: [
{
id: 100,
display: "I'm inside one nest",
link: "#",
type: "menuitem"
}
]
},
{
id: 30,
display: "가방",
link: "#",
type: "menuitem-withmore",
nextItems: []
},
{
id: 40,
display: "신발",
link: "#",
type: "menuitem-withmore",
nextItems: []
},
{
id: 50,
display: "악세서리",
link: "#",
type: "menuitem-withmore",
nextItems: []
},
{
id: 60,
display: "SALE",
link: "#",
type: "menuitem-withmore",
style: "bold",
nextItems: []
},
{
id: 70,
display: "브랜드",
link: "#",
type: "menuitem-withmore",
nextItems: []
},
{
type: "separator"
},
{
id: 80,
display: "위시리스트",
link: "#",
type: "menuitem"
},
{
id: 90,
display: "고객센터",
link: "#",
type: "menuitem"
},
{
id: 99,
display: "앱 다운로드",
link: "#",
type: "menuitem"
}
]
},
{
id: 2,
imageUrl: "http://placehold.it/65x65",
display: "Shop Men",
link: "#",
type: "image",
nextItems: [
{
id: 95,
display: "MEN's ITEMS.",
link: "#",
type: "menuitem"
}
]
}
];
var data = [];
menuItems.forEach(function(item) {
  item.nextItems.forEach(function(element) {
    data.push(element)
  }, this);
}, this);
console.log(_.where(data, {id: 20}));
```
Upvotes: 1 <issue_comment>username_5: You may try this function, it will work with a dynamic change of the deep level
```
function findNodeById(nodes, id, callback?) {
let res;
function findNode(nodes, id) {
for (let i = 0; i < nodes.length; i++) {
if (nodes[i].id === id) {
res = nodes[i];
// you can also use callback back here for more options ;)
// callback(nodes[i]);
break;
}
if (nodes[i].nextItems) {
findNode(nodes[i].nextItems, id);
}
}
}
findNode(nodes, id)
return res;
}
findNodeById(menuItems, 99) // { id: 99, display: "앱 다운로드", link: "#", type: "menuitem" }
```
Upvotes: 2
|
2018/03/15
| 426 | 1,647 |
<issue_start>username_0: I am trying to make a single .jar executable file that reads .txt file and print some value.
my issue is that I don't want to specify the .txt file name before making the .jar, I want to pass the .jar to the user and each time before he run the .jar he will specify the desired .txt file to be read.
is there any way to do that?<issue_comment>username_1: You can make the user to insert the file name and the program adds it to the searching route ex: "C:\"+textname
Upvotes: -1 <issue_comment>username_2: You can get the command line argument from the String array passed to your Main method.
```
public static void main(String[] args) {
if (args.length <= 0) {
System.out.println("No arguments specified");
return;
}
String filename = args[0];
if (filename.trim().length() <= 0) {
System.out.println("Filename is empty");
return;
}
File file = new File(filename);
if (!file.exists()) {
System.out.println("File doesn't exist");
return;
}
// Do what you want with the file here
}
```
If you want multiple files, you can do this by breaking each commandline argument into another file
```
public static void main(String[] args) {
if (args.length <= 0) {
System.out.println("No arguments specified");
return;
}
List<File> files = new ArrayList<>();
for (String filename : args) {
File file = new File(filename);
if (!file.exists()) {
System.out.println("File doesn't exist");
continue;
}
files.add(file);
}
// Do what you want with your list of files here
}
```
Upvotes: 2 [selected_answer]
|
2018/03/15
| 530 | 2,021 |
<issue_start>username_0: I'm writing a Django app, but have a separate process for creating / managing the tables. In other words, I don't want Django to manage any of the DB tables. To accomplish this, I use `managed = False` in the `Meta` class, like:
```
class School(models.Model):
schoolid = models.IntegerField(primary_key=True)
schooldisplayname = models.CharField(max_length=100)
address = models.CharField(max_length=100)
city = models.CharField(max_length=100)
department = models.CharField(max_length=100)
class Meta:
managed = False
```
But it's annoying to have to always specify this for each model. Is there a way to apply this as a global setting to all my models by default?<issue_comment>username_1: You can make the user to insert the file name and the program adds it to the searching route ex: "C:\"+textname
Upvotes: -1 <issue_comment>username_2: You can get the command line argument from the String array passed to your Main method.
```
public static void main(String[] args) {
if (args.length <= 0) {
System.out.println("No arguments specified");
return;
}
String filename = args[0];
if (filename.trim().length() <= 0) {
System.out.println("Filename is empty");
return;
}
File file = new File(filename);
if (!file.exists()) {
System.out.println("File doesn't exist");
return;
}
// Do what you want with the file here
}
```
If you want multiple files, you can do this by breaking each commandline argument into another file
```
public static void main(String[] args) {
if (args.length <= 0) {
System.out.println("No arguments specified");
return;
}
List<File> files = new ArrayList<>();
for (String filename : args) {
File file = new File(filename);
if (!file.exists()) {
System.out.println("File doesn't exist");
continue;
}
files.add(file);
}
// Do what you want with your list of files here
}
```
Upvotes: 2 [selected_answer]
|
2018/03/15
| 1,892 | 4,975 |
<issue_start>username_0: I have a time series dataframe, the dataframe is quite big and contain some missing values in the 2 columns('Humidity' and 'Pressure'). I would like to impute this missing values in a clever way, for example using the value of the nearest neighbor or the average of the previous and following timestamp.Is there an easy way to do it? I have tried with fancyimpute but the dataset contain around 180000 examples and give a memory error [](https://i.stack.imgur.com/WCdT4.png)<issue_comment>username_1: Looks like your data is by hour. How about just take the average of the hour before and the hour after? Or change the window size to 2, meaning the average of two hours before and after?
Imputing using other variables can be expensive and you should only consider those methods if the dummy methods do not work well (e.g. introducing too much noise).
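A minimal sketch of that idea (the column values and window size below are illustrative, not taken from the question's data):
```
import numpy as np
import pandas as pd

s = pd.Series([30.0, np.nan, 34.0, 35.0, np.nan, np.nan, 40.0])  # hypothetical hourly Humidity values

# centered window of 3: each NaN becomes the mean of the hour before and the hour after;
# use window=5 for the average of two hours before and after
filled = s.fillna(s.rolling(window=3, center=True, min_periods=1).mean())
print(filled)
```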
Upvotes: 0 <issue_comment>username_2: You could use `rolling` like this:
```
frame = pd.DataFrame({'Humidity':np.arange(50,64)})
frame.loc[[3,7,10,11],'Humidity'] = np.nan
frame.Humidity.fillna(frame.Humidity.rolling(4,min_periods=1).mean())
```
Output:
```
0 50.0
1 51.0
2 52.0
3 51.0
4 54.0
5 55.0
6 56.0
7 55.0
8 58.0
9 59.0
10 58.5
11 58.5
12 62.0
13 63.0
Name: Humidity, dtype: float64
```
Upvotes: 3 <issue_comment>username_3: Consider `interpolate` ([Series](https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.Series.interpolate.html) - [DataFrame](https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.interpolate.html)). This example shows how to fill gaps of any size with a straight line:
```
df = pd.DataFrame({'date': pd.date_range(start='2013-01-01', periods=10, freq='H'), 'value': range(10)})
df.loc[2:3, 'value'] = np.nan
df.loc[6, 'value'] = np.nan
df
date value
0 2013-01-01 00:00:00 0.0
1 2013-01-01 01:00:00 1.0
2 2013-01-01 02:00:00 NaN
3 2013-01-01 03:00:00 NaN
4 2013-01-01 04:00:00 4.0
5 2013-01-01 05:00:00 5.0
6 2013-01-01 06:00:00 NaN
7 2013-01-01 07:00:00 7.0
8 2013-01-01 08:00:00 8.0
9 2013-01-01 09:00:00 9.0
df['value'].interpolate(method='linear', inplace=True)
date value
0 2013-01-01 00:00:00 0.0
1 2013-01-01 01:00:00 1.0
2 2013-01-01 02:00:00 2.0
3 2013-01-01 03:00:00 3.0
4 2013-01-01 04:00:00 4.0
5 2013-01-01 05:00:00 5.0
6 2013-01-01 06:00:00 6.0
7 2013-01-01 07:00:00 7.0
8 2013-01-01 08:00:00 8.0
9 2013-01-01 09:00:00 9.0
```
Upvotes: 6 [selected_answer]<issue_comment>username_4: **Interpolate & fillna:**
Since this is a time series question, I will use output graph images in the answer for explanation purposes.
Consider that we have time series data as follows (x axis = number of days, y = quantity):
```
pdDataFrame.set_index('Dates')['QUANTITY'].plot(figsize = (16,6))
```
[](https://i.stack.imgur.com/OkbTt.png)
We can see there is some NaN data in the time series: NaN values make up 19.400% of the total data. Now we want to impute those null/NaN values.
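For reference, that NaN percentage can be computed like this (a sketch reusing the `pdDataFrame` / `QUANTITY` names from the plot call above):
```
nan_pct = pdDataFrame['QUANTITY'].isna().mean() * 100  # share of missing rows, in percent
print(round(nan_pct, 3))  # e.g. 19.400
```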
I will try to show you the output of the interpolate and fillna methods for filling NaN values in the data.
**interpolate() :**
1st we will use interpolate:
```
pdDataFrame.set_index('Dates')['QUANTITY'].interpolate(method='linear').plot(figsize = (16,6))
```
[](https://i.stack.imgur.com/Rqa5l.png)
NOTE: There is no time method in interpolate here
**fillna() with backfill method**
```
pdDataFrame.set_index('Dates')['QUANTITY'].fillna(value=None, method='backfill', axis=None, limit=None, downcast=None).plot(figsize = (16,6))
```
[](https://i.stack.imgur.com/ZZ3Vm.png)
**fillna() with backfill method & limit = 7**
limit: this is the maximum number of consecutive NaN values to forward/backward fill. In other words, if there is a gap with more than this number of consecutive NaNs, it will only be partially filled.
```
pdDataFrame.set_index('Dates')['QUANTITY'].fillna(value=None, method='backfill', axis=None, limit=7, downcast=None).plot(figsize = (16,6))
```
[](https://i.stack.imgur.com/O1pgp.png)
I find fillna function more useful. But you can use any one of the methods to fill up nan values in both the columns.
For more details about these functions refer following links:
1. Filna: <https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.Series.fillna.html#pandas.Series.fillna>
2. <https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.Series.interpolate.html>
There is one more Lib: `impyute` that you can check out. For more details regarding this lib refer this link: <https://pypi.org/project/impyute/>
Upvotes: 3
|
2018/03/15
| 1,057 | 2,918 |
<issue_start>username_0: I am trying when hover the image and the title, the hidden content to be visible. I did something with hover the outer div, but now if I hover on blank space or the paragraph the effect is the same. How to affect the hidden element when the image and the title are hovered? Maybe I am doing all wrong.
```css
.box {
width: 320px;
position: relative;
}
img {
max-width: 100%;
height: auto;
}
.image>a {
position: relative;
display: block;
overflow: hidden;
text-decoration: none;
}
.image>a:after {
background: rgba(255, 99, 71, 0.75);
position: absolute;
left: 0;
top: 0;
bottom: 0;
right: 0;
opacity: 0;
-webkit-transition: all 0.4s ease-in-out 0s;
-moz-transition: all 0.4s ease-in-out 0s;
transition: all 0.4s ease-in-out 0s;
content: '';
display: block;
}
.details {
position: absolute;
top: 80%;
left: 0;
opacity: 0;
-webkit-transition: all 0.3s ease-in-out 0s;
-moz-transition: all 0.3s ease-in-out 0s;
transition: all 0.3s ease-in-out 0s;
}
.box:hover .image>a:after {
opacity: 1;
}
.box:hover .details {
top: 30%;
opacity: 1;
}
```
```html
[](#)
[Lorem Ipsum](#)
===================
[Hidden link](#)
Lorem Ipsum is simply dummy text of the printing and typesetting industry.
```<issue_comment>username_1: You should target the div class that is the parent of the link tag, in this case, class= `details`
by default:
```
.details {
  display: none;
}
```
While on hover state:
```
.details {
  display: block;
}
```
Upvotes: 1 <issue_comment>username_2: **You must restructure your HTML**
I've placed the `.image` and `.details` inside another `div` and added the hover event to the new `div`, not only the `.box`
Try this:
```css
.box {
width: 320px;
position: relative;
}
img {
max-width: 100%;
height: auto;
}
.image>a {
position: relative;
display: block;
overflow: hidden;
text-decoration: none;
}
.image>a:after {
background: rgba(255, 99, 71, 0.75);
position: absolute;
left: 0;
top: 0;
bottom: 0;
right: 0;
opacity: 0;
-webkit-transition: all 0.4s ease-in-out 0s;
-moz-transition: all 0.4s ease-in-out 0s;
transition: all 0.4s ease-in-out 0s;
content: '';
display: block;
}
.details {
position: absolute;
top: 80%;
left: 0;
opacity: 0;
-webkit-transition: all 0.3s ease-in-out 0s;
-moz-transition: all 0.3s ease-in-out 0s;
transition: all 0.3s ease-in-out 0s;
}
.box:hover .hoverEffect:hover .image>a:after {
opacity: 1;
}
.box:hover .hoverEffect:hover .details {
top: 30%;
opacity: 1;
}
```
```html
[](#)
[Hidden link](#)
[Lorem Ipsum](#)
===================
Lorem Ipsum is simply dummy text of the printing and typesetting industry.
```
Upvotes: 2
|
2018/03/15
| 633 | 1,800 |
<issue_start>username_0: I have a really annoying problem where "<" is displayed on my templates. [Right at the very top of the web page is this annoying "<" symbol.](https://i.stack.imgur.com/RmUeF.jpg) Could anyone please tell me what the cause of it is.<issue_comment>username_1: You should target the div class that is the parent of the link tag, in this case, class= `details`
by default:
```
.details {
  display: none;
}
```
While on hover state:
```
.details {
  display: block;
}
```
Upvotes: 1 <issue_comment>username_2: **You must restructure your HTML**
I've placed the `.image` and `.details` inside another `div` and added the hover event to the new `div`, not only the `.box`
Try this:
```css
.box {
width: 320px;
position: relative;
}
img {
max-width: 100%;
height: auto;
}
.image>a {
position: relative;
display: block;
overflow: hidden;
text-decoration: none;
}
.image>a:after {
background: rgba(255, 99, 71, 0.75);
position: absolute;
left: 0;
top: 0;
bottom: 0;
right: 0;
opacity: 0;
-webkit-transition: all 0.4s ease-in-out 0s;
-moz-transition: all 0.4s ease-in-out 0s;
transition: all 0.4s ease-in-out 0s;
content: '';
display: block;
}
.details {
position: absolute;
top: 80%;
left: 0;
opacity: 0;
-webkit-transition: all 0.3s ease-in-out 0s;
-moz-transition: all 0.3s ease-in-out 0s;
transition: all 0.3s ease-in-out 0s;
}
.box:hover .hoverEffect:hover .image>a:after {
opacity: 1;
}
.box:hover .hoverEffect:hover .details {
top: 30%;
opacity: 1;
}
```
```html
[](#)
[Hidden link](#)
[Lorem Ipsum](#)
===================
Lorem Ipsum is simply dummy text of the printing and typesetting industry.
```
Upvotes: 2
|
2018/03/15
| 1,246 | 3,599 |
<issue_start>username_0: I am getting the following error with my `npm update` command:
```
> pngquant-bin@4.0.0 postinstall /var/www/pp/20180315202846/node_modules/pngquant-bin
> node lib/install.js
⚠ The `/var/www/pp/20180315202846/node_modules/pngquant-bin/vendor/pngquant` binary doesn't seem to work correctly
⚠ pngquant pre-build test failed
ℹ compiling from source
✔ pngquant pre-build test passed successfully
✖ RequestError: pngquant failed to build, make sure that libpng-dev is installed
at ClientRequest.req.once.err (/var/www/pp/20180315202846/node_modules/pngquant-bin/node_modules/got/index.js:111:21)
at Object.onceWrapper (events.js:272:13)
at ClientRequest.emit (events.js:180:13)
at ClientRequest.onConnect (/var/www/pp/20180315202846/node_modules/pngquant-bin/node_modules/tunnel-agent/index.js:168:23)
at Object.onceWrapper (events.js:272:13)
at ClientRequest.emit (events.js:180:13)
at Socket.socketOnData (_http_client.js:476:11)
at Socket.emit (events.js:180:13)
at addChunk (_stream_readable.js:269:12)
at readableAddChunk (_stream_readable.js:256:11)
npm WARN optional SKIPPING OPTIONAL DEPENDENCY: fsevents@1.1.3 (node_modules/fsevents):
npm WARN notsup SKIPPING OPTIONAL DEPENDENCY: Unsupported platform for fsevents@1.1.3: wanted {"os":"darwin","arch":"any"} (current: {"os":"linux","arch":"x64"})
npm ERR! code ELIFECYCLE
npm ERR! errno 1
npm ERR! pngquant-bin@4.0.0 postinstall: `node lib/install.js`
npm ERR! Exit status 1
npm ERR!
npm ERR! Failed at the pngquant-bin@4.0.0 postinstall script.
npm ERR! This is probably not a problem with npm. There is likely additional logging output above.
```
I already did:
```
apt-get install libpng-dev
```
but I still get this error.
Any ideas? (My distribution is Debian 9.2)<issue_comment>username_1: TLDR;
use node:latest instead of node:alpine for docker image
—
Hi, I was using the docker image node:alpine and it crashed yesterday too. It seems to be a problem with some external package that laravel-mix requires. I tested node:alpine, node:8.10.0-alpine and node:8.8.1-alpine; nothing works. I tested laravel-mix 0.11.4, 0.12.1 and 1.0; nothing works. But when I switch from node:alpine to node:latest, it works. Hope this helps.
Upvotes: 2 <issue_comment>username_2: Perhaps you will finds some help here at the original issue.
<https://github.com/imagemin/pngquant-bin/issues/78>
Here is a suggested fix (copy/paste from <https://github.com/imagemin/pngquant-bin/issues/78#issuecomment-374192838>):
As @velu76 and @sory19 said, I followed their method:
`sudo apt-get install libpng-dev`
`npm install -g pngquant-bin`
Since I had already run 'npm install' in my repo/project folder, I then ran
`sudo rm -rf node_modules`
then, still in my repository/project folder
`npm install --no-bin-links`
And it works right now.
I don't know if it's just a temporary fix, actually, but I can work for now.
Upvotes: 4 <issue_comment>username_3: For Ubuntu you need to run:
```
wget -q -O /tmp/libpng12.deb http://mirrors.kernel.org/ubuntu/pool/main/libp/libpng/libpng12-0_1.2.54-1ubuntu1_amd64.deb \
&& sudo dpkg -i /tmp/libpng12.deb \
&& rm /tmp/libpng12.deb
```
Upvotes: 5 <issue_comment>username_4: You can fix this error by executing this command :
```
sudo npm install -g name-of-package --unsafe-perm=true --allow-root
```
Upvotes: 0 <issue_comment>username_5: I was also facing same issue and tried all provided solutions but nothing works
Issue on windows 10, Node 16, npm 8
but by downgrading my node to 14 and npm to 6 solve my problem.
Upvotes: 0
|
2018/03/15
| 698 | 2,502 |
<issue_start>username_0: Is it possible to loop through this array in mysql and execute a delete statement where the category\_id column is equal to the current value been looped in that array below.
```
$row = [
[
'type' => '3',
'name' => 'Warez',
'category_id' => 'IAB26-2',
'parent_category_id' => 'IAB26'
],
[
'type' => '3',
'name' => 'Spyware/Malware',
'category_id' => 'IAB26-3',
'parent_category_id' => 'IAB26'
],
[
'type' => '3',
'name' => 'Copyright Infringement',
'category_id' => 'IAB26-4',
'parent_category_id' => 'IAB26'
],
[
'type' => '3',
'name' => 'Uncategorized',
'category_id' => 'IAB24'
]
];
```
I have tried this
```
foreach($row as $value) {
$this->execute('DELETE from categories WHERE category_id = '.$value['category_id'].'');
}
```
doesn't seem to work as it should. Is there is better Sql command for this kind of operation?<issue_comment>username_1: You probably just need to wrap the value you concatenate with quotes (because it's not numeric).
Something like this:
```
$this->execute('DELETE from categories WHERE category_id = "'.$value['category_id'].'"');
```
However, doing `DELETE ... WHERE` as suggested in one of the comments would yield better performance.
Also, note that if the data comes from the user then it may be susceptible to SQL injection, so you should read about and use parameterized queries rather than concatenation.
Upvotes: 0 <issue_comment>username_2: Using an IN clause
```
$list = array();
foreach ($row as $item) {
    $list[] = "'{$item['category_id']}'";
}
$sql = 'DELETE from categories WHERE category_id IN ('.implode(',', $list).')';
$this->execute($sql);
```
Upvotes: -1 <issue_comment>username_3: Your code is insecure because it's possible to inject SQL. So an attacker might execute whatever he wants in your SQL query.
**If** this is really CakePHP code, why aren't you using the ORM? You cleary should read <https://book.cakephp.org/3.0/en/orm.html> the Table Objects and Deleting Data section. If not don't use wrong tags.
**Assuming** this is done in a table object, the most simple way to do this is:
```
$ids = Hash::extract($data, '{n}.category_id');
if (!empty($ids)) {
$this->deleteAll([
'category_id IN' => $ids
]);
}
```
Upvotes: 1
|
2018/03/15
| 833 | 3,211 |
<issue_start>username_0: I'm learning the SQL server suite and just installed the Developer Edition. I included in my features the Integration and Analysis Services. As for the Reporting ones, I went back online (as indicated) and download/installed it.
Now, in my Start Menu / Microsoft SQL Server, I have the Reporting Services Configuration Manager. I click on it, a window pops asking me to indicate a server name and connect, which I do. At this point, everything seems to work and all is great and in the Current Report Server box, the status displays "active".
BUT! When I go into SQL Server Configuration Manager, it's not displayed.[](https://i.stack.imgur.com/U3mcq.png)
So I checked the Reporting Services Configuration Manager again in case if something changed there: negative, all the same and the status is still active.
If I try to connect to it in SQL Server Management Studio, the "Reporting Services" server type appears and allows me to choose my server:
[](https://i.stack.imgur.com/iMz4f.png)
It doesn't work as well.
[](https://i.stack.imgur.com/4DzIM.png)
I'm starting to think I need to delete the entire SQL Server thing and go through it again, but does any1 have any better solutions? (if additionnal screenshots are needed, please let me know and I'll edit).
Thanks to all!
**Edit**
Thanks to DeanOC's answer below, I was able to connect, however, the SQL Server Configuration Manager still doesn't display the Reporting Service server and, in SSMS, when I click on any of the three possible crosshairs I get the following error:
[](https://i.stack.imgur.com/KYsxG.png)
**EDIT 2**
I found the following instructions <https://support.microsoft.com/en-ca/help/956179/error-message-when-you-click-the-databases-node-in-sql-server-manageme>
But my SSMS crashes every time I right click on the column... <(-\_-")><issue_comment>username_1: In SSMS, when connecting to Reporting Services you have to specify the reporting services URL, not the name of the server hosting reporting services.
So in the 'Server name' box you need to type
<http://MAX-PC/reportserver>
Tip: if you are running SSMS on the server that is hosting reporting services, you can also use
<http://localhost/reportserver>
Upvotes: 3 <issue_comment>username_2: Victory! I found the issue. Firstly, I haven't completed the setup... So in the "Report Server Configuration Manager", I have omitted the "Database" tab and the setup it entails.
Although, at this point, I was able to access the Report Manager URL through my browser, I was still unable to connect to the "Reporting Services" through SSMS. And here's how I solved it:
I noticed that my Virtual Directory names in "Web Services URL" and "Web Portal URL" were identical, so, assuming that this was possibly the cause of the error, I changed them to different names: one remained "SSRS", the other was changed to "reports". Bang!
Upvotes: 2 [selected_answer]
|
2018/03/15
| 324 | 1,255 |
<issue_start>username_0: My company has a web app as an App Service in Azure. Let's say it is at "app.scottpantall.com". Our client wants their users to use our web app, but they want their customers to go to "app.company.com" to get to our web app.
My company owns our domain and our client owns their domain.
If they were to create a CNAME record that pointed their domain to my domain, would Azure allow me to add their domain as a Hostname for my web app?
They have not done so yet and I get an alert about domain ownership when I try to add their domain as a hostname.
Thanks!
-Scott<issue_comment>username_1: never worked with Azure but this might help you with adding a custom domain to your app <https://learn.microsoft.com/en-us/azure/cloud-services/cloud-services-custom-domain-name-portal>
Upvotes: 0 <issue_comment>username_2: Yes, you can add any domain you want. The domain owner will need to add the CNAME record pointing to your .azurewebsites.net domain, and you will need to configure your App Service app to use that domain.
The steps needed to configure a custom domain in your DNS service and web app are at <https://learn.microsoft.com/en-us/azure/app-service/app-service-web-tutorial-custom-domain>
Upvotes: 3 [selected_answer]
|
2018/03/15
| 870 | 3,535 |
<issue_start>username_0: I am using [FBSDK](https://github.com/facebook/react-native-fbsdk) to integrate Facebook Login. The button work fine on iOS but does not do anything on Android. I am using the latest release `0.7.0` of FBSDK.
I can see the Login dialog appearing and can input credentials just fine. After the login flow is completed neither of the `then` or `catch` branches of the promise from `LoginManager.logInWithReadPermissions(['public_profile'])` is called.
I tried debugging by logging to logcat by modifying the `onSuccess` method in `FBLoginManagerModule`, and the `onCancel` and `onError` methods in `ReactNativeFacebookSDKCallback` classes. None of them are in fact called, explaining the promise not being fulfilled.
I have followed all the steps integrating the SDK as per the official docs. Also, since everything works fine on iOS, I am not sure what exactly is wrong with Android. There are exactly zero errors in the process, just the login does not work.<issue_comment>username_1: Turns out, I had to explicitly add activity callbacks in my `MainApplication.java`
This worked fine:
```
setActivityCallbacks(new ActivityCallbacks() {
@Override
public void onActivityResult(int requestCode, int resultCode, Intent data) {
mCallbackManager.onActivityResult(requestCode, resultCode, data);
}
});
```
Upvotes: 1 <issue_comment>username_2: Thanks @username_1 that was the trick. To elaborate `setActivityCallbacks` must be added to the `public void onCreate` method and you must include `import com.reactnativenavigation.controllers.ActivityCallbacks;` and `import android.content.Intent;`
```
import com.reactnativenavigation.controllers.ActivityCallbacks;
import android.content.Intent;
public class MainApplication extends NavigationApplication {
@Override
public void onCreate() {
super.onCreate();
setActivityCallbacks(new ActivityCallbacks() {
@Override
public void onActivityResult(int requestCode, int resultCode, Intent data) {
mCallbackManager.onActivityResult(requestCode, resultCode, data);
}
});
FacebookSdk.sdkInitialize(getApplicationContext());
AppEventsLogger.activateApp(this);
}
}
```
Upvotes: 2 <issue_comment>username_3: A bit old, but I spent 2 hours making it works on Android.
For those coming from react-native-fbsdk-next and getting the same problem, you have to add the "LoginManager.logInWithPermissions" from [here](https://developers.facebook.com/docs/facebook-login).
So your code will look like this:
```js
import React from 'react';
import {AccessToken, LoginButton, LoginManager} from 'react-native-fbsdk-next';
import {View} from 'react-native';

const Signin = () => {
  LoginManager.logInWithPermissions(['public_profile']).then(
    function (result) {
      if (result.isCancelled) {
        console.log('Login cancelled');
      } else {
        console.log(
          'Login success with permissions: ' +
            result.grantedPermissions.toString(),
        );
      }
    },
    function (error) {
      console.log('Login fail with error: ' + error);
    },
  );

  return (
    <LoginButton
      onLoginFinished={(error, result) => {
        if (error) {
          console.log('login has error: ' + result.error);
        } else if (result.isCancelled) {
          console.log('login is cancelled.');
        } else {
          AccessToken.getCurrentAccessToken().then(data => {
            console.log(data.accessToken.toString());
          });
        }
      }}
      onLogoutFinished={() => console.log('logout.')}
    />
  );
};

export default Signin;
```
Upvotes: 1
|
2018/03/15
| 1,276 | 3,960 |
<issue_start>username_0: I am attempting to make an association rules set using apriori - I am using a different dataset but the starwars dataset contains similar issues. Using arules I was attempting to list the rules and apply an arulesViz plot. From my understanding all strings must be ran as factors, listed as transactions and then apriori should be functioning properly but I get the ouput below after running the following code and rules is not added to environment:
```
install.packages("arules")
install.packages("arulesViz")
library(arulesViz)
library(arules)
data <- starwars[,c(4:6,8:10)]
data <- data.frame(sapply(data,as.factor))
data <- as(data, "transactions")
rules <- apriori(data, parameter = list(supp = 0.15, conf = 0.80))
inspect(rules)
inspect(sort(rules))
subrules <- head(sort(rules, by="lift"), 10)
plot(subrules, method="graph")
```
The following is the output from running apriori
```
rules <- apriori(data, parameter = list(supp = 0.15, conf = 0.80))
Apriori
Parameter specification:
confidence minval smax arem aval originalSupport maxtime support minlen maxlen target ext
0.8 0.1 1 none FALSE TRUE 5 0.15 1 10 rules FALSE
Algorithmic control:
filter tree heap memopt load sort verbose
0.1 TRUE TRUE FALSE TRUE 2 TRUE
Absolute minimum support count: 78
set item appearances ...[0 item(s)] done [0.00s].
set transactions ...[131 item(s), 522 transaction(s)] done [0.00s].
sorting and recoding items ... [0 item(s)] done [0.00s].
creating transaction tree ... done [0.00s].
checking subsets of size 1 done [0.00s].
writing ... [0 rule(s)] done [0.00s].
creating S4 object ... done [0.02s].
Error in length(obj) : Method length not implemented for class rules
```
I have also run this with the following argument changes
```
target = "rules"
```
And I have attempted to run it using only null arguments
Any help is greatly appreciated!<issue_comment>username_1: If I run your code with `starwars` data, I get following results -
```
> data <- starwars[,c(4:6,8:10)]
> data <- data.frame(sapply(data,as.factor))
> data <- as(data, "transactions")
> rules <- apriori(data, parameter = list(supp = 0.15, conf = 0.80))
Apriori
Parameter specification:
confidence minval smax arem aval originalSupport maxtime support minlen maxlen target ext
0.8 0.1 1 none FALSE TRUE 5 0.15 1 10 rules FALSE
Algorithmic control:
filter tree heap memopt load sort verbose
0.1 TRUE TRUE FALSE TRUE 2 TRUE
Absolute minimum support count: 13
set item appearances ...[0 item(s)] done [0.00s].
set transactions ...[147 item(s), 87 transaction(s)] done [0.00s].
sorting and recoding items ... [8 item(s)] done [0.00s].
creating transaction tree ... done [0.00s].
checking subsets of size 1 2 3 done [0.00s].
writing ... [3 rule(s)] done [0.00s].
creating S4 object ... done [0.00s].
```
As you can see, there are 3 rules generated, which means if I run inspect I see the following:
```
lhs rhs support confidence lift
[1] {skin_color=fair} => {species=Human} 0.1839080 0.9411765 2.339496
[2] {skin_color=fair} => {gender=male} 0.1609195 0.8235294 1.155598
[3] {eye_color=brown} => {species=Human} 0.1954023 0.8095238 2.012245
```
But if I run the same with a higher support count, I would have 0 rules generated (in your case the absolute support count is 78 because you have 522 transactions, versus 13 for the 87-observation starwars run above).
So **you need to reduce (or adjust) the support or confidence** so that you have at least 1 rule. Also, `target = "rules"` cannot help here, as you can see it is already generating 0 rules.
Upvotes: 4 [selected_answer]<issue_comment>username_2: the problem is solved, update packages like as below.
`library(arules); search()`
`unloadNamespace("arules")`
`update.packages("arules")`
`library(arules)`
Upvotes: 1
|
2018/03/15
| 1,298 | 5,560 |
<issue_start>username_0: In my app some dialogs are opened from different places at the same time. (Some dialogs are self build fragments others AlertDialogs) This causes some of them to disappear, because the last called dialog closes all previously opened ones.
Is there a nice way to make them queue up and show after each other instead of glitch out like that?
I was considering making my own dialog class which instead of disposing the dialog it load up the next one and disposes once none are left in the queue. I am hoping there is an easier way to solve my problem without so much effort behind it.<issue_comment>username_1: You don't actually have to implement your own implementation of `Dialog` here in this case as far as I have understood. You just have to maintain your own queue data structure in sqlite database or somewhere else. When a dialog is popped up, show the first one from your queue and then on pressing positive button you need to dequeue the content you just have shown and then show the next one until the queue is empty.
Call `dialog.dismiss()` when no other content is left in your queue. This does not require implementing your own custom dialog. You can add an `onShowListener` to the `AlertDialog` where you can then override the `onClickListener` of the button.
```
final AlertDialog dialog = new AlertDialog.Builder(context)
.setView(v)
.setTitle(R.string.my_title)
.setPositiveButton(android.R.string.ok, null) // Set to null. We override the onclick
.setNegativeButton(android.R.string.cancel, null)
.create();
dialog.setOnShowListener(new DialogInterface.OnShowListener() {
@Override
public void onShow(DialogInterface dialogInterface) {
Button button = ((AlertDialog) dialog).getButton(AlertDialog.BUTTON_POSITIVE);
button.setOnClickListener(new View.OnClickListener() {
@Override
public void onClick(View view) {
// TODO: Show content and dequeue
// Dismiss once all contents are shown
dialog.dismiss();
}
});
}
});
dialog.show();
```
Hope that helps.
**Update**
As far as I could understand, you need to show `Dialog` with new content along with a new layout each time you create those. In that case, you might consider achieve this behaviour in several ways. I would suggest using a `LocalBroadcastManager`.
In your `Activity`, create a `BroadcastReceiver` and register it in your `onCreate` function.
```
@Override
public void onCreate(Bundle savedInstanceState) {
...
// Register to receive messages.
// We are registering an observer (mMessageReceiver) to receive Intents
// with actions named "open-next-dialog".
LocalBroadcastManager.getInstance(this).registerReceiver(mMessageReceiver,
new IntentFilter("open-next-dialog"));
}
// Our handler for received Intents. This will be called whenever an Intent
// with an action named "open-next-dialog" is broadcasted.
private BroadcastReceiver mMessageReceiver = new BroadcastReceiver() {
@Override
public void onReceive(Context context, Intent intent) {
// Get extra data included in the Intent
String message = intent.getStringExtra("message");
int layoutId = intent.getIntExtra("layout_id");
showDialog(layoutId, message);
}
};
@Override
protected void onDestroy() {
// Unregister since the activity is about to be closed.
LocalBroadcastManager.getInstance(this).unregisterReceiver(mMessageReceiver);
super.onDestroy();
}
```
Now when a `Dialog` is clicked, you need to send a broadcast after dismissing the `Dialog`. So the `onClick` function of your `Dialog` will look like.
```
@Override
public void onClick(View view) {
// TODO: Do whatever you want to do in your onClick
// And then Dismiss the dialog
dialog.dismiss();
openNextDialogIfAny();
}
```
Don't forget to add `openNextDialogIfAny` function in your `Activity` as well which will trigger the next `Dialog`.
```
// Send an Intent with an action named "open-next-dialog". The Intent sent should be received by your Activity
private void openNextDialogIfAny() {
if(messageStack.isEmpty()) return; // No more dialogs to be shown.
Intent intent = new Intent("open-next-dialog");
intent.putExtra("message", messageStack.pop());
intent.putExtra("layout_id", layoutStack.pop());
LocalBroadcastManager.getInstance(YourActivity.this).sendBroadcast(intent);
}
```
Upvotes: 1 <issue_comment>username_2: You could try a solution that your very own question hinted at; a queue. This solution should work with any dialogs which extend the `Dialog` class.
To do so add a `Dialog` queue to your activity as a global variable:
```
LinkedBlockingQueue dialogsToShow = new LinkedBlockingQueue<>();
```
In addition to the above, implement, in the Activity that wants to show the dialogs, a method called `showDialog()` which accepts the target dialog as a parameter. This method will add the dialog to the queue and also ensure that the dialog calls the next one in the queue to be shown after it is dismissed.
```
void showDialog(final Dialog dialog) {
if(dialogsToShow.isEmpty()) {
dialog.show();
}
dialogsToShow.offer(dialog);
dialog.setOnDismissListener((d) -> {
dialogsToShow.remove(dialog);
if(!dialogsToShow.isEmpty()) {
dialogsToShow.peek().show();
}
});
}
```
Please note that I didn't test the above code.
Upvotes: 4 [selected_answer]
|
2018/03/15
| 1,450 | 4,865 |
<issue_start>username_0: ```
CREATE OR REPLACE PROCEDURE ADD_OFERTA(valorO IN NUMBER, ali IN VARCHAR,subast IN NUMBER) AS
fech DATE;
cedCompr INTEGER;
reqpor INTEGER;
reqmin INTEGER;
inicial NUMBER;
mejor NUMBER;
tmp INTEGER;
prueba NUMBER;
BEGIN
fech := SYSDATE;
select porcentaje into reqpor from requisito;
select incremento into reqmin from requisito;
select precioInicial INTO inicial FROM subasta Where fk_idofertaganadora is null and PK_idSubasta=subast;
EXCEPTION
WHEN NO_DATA_FOUND then
INICIAL:=0;
SELECT pk_idParticipante into cedCompr from PARTICIPANTE p where p.FK_ALIASUSUARIO = ali;
select o.valorOferta into mejor from subasta s inner join oferta o on(s.fk_idofertaganadora=o.pk_idoferta) where s.pk_idsubasta=subast;
EXCEPTION
WHEN NO_DATA_FOUND THEN-- ERROR here
INICIAL:=0;
select greatest((inicial),(mejor+reqmin),(mejor*(reqpor/100 + 1))) into prueba from dual;
IF (valorO>=prueba) THEN
INSERT INTO OFERTA(PK_idOferta,fecha,valoroferta,fk_idcomprador,fk_idsubasta) VALUES(ID_OFERTA.NEXTVAL,fech,valorO,cedCompr,subast);
update subasta set FK_idofertaganadora=ID_OFERTA.CURRVAL where pk_idsubasta=subast;
END IF;
END ADD_OFERTA;
```
My problem is that in the second EXCEPTION Oracle displays:
Error(19,5): PLS-00103: Encountered the symbol "EXCEPTION" when expecting one of the following: ( begin case declare end exit for goto if loop mod null pragma raise return select update when while with << continue close current delete fetch lock insert open rollback savepoint set sql execute commit forall merge pipe purge
Error(28,15): PLS-00103: Encountered the symbol "end-of-file" when expecting one of the following: end not pragma final instantiable order overriding static member constructor map<issue_comment>username_1: You cannot have the same exception more than once in the same block. AFAIK, the EXCEPTION statement works like an Error-Catcher for the whole block, so you would want to break your proc into two different blocks, like this (I think, I have never done this myself):
```
CREATE OR REPLACE PROCEDURE ADD_OFERTA(valorO IN NUMBER, ali IN VARCHAR,subast IN NUMBER) AS
fech DATE;
cedCompr INTEGER;
reqpor INTEGER;
reqmin INTEGER;
inicial NUMBER;
mejor NUMBER;
tmp INTEGER;
prueba NUMBER;
BEGIN
fech := SYSDATE;
BEGIN
select porcentaje into reqpor from requisito;
select incremento into reqmin from requisito;
select precioInicial INTO inicial FROM subasta Where fk_idofertaganadora is null and PK_idSubasta=subast;
EXCEPTION
WHEN NO_DATA_FOUND then
INICIAL:=0;
END;
BEGIN
SELECT pk_idParticipante into cedCompr from PARTICIPANTE p where p.FK_ALIASUSUARIO = ali;
select o.valorOferta into mejor from subasta s inner join oferta o on(s.fk_idofertaganadora=o.pk_idoferta) where s.pk_idsubasta=subast;
EXCEPTION
WHEN NO_DATA_FOUND THEN-- ERROR here
INICIAL:=0;
END;
select greatest((inicial),(mejor+reqmin),(mejor*(reqpor/100 + 1))) into prueba from dual;
IF (valorO>=prueba) THEN
INSERT INTO OFERTA(PK_idOferta,fecha,valoroferta,fk_idcomprador,fk_idsubasta) VALUES(ID_OFERTA.NEXTVAL,fech,valorO,cedCompr,subast);
update subasta set FK_idofertaganadora=ID_OFERTA.CURRVAL where pk_idsubasta=subast;
END IF;
END ADD_OFERTA;
```
Upvotes: 1 <issue_comment>username_2: ```
CREATE OR REPLACE PROCEDURE ADD_OFERTA(valorO IN NUMBER, ali IN VARCHAR,subast IN NUMBER) AS
fech DATE;
cedCompr INTEGER;
reqpor INTEGER;
reqmin INTEGER;
inicial NUMBER;
mejor NUMBER;
tmp INTEGER;
prueba NUMBER;
BEGIN
fech := SYSDATE;
select porcentaje into reqpor from requisito;
select incremento into reqmin from requisito;
select precioInicial INTO inicial FROM subasta Where fk_idofertaganadora is null and PK_idSubasta=subast;
EXCEPTION
WHEN NO_DATA_FOUND then
BEGIN
INICIAL:=0;
SELECT pk_idParticipante into cedCompr from PARTICIPANTE p where p.FK_ALIASUSUARIO = ali;
select o.valorOferta into mejor from subasta s inner join oferta o on(s.fk_idofertaganadora=o.pk_idoferta) where s.pk_idsubasta=subast;
EXCEPTION
WHEN NO_DATA_FOUND THEN-- ERROR here
INICIAL:=0;
select greatest((inicial),(mejor+reqmin),(mejor*(reqpor/100 + 1))) into prueba from dual;
IF (valorO>=prueba) THEN
INSERT INTO OFERTA(PK_idOferta,fecha,valoroferta,fk_idcomprador,fk_idsubasta) VALUES(ID_OFERTA.NEXTVAL,fech,valorO,cedCompr,subast);
update subasta set FK_idofertaganadora=ID_OFERTA.CURRVAL where pk_idsubasta=subast;
END IF;
END;
END ADD_OFERTA;
```
Upvotes: 0
|
2018/03/15
| 1,376 | 3,544 |
<issue_start>username_0: I want to make a historical dataframe with values from time series dataframe.
Today, I have df1 as below:
```
df1:
A B C
0 1.0 2.0 3.0
```
Tomorrow, I will have df1 as below:
```
df1:
A B C
0 1.5 2.6 3.7
```
So the output I want tomorrow is as below:
```
df2:
A B C
0 1.0 2.0 3.0
1 1.5 2.6 3.7
```
I just want to keep adding each day's new value from `df1` to a new dataframe `df2` so that I can build a historical dataframe with daily values. Can you help me with this? Thank you.<issue_comment>username_1: Use [pd.concat](https://pandas.pydata.org/pandas-docs/stable/generated/pandas.concat.html)
```
df1 = pd.concat([df1, df2])
```
or [pd.DataFrame.append](https://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.append.html)
```
df1 = df1.append(df2)
```
Upvotes: 0 <issue_comment>username_2: From my understanding, you've got a source that updates once every day that you load to `df1`. Then you'd like to add that `df1` to a `df2` that stores all the values that you've seen in `df1` so far.
I'm basing my suggestion on a `df1` with the same structure as yours, but with random values. Every time you run this code, it will append those values to a text file `df2.txt` stored in the folder `c:\timeseries`.
Here we go:
---
Add a folder `C:/timeseries/` to your system. Then add an empty `.txt` file, enter the string `dates,A,B,C`, and save it as `df2.txt`.
The following snippet will take the length of that textfile and use that to build on a daily index to mimic your situation. That index will be the date for your `df1` that is otherwise filled with random numbers every time the snippet is run. And for each time the snippet is run, the data from `df1` will be appended to `df2`.
So, run this snippet once...
```
# imports
import os
import pandas as pd
import numpy as np
os.chdir('C:/timeseries/')
# creates df1 with random numbers
df1 = pd.DataFrame(np.random.randint(0,10,size=(1, 3)), columns=list('ABC'))
# Read your historic values (will be empty the first time you run it)
df2 = pd.read_csv('df2.txt', sep=",")
df2 = df2.set_index(['dates'])
# To mimic your real life situation, I'm adding a timeseries with a datestamp
# that starts where df2 ends. If df2 is empty, it starts from 01.01.2018
# Make a dummy datelist to mimic your situation
datelist = pd.date_range(pd.datetime(2018, 1, len(df2) + 1).strftime('%Y-%m-%d'), periods=1).tolist()
df1['dates'] = datelist
df1 = df1.set_index(['dates'])
df1.index = pd.to_datetime(df1.index)
df2 = df2.append(df1)
df2.to_csv('df2.txt')
print(df2)
```
... to get this output:
```
A B C
dates
2018-01-01 00:00:00 8.0 6.0 8.0
```
Those are the current values of `df1` and `df2` at the time being. I'm not using a random seed here, so your data will differ from mine.
Run it ten times in a row and you'll get this:
```
A B C
dates
2018-01-01 00:00:00 8.0 6.0 8.0
2018-01-02 00:00:00 9.0 1.0 0.0
2018-01-03 00:00:00 3.0 1.0 3.0
2018-01-04 00:00:00 4.0 7.0 6.0
2018-01-05 00:00:00 1.0 4.0 3.0
2018-01-06 00:00:00 3.0 7.0 6.0
2018-01-07 00:00:00 8.0 6.0 4.0
2018-01-08 00:00:00 4.0 7.0 0.0
2018-01-09 00:00:00 0.0 9.0 8.0
2018-01-10 00:00:00 8.0 4.0 8.0
```
In order to start from scratch, go ahead and delete all rows but the first in your df2.txt file.
I hope this is what you're looking for. If not, let me know.
Upvotes: 1
|
2018/03/15
| 331 | 1,199 |
<issue_start>username_0: Using the following Typescript code in an Angular5 project:
```
addExpense(expense: Expense): Promise<void> {
expense.id = uuidv4();
const json = JSON.stringify(expense);
const headers = new Headers({
'Content-Type': 'application/json'
});
const options = new RequestOptions({
headers: headers
});
return this.http.post(`${this.apiUrl}/expenses/`, json, options)
.toPromise();
}
```
I get a compiler error `[ts] Type 'Promise<Response>' is not assignable to type 'Promise<void>'`.<issue_comment>username_1: Try this:
```
return this.http.post(`${this.apiUrl}/expenses/`, json, options)
.toPromise();
```
Upvotes: 0 <issue_comment>username_2: I have solved it (for now) by changing `Promise<void>` to `Promise<Response>`. The compiler error goes away and I can add an expense to my backend
Upvotes: 0 <issue_comment>username_3: Your function `addExpense()` expects to return a Promise of type void as you specified here:
```
addExpense(expense: Expense): Promise<void>
```
But you are returning a promise of type response at the end of your function.
Tell it that you will return a promise of type response by changing the first line to:
```
addExpense(expense: Expense): Promise<Response>
```
Upvotes: 1
|
2018/03/15
| 476 | 1,740 |
<issue_start>username_0: I am trying to insert one specific row from one Table to other tables with same name but in different databases, something like the following:
```
Table Makers from database A
id | name | type | comments
01 OSLO A None
Table Makers from database B is empty but same structure as A
id | name | type | comments
Table Makers from database C is empty but same structure as A and B
id | name | type | comments
```
I have a query that does the job but I need to change the name of the database manually, that is.
```
Use database_B
Go
Insert into Makers
([id], [name], [type], [comments])
select [id], [name], [type], [comments] from database_A
where id = '01'
Use database_C
Go
Insert into Makers
([id], [name], [type], [comments])
select [id], [name], [type], [comments] from database_A
where id = '01'
```
Is there an alternative to do this same process without repeating the same insertion code?
Any suggestion to improve this question to make it more valuable is welcome.
|
2018/03/15
| 943 | 3,761 |
<issue_start>username_0: Well i have a problem. I have a registration for for legal users and natural users, but i need to write a validation in each php file for username checking, i have no idea how to combine two table checking.
one table is called users\_legal, 2nd one is users\_natural. In both forms name for input field is "username".
So far i have a code that checks passwords :
```
if ($password == $password_re)
{
// insert into table
$sql = $db->prepare("INSERT INTO users_legal(name, reg_number, address, phone, email, username, password) VALUES(?, ?, ?, ?, ?, ?, ?);");
$sql->bind_param("sssssss", $name, $reg_number, $address, $phone, $email, $username, $password);
$sql->execute();
$sql->close();
}
```
After making the validation in the register forms, I also need it on the login page. I figured out how to check that there is one and only one user with that username and that password, but I have no idea how to search for them across the tables.
login.php code :
```
if($_SERVER["REQUEST_METHOD"] == "POST") {
// username and password sent from form
$myusername = mysqli_real_escape_string($db,$_POST['username']);
$mypassword = mysqli_real_escape_string($db,$_POST['password']);
$sql = "SELECT id FROM login WHERE username = '$myusername' and password = '$<PASSWORD>'";
$result = mysqli_query($db,$sql);
$row = mysqli_fetch_array($result,MYSQLI_ASSOC);
$active = $row['active'];
$_SESSION['username'] = $myusername;
$count = mysqli_num_rows($result);
if($count == 1) {
session_register("myusername");
$_SESSION['username'] = $myusername;
$_SESSION['loggedin'] = true;
header("location: www.goole.lv");
}else {
$error = "Your Login Name or Password is invalid";
}
}
```
One more thing: I set my MySQL to UTF format, var_dump says the input is all right, but MySQL saves it in unbelievable forms, like Ķegums or Skrīvelis.
Thnx for any examples, tips or whateva u got.<issue_comment>username_1: When I got you right, you have two tables with users. To validate if an user has logged in successfully you look up their login credentials in the related database table.
You are asking for 'combining' these two tables. But I don't think that that's what you want. You have two separate user tables. They do not belong to each other. If you join those tables, you might have dulicate unique user ids when combining these tables.
What you could do instead is check both tables separately, first for users\_legal and second for users\_natural.
You should also think about using password hashes instead of plain passwords in your db. And use pdo ;)
Good luck
Upvotes: 1 [selected_answer]<issue_comment>username_2: To solve the problem of having two different types of users I would just put them in the same table and add a value that represents the user type for example 0 = legal and 1 = natural. This will also automatically prevent two users from sharing the same username. (This will only work for sure if the database is still empty, if not you might end up with two users with the same name). For the character encoding try setting mysql to utf-8 if you haven’t done it already (instead of just utf). Also you should never save passwords in plaintext. Use the sha1 function in php to convert them to their hash value before storing them. That way even if someone gets unauthorized access to the database they still won’t know the passwords. You should also append some user specific information to the password before hashing so that even if two users share the same password their hash values will be different. To verify if it’s correct you just apply the same procedure to the input before comparing it with the hash value you have stored.
Upvotes: -1
|
2018/03/15
| 426 | 1,164 |
<issue_start>username_0: From a unix shell scripting file, I want to extract the values in a csv file. Suppose, I have a csv file values.csv with headers V1, V2 and N like below:
```
"V1","V2","N"
"0","0",216856
"1","0",16015
"0","1",25527
"1","1",10967
```
I want to extract the column N values and assign it to a variable in a unix script file. For example,
```
a = 216856
b = 16015
c = 25527
d = 10967
```
How will you pull the values of N and assign to the variables a,b,c,d in a shell script file? Please help.
Thanks in advance.<issue_comment>username_1: Don't use individual variable names, use an array. Install [csvkit](https://csvkit.readthedocs.io), then
```
mapfile -t n < <(csvcut -c "N" file.csv)
echo "a = ${n[1]}"
echo "b = ${n[2]}"
echo "c = ${n[3]}"
echo "d = ${n[4]}"
```
Note that CSV can be surprisingly tricky to parse, so use a proper CSV parser.
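If you would rather not install csvkit, the same extraction can be done with Python's standard csv module. This is a minimal sketch, assuming the input file is named values.csv as in the question:

```
import csv

with open("values.csv", newline="") as f:
    rows = list(csv.DictReader(f))   # DictReader copes with the quoted fields

n_values = [row["N"] for row in rows]

# print shell-style assignments for the first four values
for name, value in zip(["a", "b", "c", "d"], n_values):
    print("{}={}".format(name, value))
```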
Upvotes: 1 <issue_comment>username_2: This will work in Bash, assuming you would want to use arrays, which is the right way to do this :
```
array=( $( tail -n +2 values.csv | awk -F',' '{print $3}' | tr '\n' ' ' ) )
echo "${array[0]}"
```
Regards!
Upvotes: 1 [selected_answer]
|
2018/03/15
| 984 | 2,802 |
<issue_start>username_0: receiving the following error message:
Error in mutate\_impl(.data, dots) :
Evaluation error: argument "no" is missing, with no default.
```
mutate(x,perfLev= ifelse(SS< 1438, "Below Basic",
ifelse(SS>= 1439 & SS <= 1499, "Basic",
ifelse(SS >= 1500 & SS <= 1545, "Proficient",
ifelse(SS >= 1546, "Advanced")))))
```<issue_comment>username_1: `case_when` is used to vectorize multiple `if/else` statements
```
library(dplyr)
mutate(x,perfLev= case_when(
SS < 1438 ~ "Below Basic",
SS >= 1439 & SS <= 1499 ~ "Basic",
SS >= 1500 & SS <= 1545 ~ "Proficient",
SS >= 1546 ~ "Advanced"))
```
Upvotes: 1 <issue_comment>username_2: Using comments by Make212 and Renu, here's one option for fixing it:
```
library(dplyr)
mutate(x,
perfLev = case_when(
SS < 1438 ~ "Below Basic",
SS >= 1439 & SS <= 1499 ~ "Basic",
SS >= 1500 & SS <= 1545 ~ "Proficient",
SS >= 1546 ~ "Advanced",
TRUE ~ "huh?"
) )
```
I added a "default" (`TRUE`), which is generally good (explicit code). Note that if you do not include the `TRUE`, then it would get an `NA` value, in case that's what you want. I can see it happening here if any of the following are true:
* `is.na(SS)`
* `SS >= 1438 & SS < 1439`
* `SS > 1499 & SS < 1500`
* `SS > 1545 & SS < 1546`
You may not need it if `NA` is acceptable and you are *guaranteed* of `SS`'s integrality.
This code is equivalent to a slight fix to your code:
```
mutate(x,
perfLev =
ifelse(SS < 1438, "Below Basic",
ifelse(SS >= 1439 & SS <= 1499, "Basic",
ifelse(SS >= 1500 & SS <= 1545, "Proficient",
ifelse(SS >= 1546, "Advanced", "huh?"))))
)
```
Indentation for style/clarity only.
Upvotes: 2 <issue_comment>username_3: Though OP has mentioned about problem with `ifelse` use in `mutate` but I thought to mention that in such scenario `cut` provides better option.
One can simply write as:
```
library(dplyr)
x %>%
mutate(perfLev = cut(SS, breaks = c(0, 1438, 1499, 1545, +Inf),
labels = c("Below Basic", "Basic", "Proficient", "Advanced")))
#OR
x$perfLev <- cut(x$SS, breaks = c(0, 1438, 1499, 1545, +Inf),
labels = c("Below Basic", "Basic", "Proficient", "Advanced"))
```
>
> The logic to match `breaks` with `labels` can be simplified by writing
> it down in a tabular format and using it as a hint. For the above case it could be:
>
>
>
> ```
> 0
> 1438 -- "Below Basic"
> 1499 -- "Basic"
> 1545 -- "Proficient"
> +inf -- "Advanced"
>
> ```
>
>
Upvotes: 1
|
2018/03/15
| 246 | 932 |
<issue_start>username_0: I'm creating a desktop-only Windows application and will use Electron, which uses Chromium. Chromium supports WebGL 2. I've read that WebGL on Windows is translated to DirectX calls by the ANGLE library (so OpenGL is not used at all).
Is it true that on each GPU where WebGL 1 runs, then WebGL 2 will run too? What's the required DirectX version for WebGL 1 and WebGL 2?<issue_comment>username_1: of course not. each version requires different extension impl on GPU, need to be checked separately. for example try <http://webglreport.com/> what ext for each version requires.
Upvotes: 0 <issue_comment>username_2: No. Your hardware/drivers need to support the appropriate OpenGL version:
* WebGL 1.0 exposes the OpenGL ES 2.0 feature set;
* WebGL 2.0 exposes the OpenGL ES 3.0 API.
[source](https://www.khronos.org/webgl/)
So if your hardware only supports OpenGL ES 2.0, webgl2 won't run.
Upvotes: -1
|
2018/03/15
| 452 | 1,576 |
<issue_start>username_0: I keep getting the error:
"TypeError: Cannot read property 'push' of undefined"
The error is in this line: "this.courseName.push(dish);"
Here is the code:
```
let menu = {
_courses: {
_appetizers:[] ,
_mains:[],
_desserts: []
},
addDishToCourse(courseName, dishName, dishPrice){
const dish = { name: this.dishName, price: this.dishPrice };
// ERROR HERE: **this._courses[courseName].push(dish);**
},
};
```
I have been stuck with this error for hours. Is there something I'm not seeing?
Sorry if there is a simple solution to this. I'm relatively new to JavaScript.
**Edit:**
I was indeed calling the key incorrectly. I am now calling the function as:
```
menu.addDishToCourse("_appetizers", "random", 10);
console.log(menu._courses._appetizers);
```
When I log this in compiler it returns:
```
[ { name: undefined, price: undefined } ]
```
It seems like the function isn't getting called. Am I approaching this fundamentally incorrectly?
|
2018/03/15
| 1,953 | 6,855 |
<issue_start>username_0: I am trying to figure out if an input list is a strictly increasing list. Moreover, If removing only one element from the list results in a strictly increasing list, we still consider the list true. Here is my code. It seems to have an index error, but I do not understand why.
```
def almostIncreasingSequence(sequence):
n=len(sequence)
count=0
if n<=2:
return True
for i in range (n-1):
#test if the i-th element is bigger or equal to the elements after it. If it is, remove that element, and add one to count
for j in range (i+1,n):
if sequence[i]>=sequence[j]:
sequence.pop(i)
count+=1
#if there is more than one element that has to be taken out, it's false
if count>1:
return False
return True
```<issue_comment>username_1: If you've written `for i in range(len(some_list))` in Python, you've probably done the wrong thing. This is indeed why this is failing. `n` is the length of the sequence as it stands before any processing, but that length can change as you `pop` items from the list.
Better instead is to compare each mark which indices need to be removed and do them all at once, or better yet -- don't remove them at all!! It's a side-effect that's not well-explained.
You can build this by building a list of all sequences that might be strictly increasing using `itertools.combinations`, comparing each pair with `itertools`'s `pairwise` recipe, then short-circuiting as long as at least one is.
```
import itertools
def pairwise(iterable):
(a, b) = itertools.tee(iterable)
next(b, None) # advance b
return zip(a, b)
def almostIncreasingSequence(sequence):
if not sequence:
return True
# in case of empty list
combos = itertools.combinations(sequence, len(sequence)-1)
# combos is each ordered combination that's missing one element
# it is processed as an iterator, so will do no extra work if we can
# exit early.
def strictly_increasing(cs):
return all(a < b for (a, b) in pairwise(cs))
return any(strictly_increasing(c) for c in combos)
```
Upvotes: 2 <issue_comment>username_2: ```
def almost_increasing_sequence(sequence):
if len(sequence) < 3:
return True
a, b, *sequence = sequence
skipped = 0
for c in sequence:
if a < b < c: # XXX
a, b = b, c
continue
elif b < c: # !XX
a, b = b, c
elif a < c: # X!X
a, b = a, c
skipped += 1
if skipped == 2:
return False
return a < b
if __name__ == '__main__':
assert almost_increasing_sequence([]) is True
assert almost_increasing_sequence([1]) is True
assert almost_increasing_sequence([1, 2]) is True
assert almost_increasing_sequence([1, 2, 3]) is True
assert almost_increasing_sequence([3, 1, 2]) is True
assert almost_increasing_sequence([1, 2, 3, 0, 4, 5, 6]) is True
assert almost_increasing_sequence([1, 2, 3, 0]) is True
assert almost_increasing_sequence([1, 2, 0, 3]) is True
assert almost_increasing_sequence([10, 1, 2, 3, 4, 5]) is True
assert almost_increasing_sequence([1, 2, 10, 3, 4]) is True
assert almost_increasing_sequence([1, 2, 3, 12, 4, 5]) is True
assert almost_increasing_sequence([3, 2, 1]) is False
assert almost_increasing_sequence([1, 2, 0, -1]) is False
assert almost_increasing_sequence([5, 6, 1, 2]) is False
assert almost_increasing_sequence([1, 2, 3, 0, -1]) is False
assert almost_increasing_sequence([10, 11, 12, 2, 3, 4, 5]) is False
```
Upvotes: 3 [selected_answer]<issue_comment>username_3: The only thing you need to do is walk the list, counting the number of times `sequence[i] > sequence[i+1]`. If it happens at most once, your list is almost monotonically increasing.
```
def almostIncreasingSequence(sequence):
count = 0
for i in range(0, len(sequence) - 1):
if sequence[i] > sequence[i+1]:
count += 1
return count < 2
```
You can also avoid counting, since the number of exceptions is small. Just return `False` as soon as you find the second exception, as tracked by the value of a Boolean variable initialized to `True`.
```
def almostIncreasingSequence(sequence):
increasing = True
for i in range(0, len(sequence) - 1):
if sequence[i] > sequence[i+1]:
if increasing:
increasing = False
else:
return False
return True
```
Upvotes: -1 <issue_comment>username_4: Alright, so it turns out this problem is not *that* easy.
If you want an efficient solution, I think your best bet may be an algorithm similar to the [longest increasing subsequence problem](https://en.wikipedia.org/wiki/Longest_increasing_subsequence).
But here, we don't care about the actual longest increasing subsequence - we just need it's length. Also, we can short-circuit when maintaining our ordered list if we have had to perform `n` insertions already (where `n` is our restriction on the number of "out of order" elements).
This also generalizes very well to the `n` element "almost increasing" case, and in the worst case performs `n-1` binary searches on lists of size `M-n-1` to `M`, where `M` is the size of the list.
```
import bisect
def almost_increasing(li, n=1):
if len(li) < 2:
return True
ordered_li = [li[0]]
violator_count = 0
for ele in li[1:]:
if ele < ordered_li[0]:
violator_count += 1
ordered_li[0] = ele
elif ele > ordered_li[-1]:
ordered_li.append(ele)
else:
violator_count += 1
insertion_pos = bisect.bisect_right(ordered_li, ele)
ordered_li[insertion_pos] = ele
if violator_count > n: return False
return True
```
The idea behind this algorithm is as follows:
* We move through the list, and maintain an ordered subsequence of our list all the while.
* When we reach a new element
+ if that element cannot be appended onto our ordered subsequence, it is a "violator" of the increasing property. We subsequently insert it into the ordered subsequence in the correct position, using [`bisect`](https://docs.python.org/3/library/bisect.html) for binary search.
+ otherwise, we just append it to our ordered subsequence and continue on.
* At the end of each iteration, if we have too many violators already we can short-circuit out. Otherwise, after the loop is done we are guaranteed to have an increasing subsequence that has length within `n` of the length of our original list.
---
**Demo**
```
>>> almost_increasing([5, 1, 2, 3, 4])
True
>>> almost_increasing([1, 2, 5, 2, 15, 0, 176])
False
>>> almost_increasing([1, 2, 5, 2, 15, 0, 176], 2)
True
```
Upvotes: 2
|
2018/03/15
| 1,104 | 3,821 |
<issue_start>username_0: I tried to look at similar questions but they are of no use for me.
I have a class:
```
data class TextMessage(val text: String,
override val time: Date,
override val senderId: String,
override val isText: Boolean = true)
: Message{
constructor() : this("", Date(0), "") }
```
Then I try to store an instance of it in Firestore:
```
fun sendTextMessage(message: TextMessage, channelId: String) {
chatChannelsCollectionRef.document(channelId)
.collection("messages")
.add(message)
}
```
For some reason whenever I call `sendTextMessage` I get this exception:
```
java.lang.RuntimeException: Found conflicting getters for name isText on class com.resocoder.firemessageprep.model.TextMessage
at com.google.android.gms.internal.zzevb$zza.(Unknown Source:191)
at com.google.android.gms.internal.zzevb.zzg(Unknown Source:12)
at com.google.android.gms.internal.zzevb.zza(Unknown Source:285)
at com.google.android.gms.internal.zzevb.zzbw(Unknown Source:2)
at com.google.firebase.firestore.zzk.zzcd(Unknown Source:36)
at com.google.firebase.firestore.CollectionReference.add(Unknown Source:5)
at com.resocoder.firemessageprep.util.FirestoreUtil.sendTextMessage(FirestoreUtil.kt:138)
at com.resocoder.firemessageprep.ChatActivity$onCreate$1$2.onClick(ChatActivity.kt:52)
at android.view.View.performClick(View.java:6256)
at android.view.View$PerformClick.run(View.java:24701)
at android.os.Handler.handleCallback(Handler.java:789)
at android.os.Handler.dispatchMessage(Handler.java:98)
at android.os.Looper.loop(Looper.java:164)
at android.app.ActivityThread.main(ActivityThread.java:6541)
at java.lang.reflect.Method.invoke(Native Method)
at com.android.internal.os.Zygote$MethodAndArgsCaller.run(Zygote.java:240)
at com.android.internal.os.ZygoteInit.main(ZygoteInit.java:767)
```
It doesn't make sense. I know that TextMessage implements an interface which has a property isText but that shouldn't be the cause... right?
Thanks for reading this far!<issue_comment>username_1: The problem in your code is that you are using a field named `isText`. In Cloud Firestore the corresponding getter is `getText()` and NOT `getIsText()` as expected.
If you try to change the name of the field to, let's say, `izText` instead of `isText` and to have the corresponding getter `getIzText()`, your code will work perfectly fine. Firestore removes the `is` prefix from the getter, that's why you have that conflict. For more information, you can also take a look at this **[video](https://www.youtube.com/watch?v=idywm80Qbvc&list=PLn2n4GESV0AmXOWOam729bC47v0d0Ohee&t=4m09s&index=3)**.
If you decide to change that field name, don't forget to remove the old data and add fresh data.
Upvotes: 3 [selected_answer]<issue_comment>username_2: The generated class file for TextMessage has the following methods defined. This was obtained by running [javap](https://docs.oracle.com/javase/8/docs/technotes/tools/windows/javap.html) against it:
```
public final java.lang.String getText();
public java.util.Date getTime();
public java.lang.String getSenderId();
public boolean isText();
```
The Firestore SDK is confused because it derives the names of document properties from the names of the getters in the class. So, by JavaBean convention, getText() becomes "text". And isText() also becomes "text". Firestore doesn't know which one you wanted to use for the document property called "text", hence the message.
You will have to change the name of one or the other to avoid this conflict. Alternately, you can try to use the [PropertyName](https://firebase.google.com/docs/reference/android/com/google/firebase/firestore/PropertyName) annotation to alter Firestore's naming of the field for either one.
Upvotes: 1
|
2018/03/15
| 860 | 3,075 |
<issue_start>username_0: I'm trying to use this lib in my [create-react-kotlin-app](https://github.com/JetBrains/create-react-kotlin-app):
<https://material-ui-next.com/>
I want to generate a bunch of type-safe wrappers. I started like this:
```
@file:JsModule("material-ui")
package material
import react.RState
import react.React
import react.ReactElement
external class Typography : React.Component {
override fun render(): ReactElement
}
```
...
```
fun RBuilder.typography(
classes: String = "",
variant: Variant = Variant.body1,
align: Align = Align.inherit,
color: Color = Color.default,
gutterBottom: Boolean = false,
noWrap: Boolean = false,
paragraph: Boolean = false,
handler: RHandler
) = child(Typography::class) {
attrs {
this.className = classes
this.align = align.name
this.color = color.name
this.variant = variant.name
this.gutterBottom = gutterBottom
this.noWrap = noWrap
this.paragraph = paragraph
}
handler()
}
```
And use it like:
```
typography(variant = Variant.title, color = Color.inherit) {
+"Hello World"
}
```
Is it correct approach?<issue_comment>username_1: I'm also interested in adding a ReactJS 3rd party component to Kotlin code. Please take a look at the following post, I'd appreciate any feedback regarding the solutions proposed there:
[How to import node module in React-Kotlin?](https://stackoverflow.com/questions/51431848/how-to-import-node-module-in-react-kotlin)
Upvotes: 0 <issue_comment>username_2: Indeed that is the correct approach, but you can make it nicer as follows
MaterialUi.kt
```
@file:JsModule("material-ui")
package material
import react.RState
import react.RProps
import react.React
import react.ReactElement
external interface TypographyProps : RProps {
var className: String
var align : String
var color : String
var variant : String
var gutterBottom : String
var noWrap : String
var paragraph : String
}
@JsName("Typography")
external class Typography : RComponent<TypographyProps, RState> {
override fun render(): ReactElement
}
```
And then create another file, say
MaterialUiDsl.kt
```
fun RBuilder.typography(
classes: String = "",
variant: Variant = Variant.body1,
align: Align = Align.inherit,
color: Color = Color.default,
gutterBottom: Boolean = false,
noWrap: Boolean = false,
paragraph: Boolean = false,
handler: RHandler<TypographyProps> // notice the change here
) = child(Typography::class) {
attrs {
this.className = classes
this.align = align.name
this.color = color.name
this.variant = variant.name
this.gutterBottom = gutterBottom
this.noWrap = noWrap
this.paragraph = paragraph
}
handler()
}
```
If the above file already seems verbose to you (Like how I often feel), you can change it to just
MaterialUiDsl.kt
```
fun RBuilder.typography(handler: RHandler<TypographyProps>) = child(Typography::class, handler)
```
Then you can use it wherever you like, like this:
```
typography {
attrs {
className = "my-typo"
color = "#ff00ff"
//. . .
}
}
```
Easy peasy
Upvotes: 2 [selected_answer]
|
2018/03/15
| 1,732 | 7,239 |
<issue_start>username_0: I have a DataAccessBase class with the following 2 data access methods. One for ExecuteScalar and one for ExecuteNonQuery. Is it possible to consolidate it into one generic method or is even worth worrying about?
```
protected static int ExecuteNonQuery(SqlCommand command)
{
using (SqlConnection connection = new SqlConnection(_connStr))
{
command.Connection = connection;
SqlDataAdapter da = new SqlDataAdapter(command);
command.Connection.Open();
int result = command.ExecuteNonQuery();
return result;
}
}
protected static string ExecuteScalar(SqlCommand command)
{
using (SqlConnection connection = new SqlConnection(_connStr))
{
command.Connection = connection;
SqlDataAdapter da = new SqlDataAdapter(command);
command.Connection.Open();
string result = command.ExecuteScalar().ToString();
return result;
}
}
private static DataTable GetDT(int id)
{
using (SqlConnection connection = new SqlConnection(_connStr))
{
string query = "select id, userid, name from tasks where id = @id";
SqlCommand command = new SqlCommand(query, connection);
SqlDataAdapter da = new SqlDataAdapter(command);
//Parameterized query to prevent injection attacks
command.Parameters.AddWithValue("id", id);
DataTable dt = new DataTable();
da.Fill(dt);
return dt;
}
}
```<issue_comment>username_1: You can definitely avoid the current repetition you've got with a generic method, but I wouldn't try to reduce it to a single method. Here's what I'd *potentially* do:
```
protected static int ExecuteNonQuery(SqlCommand command) =>
ExecuteCommand(command, cmd => cmd.ExecuteNonQuery());
protected static string ExecuteScalar(SqlCommand command) =>
ExecuteCommand(command, cmd => cmd.ExecuteScalar().ToString());
private static T ExecuteCommand<T>(SqlCommand command, Func<SqlCommand, T> resultRetriever)
{
using (SqlConnection connection = new SqlConnection(_connStr))
{
command.Connection = connection;
command.Connection.Open();
return resultRetriever(command);
}
}
```
For the `DataTable` one, following the same pattern you'd create the command first:
```
protected static DataTable GetDataTable(SqlCommand command) =>
ExecuteCommand(command, cmd =>
{
SqlDataAdapter da = new SqlDataAdapter(cmd);
DataTable table = new DataTable();
da.Fill(table);
return table;
});
```
Upvotes: 4 [selected_answer]<issue_comment>username_2: You can convert the ExecuteScalar to a generic method, allowing you to change the return type.
```
public T ExecuteScalar<T>(SqlCommand command)
{
using (SqlConnection connection = new SqlConnection(_connStr))
{
command.Connection = connection;
//SqlDataAdapter da = new SqlDataAdapter(command); //not needed...
command.Connection.Open();
var result = command.ExecuteScalar();
//rather than just returning result with an implicit cast, use Max's trick from here: https://stackoverflow.com/a/2976427/361842
if (Convert.IsDBNull(result))
return default(T); //handle the scenario where the returned value is null, but the type is not nullable (or remove this to have such scenarios throw an exception)
if (result is T)
return (T)result;
else
return (T)Convert.ChangeType(result, typeof(T));
}
}
```
The logic is different in this method to in the `ExecuteNonQuery` function though, so you cannot have both represented by the same method.
---
Update
======
Regarding your question about the data table, I've taken and adapted [@JonSkeet's answer](https://stackoverflow.com/a/49308801/361842) to allow the class to also handle data tables:
```
public class SqlDatabaseThing //: ISqlDatabaseThing
{
// ... additional code here ... //
public int ExecuteNonQuery(SqlCommand command, IEnumerable<SqlParameter> sqlParameters = null) =>
ExecuteNonQuery(_connStr, command, sqlParameters);
public static int ExecuteNonQuery(string connectionString, SqlCommand command, IEnumerable<SqlParameter> sqlParameters = null) =>
ExecuteCommand(connectionString, command, sqlParameters, cmd => cmd.ExecuteNonQuery());
public T ExecuteScalar<T>(SqlCommand command, IEnumerable<SqlParameter> sqlParameters = null) =>
ExecuteScalar<T>(_connStr, command, sqlParameters);
public static T ExecuteScalar<T>(string connectionString, SqlCommand command, IEnumerable<SqlParameter> sqlParameters = null) =>
ExecuteCommand(connectionString, command, sqlParameters, cmd => ConvertSqlCommandResult<T>(cmd.ExecuteScalar()));
public DataTable ExecuteToDataTable(SqlCommand command, IEnumerable<SqlParameter> sqlParameters = null) =>
ExecuteToDataTable(_connStr, command, sqlParameters);
public static DataTable ExecuteToDataTable(string connectionString, SqlCommand command, IEnumerable<SqlParameter> sqlParameters = null) =>
ExecuteCommand(connectionString, command, sqlParameters, cmd => PopulateDataTable(cmd));
private static T ExecuteCommand<T>(string connectionString, SqlCommand command, IEnumerable<SqlParameter> sqlParameters, Func<SqlCommand, T> resultRetriever)
{
using (SqlConnection connection = new SqlConnection(connectionString))
{
if (sqlParameters != null) { foreach (var p in sqlParameters) command.Parameters.Add(p); }
command.Connection = connection;
command.Connection.Open();
return resultRetriever(command);
}
}
private static DataTable PopulateDataTable(SqlCommand command)
{
var da = new SqlDataAdapter(command);
var dt = new DataTable();
da.Fill(dt);
return dt;
}
private static T ConvertSqlCommandResult<T>(object result)
{
if (Convert.IsDBNull(result))
return default(T);
if (result is T)
return (T)result;
return (T)Convert.ChangeType(result, typeof(T));
}
}
```
NB: In your code you'd included logic related to getting specific tasks. That should be kept separate from your generic database logic (i.e. as presumably you'll want to return data tables for various queries, and don't want to have to rewrite your `GetDT` code each time). As such I've provided additional sample code below showing how you could separate that logic into another class...
```
public class TaskRepository //: IRepository
{
ISqlDatabaseThing db;
public TaskRepository(ISqlDatabaseThing db)
{
this.db = db;
}
readonly string GetByIdCommand = "select id, userid, name from tasks where id = @id";
readonly string GetByIdCommandParameterId = "@id";
readonly SqlDbType GetByIdCommandParameterIdType = SqlDbType.BigInt;
public Task GetById(long id)
{
var command = new SqlCommand(GetByIdCommand);
var parameters = IEnumerableHelper.ToEnumerable(new SqlParameter(GetByIdCommandParameterId, GetByIdCommandParameterIdType) { Value = id });
var dataTable = db.ExecuteToDataTable(command, parameters);
return DataTableToTask(dataTable).First();
}
private IEnumerable<Task> DataTableToTask(DataTable dt)
{
foreach (DataRow row in dt.Rows)
{
yield return DataRowToTask(row);
}
}
private Task DataRowToTask (DataRow dr)
{
return new Task()
{
Id = dr["Id"]
,Name = dr["Name"]
,UserId = dr["UserId"]
};
}
}
public static class IEnumerableHelper
{
public static IEnumerable<T> ToEnumerable<T>(params T[] parameters)
{
return parameters;
}
}
```
NB: This code is untested; any issues please let me know.
Upvotes: 1
|
2018/03/15
| 1,687 | 5,403 |
<issue_start>username_0: I use let's encrypt free SSL (my host provider support it by default),
I checked my site at sslshopper.com (the only warning was: `The certificate is not trusted in all web browsers. You may need to install an Intermediate/chain certificate to link it to a trusted root certificate. Learn more about this error. The fastest way to fix this problem is to contact your SSL provider.`) and <https://www.geocerts.com/ssl_checker>
the result was that my site passed all tests, except `Certificate Chain Complete`. so i don't think the problem is from the certificate, telegram accepts self-signed certificate as i know.
I've tried to use telegram sample bot at <https://core.telegram.org/bots/samples/hellobot>,
after I set webhook URL, I checked my bot at [https://api.telegram.org/bot[my-token]/getWebhookinfo](https://api.telegram.org/bot%5Bmy-token%5D/getWebhookinfo)
the result was:
```
{
"ok": true,
"result": {
"url": "https://itest.gigfa.com/tlg1/tlg1.php",
"has_custom_certificate": false,
"pending_update_count": 17,
"last_error_date": 1521140994,
"last_error_message": "SSL error {337047686, error:1416F086:SSL routines:tls_process_server_certificate:certificate verify failed}",
"max_connections": 40
}
}
```
and the bot doesn't work at all.<issue_comment>username_1: Yes, the problem is with your certificate.
The error in your getWebHookInfo:
```
"last_error_message":"SSL error {337047686, error:1416F086:SSL routines:tls_process_server_certificate:certificate verify failed}"
```
This is Telegram saying that it needs the whole certificate chain (also called the CA Bundle or full chained certificate).
### How to check your certificate:
You can use the SSL Labs [SSL Server Test](https://www.ssllabs.com/ssltest/analyze.html) service to check your certificate:
Just pass your URL like the following example, replacing [valde.ci](https://valde.ci/) with your host:
<https://www.ssllabs.com/ssltest/analyze.html?d=valde.ci&hideResults=on&latest>
If you see **"Chain issues: Incomplete"** you do not serve a full chained certificate.
### How to fix:
Download the full chained certificate from your SSL certificate provider and install it on your webserver.
I don't know which service you are using, but in my case, with [gunicorn](http://gunicorn.org/), I solved it by adding the [ca-certs](http://docs.gunicorn.org/en/stable/settings.html#ca-certs) setting pointing to the `ca-bundle` file sent by my SSL certificate provider (in my case [Namecheap Comodo](https://www.namecheap.com/security/ssl-certificates/comodo/positivessl.aspx)) to my SSL configuration, like the following example:
```
ca_certs = "cert/my-service.ca-bundle"
```
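In a full gunicorn config file that setting sits next to the key and certificate entries. Here is a minimal sketch with placeholder file names; only the `ca_certs` line reflects the actual fix described above:

```
# gunicorn.conf.py -- hypothetical paths; point these at your own files
bind = "0.0.0.0:8443"                   # one of the ports Telegram accepts for webhooks
certfile = "cert/my-service.crt"        # server certificate
keyfile = "cert/my-service.key"         # private key
ca_certs = "cert/my-service.ca-bundle"  # CA bundle (intermediate chain) from the provider
```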
For further information: [@martini](https://github.com/zammad/zammad/issues/770#issuecomment-310484775) answer on this [thread](https://github.com/zammad/zammad/issues/770) and the [FIX: Telegram Webhooks Not Working](https://www.cubewebsites.com/blog/guides/fix-telegram-webhooks-not-working/) post.
Upvotes: 4 [selected_answer]<issue_comment>username_2: For those who use webmin and Let's Encrypt, My solution after 5 hours:
Download the file from the link below:
[lets-encrypt-r3-cross-signed](https://letsencrypt.org/certs/lets-encrypt-r3-cross-signed.pem)
go to Servers -> Apache Webserver -> your virtual host
[](https://i.stack.imgur.com/Y4Zgc.png)
Inside, set the downloaded file in the "Certificate authorities file" box:
[](https://i.stack.imgur.com/icws1.png)
It seems there must have been a change in the SSL check process.
Upvotes: 0 <issue_comment>username_3: In my case I have my own https server running in node.js, and solution was add the .pem file obtained from my SSL provider in the credentials of https server, here the code:
```js
// modules
const fs = require('fs')
const express = require('express')
const https = require('https')
// read files
const cert = fs.readFileSync('./ssl/cert.crt')
const key = fs.readFileSync('./ssl/key.key')
const ca = fs.readFileSync('./ssl/ca.pem')
// set in an object (you must respect the field names)
const credentials = { key, cert, ca }
// https server
const apiApp = express()
// ... your middlewares
const apiAppHttps = https.createServer(credentials, apiApp)
// telegram only supports ports 443, 80, 88, 8443
apiAppHttps.listen(8443, () => {
console.log(`API listen ar port 8443`)
})
```
Note that the Telegram webhook only accepts ports 443, 80, 88, and 8443; you can get more info about it here:
<https://core.telegram.org/bots/faq#im-having-problems-with-webhooks>
Upvotes: 0 <issue_comment>username_4: I had the same issue with my k8s setup, i was using cert-manager to issue self-signed certificates but for some reason it doesn't work, here is how i solved this:
1- Issue the certificate using this command:
```
openssl req -newkey rsa:2048 -sha256 -nodes -keyout PRIVATE.key -x509 -days 365 -out PUBLIC.pem -subj "/C=NG/ST=Lagos/L=Lagos/O=YOUR_ORG_NAME_HERE/CN=PUT_YOUR_DOMAIN_HERE"
```
NOTE: Fill in YOUR\_ORG\_NAME\_HERE and PUT\_YOUR\_DOMAIN\_HERE with your information.
2- Base64-encode both files to be stored in the k8s secret.
3- Edit the secret and put the values in place:
```
kubectl edit secret [secret-name] -n [namespace]
```
Note: put the base64 content of PUBLIC.pem into `ca.crt` and `tls.crt`, and the base64 of PRIVATE.key into `tls.key`.
Upvotes: 1
|
2018/03/15
| 748 | 2,637 |
<issue_start>username_0: I am trying to make a game on swift SpriteKit (an RPG more specifically). I am having an issue with making the camera follow my character. So far all that happens is the camera appears off screen and you can't even see the character. Thanks!
```
import SpriteKit
class GameScene: SKScene {
let cam = SKCameraNode()
let james = SKSpriteNode()
override func didMove(to view: SKView) {
self.anchorPoint = .zero
var background = SKSpriteNode(imageNamed: "image-3")
background.size = CGSize(width: 1500, height: 1000);
background.position = CGPoint(x: frame.size.width / 2, y: frame.size.height / 2)
addChild(background)
view.ignoresSiblingOrder = false
self.camera = cam
let james = Player()
james.size = CGSize(width: 40, height: 40)
james.position = CGPoint(x: self.size.width / 2, y: self.size.height / 2)
james.zPosition = 1
self.addChild(james)
}
override func didSimulatePhysics() {
self.camera!.position = james.position
}
}
```<issue_comment>username_1: I don't see any physics happening in the code you pasted. So likely `didSimulatePhysics` isn't getting called. try moving the `self.camera!.position = james.position` to the update function.
>
> edit
>
>
>
a lot of trouble lately seems to stem from the fact that the scene is being paused by default. try setting...
```
self.isPaused = false
```
inside
```
override func didMove(to view: SKView)
```
and putting a breakpoint on
```
self.camera!.position = james.position
```
to ensure that it is getting called
Upvotes: 0 <issue_comment>username_2: You need to add camera to your scene. I would recommend making camera a child of James so that it always follows James
```
import SpriteKit
class GameScene: SKScene {
let cam = SKCameraNode()
let james = SKSpriteNode()
override func didMove(to view: SKView) {
self.anchorPoint = .zero
var background = SKSpriteNode(imageNamed: "image-3")
background.size = CGSize(width: 1500, height: 1000);
background.position = CGPoint(x: frame.size.width / 2, y: frame.size.height / 2)
addChild(background)
view.ignoresSiblingOrder = false
self.camera = cam
let james = Player()
james.size = CGSize(width: 40, height: 40)
james.position = CGPoint(x: self.size.width / 2, y: self.size.height / 2)
james.zPosition = 1
james.addChild(cam)
self.addChild(james)
}
}
```
Upvotes: 1
|
2018/03/15
| 476 | 1,370 |
<issue_start>username_0: This is the example:
On Sheet 3
```
A B C
1 A
2 13
3 190
```
These numbers are generated based on data from different Sheets. How could I create a formula on sheet3 that uses a range based on these numbers.
Something like: c= SUM Sheet1!(A1&A2:A1&A3) to sum the range A13 to A190 that is on sheet 1.
The reason I want to do this is because the number stored in A3 is dynamically generated.
Please, help.
Thanks in advance.<issue_comment>username_1: try,
```
=sum(indirect(a1&a2&":"&a1&a3))
```
INDEX would be better but A1 should contain 1, not A. Example,
```
=sum(index(A:Z, a2, code(upper(a1))-64):index(A:Z, a3, code(upper(a1))-64))
```
Upvotes: 2 <issue_comment>username_2: Got it!
The way to solve the problem is to add a reference to the name of the other workbook on the same page. Jeeped solution worked great, but I needed more info in the formula.
I changed my data like this:
```html
| | SHEET1 | SHEET1 |
| --- | --- | --- |
| COLUMN | F | G |
| FIRST CELL | 2 | 190 |
| LAST CELL | 160 | 300 |
```
So, the formula ended up like this:
=SUM(INDIRECT("'"&B2&"'!"&B3&B4&":"&B3&B5))
Where B2 was the reference to the cell with the sheet name from which the data was retrieved.
Thank you Jeeped for your help. You were great and I admire your professionalism and speed to point me on the right path.
Upvotes: 0
|
2018/03/15
| 686 | 2,580 |
<issue_start>username_0: ```
window.onload = function() {
var elementts = document.getElementsByClassName('cells');
for(var i = 0; i < elementts.length; i++) {
var show_clicked = elementts[i];
if( show_clicked.onclick==true && show_clicked.length==0){
alert("1");
}else if( show_clicked.onclick==true && show_clicked.length==1){
alert("2");
}else if( show_clicked.onclick==true && show_clicked.length==2){
alert('3');
}else{
}
}
}
```
I am trying to check both element clicked and its number javascript. How do you do that?<issue_comment>username_1: You can check if element has been clicked by attaching eventListener to it. For example:
```
element.addEventListener('click', myFunction);
```
This code will fire "myFunction" on every click on "element".
In your situation you have a list of elements. You want to determine which element of the list has just been clicked.
One of the ways you can do it is:
1. Select list of elements and assign it to variable (as you did)
2. Convert this list to array
3. Attach an eventListener to parent element of the list and using bubbling check which of the child element is clicked
4. If one of the elements stored in array is clicked, fire the function (in the example below - logs to console index of clicked element in array)
It is not the best solution but I hope it shows the general concept of how to solve this problem. And here is the example implementation:
```
var parentElement = document.getElementById('parentElement')
var elements = document.getElementsByClassName('elements');
var elementsArr = Array.apply(null, elements);
parentElement.addEventListener('click', function(event) {
if (elementsArr.indexOf(event.target) > -1) {
console.log(elementsArr.indexOf(event.target));
}
}, true);
```
Helpful articles:
* Converting a NodeList to an Array:
<http://www.jstips.co/en/javascript/converting-a-node-list-to-an-array/>
* Event Listeners:
<https://developer.mozilla.org/en-US/docs/Web/API/EventListener>
* Bubbling:
[What is event bubbling and capturing?](https://stackoverflow.com/questions/4616694/what-is-event-bubbling-and-capturing)
Upvotes: 1 <issue_comment>username_2: I think window.onload doesn't help you. You need kind of event listener. Just for a simple solution you may try to give consequential id number to your elements which you consider. And you may use below:
```
window.onclick= function(event){alert(event.target.id);}
```
This gives you the clicked element's id.
Upvotes: 0
|
2018/03/15
| 884 | 2,039 |
<issue_start>username_0: I need to delete the columns with more than 50% of zeros and also columns with more than 25% of na. I tried using clean function deleting first the na in the `read.csv(...., na.string="na")`. Then I used `write.csv` and tried to use `read.csv` for the new data file to use the clean function again using `read.csv(...., na.string="0")` but an error comes up saying
>
> ERROR:This data set does not require cleaning.
>
>
>
Is there a way to use apply and execute a function for both zeros and `NA`s?
Sorry I am new to R.<issue_comment>username_1: You can do something like this using `sapply` to directly return indexes of those columns that have `>=50%` valid (i.e. non-null and non-`NA`) entries. Since you don't provide any sample dataset, I am generating a sample `data.frame`.
```
# Sample data
set.seed(2017);
x <- sample(20);
x[sample(1:20, 5)] <- NA;
x[sample(1:20, 5)] <- 0;
df <- as.data.frame(matrix(x, ncol = 4));
df;
# V1 V2 V3 V4
#1 19 0 7 2
#2 0 1 0 NA
#3 9 NA NA 15
#4 NA 16 20 0
#5 0 4 3 NA
# 50% threshold
thresh <- 0.50;
df[, sapply(df, function(x) length(x[!(is.na(x) | x == 0)]) / length(x) >= 0.5)];
# V2 V3
#1 0 7
#2 1 0
#3 NA NA
#4 16 20
#5 4 3
```
Explanation: `x[!(is.na(x) | x == 0)]` selects column entries that are non-null and non-`NA`; we then calculate the fraction of non-null & non-`NA` entries amongst all entries per column, and return indices of those columns that have a fraction `>=0.5`.
Upvotes: 0 <issue_comment>username_2: **data**
```
set.seed(1);
df <- as.data.frame(matrix(sample(c(1,1,0,NA),42,T), ncol = 6));
# V1 V2 V3 V4 V5 V6
# 1 1 0 NA 1 NA 0
# 2 1 0 1 0 1 NA
# 3 0 1 0 1 1 1
# 4 NA 1 NA 1 0 0
# 5 1 1 1 1 1 1
# 6 NA 0 NA 1 1 NA
# 7 NA 1 NA 1 NA 0
```
**solution**
```
df[,colSums(df==0,na.rm = T)/nrow(df) < 0.25 & colSums(is.na(df))/nrow(df) < 0.5]
# V2 V4 V5 V6
# 1 0 1 1 1
# 2 1 0 NA NA
# 3 NA 1 1 NA
# 4 1 1 1 1
# 5 1 NA 1 0
# 6 1 1 NA 1
# 7 1 NA NA 1
```
Upvotes: 1
|
2018/03/15
| 827 | 1,828 |
<issue_start>username_0: I want the price and currency to be centered ? what is the best way to do it ?
```css
.currency {
background: url(../css/images/currency.png);
background-size: 30px;
background-repeat: no-repeat;
background-position-x: center;
background-position-y: center;
}
```
```html
**3**jours
300
```
|
2018/03/15
| 1,156 | 3,216 |
<issue_start>username_0: I have an ugly list of lists generated in a program,
which looks like this:
```
a= [[93.400000000000006, "high"], [98.600000000000009, 99.0, "high"], [121.30000000000001, 124.1000000000000]]
```
I am saving it to a text file as follows:
```
with open('sample.txt','a') as outfile:
json.dump(a,outfile)
outfile.write("\n\n")
```
When I open the text file, the values saved are an eyesore.
How do I save each list to a new line?
For example if I wanted to print each list to a new line, I could simply do:
```
for i in range(len(a)):
print a[i]
```
thank you
EDIT:
OUTPUT HAS TO LOOK LIKE :
[93.400000000000006, "high"]
[98.600000000000009, 99.0, "high"]
i.e each on one line.<issue_comment>username_1: I really don't any reason here to use [`json.dumps`](https://docs.python.org/3.6/library/json.html#json.dumps). You can just use a normal `for` loop:
```
a = [
[93.400000000000006, "high"],
[98.600000000000009, 99.0, "high"],
[111.60000000000001, 112.5, "high"]
]
with open('sample.txt', 'a') as outfile:
for sublist in a:
outfile.write('{}\n'.format(sublist))
```
The above code produces the output:
```
[93.400000000000006, 'high']
[98.600000000000009, 99.0, 'high']
[111.60000000000001, 112.5, 'high']
```
Upvotes: 3 [selected_answer]<issue_comment>username_2: this should work for you:
```
file = open('sample.txt', 'w')
for i in a:
    file.write(str(i))
    file.write("\n\n")
file.close()
```
Upvotes: 0 <issue_comment>username_3: You can try:
```
with open('sample.txt','a') as outfile:
for item in a:
json.dump(item,outfile)
outfile.write("\n\n")
```
Upvotes: 1 <issue_comment>username_4: If you want the output to be in JSON format, but want the elements of the outer list to appear on separate lines in the file, supply the `indent` keyword argument:
```
with open('sample.txt','w') as outfile:
json.dump(a, outfile, indent=4)
```
Given a number or a whitespace string, `indent` specifies the indentation to use at each level, showing the structure of the data. By default, the output is not pretty-printed at all, to save space. See the [documentation](https://docs.python.org/3/library/json.html#json.JSONEncoder) for details. The same argument works with `json.dumps` to create a string.
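To illustrate what `indent` does with nested lists, here is a small interactive example (not from the original answer):

```
>>> import json
>>> print(json.dumps([[1, "a"], [2, "b"]], indent=4))
[
    [
        1,
        "a"
    ],
    [
        2,
        "b"
    ]
]
```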
Note that the JSON data format is not designed to be "embedded" in other files; you should not use the `'a'` file mode for writing JSON, and should not plan on the file containing anything else besides your one JSON value.
Another option is the [`pprint` standard library module](https://docs.python.org/3/library/pprint.html), if you aren't especially concerned with the data format. `pprint.pprint` will automatically decide how to split up lines based on their length:
```
>>> import pprint
>>> pprint.pprint([[1],[2],[3]])
[[1], [2], [3]]
>>> pprint.pprint([[1],[2],[3]], width=1) # force line-wrapping
[[1],
[2],
[3]]
```
It can print to a file by specifying the `stream` argument.
Finally, it is worth noting the option of iterating over the list and using `json.dump` for each line. This **does not produce valid JSON**, but instead a related data format called JSONL.
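A minimal sketch of that JSONL-style approach, reusing the list `a` from the question (values shortened for readability):

```
import json

a = [[93.4, "high"], [98.6, 99.0, "high"], [121.3, 124.1]]

with open("sample.txt", "w") as outfile:
    for sublist in a:
        json.dump(sublist, outfile)  # one JSON value per line
        outfile.write("\n")
```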
Upvotes: 1
|
2018/03/15
| 3,003 | 11,135 |
<issue_start>username_0: I need help understanding a basic implementation of the hello world program in assembly language using the SPARC instruction set. I have the fully functioning code below I just need help understanding how it works. I could not figure out how to post my code with line numbers so I do apologize for any confusion regarding referencing specific lines of code. Any help is very extremely appreciated.
First, the line with the cout .equ ... console data port
and the line with the costat .equ ... console status port. Not sure what this means. It looks like they are assigning the terms "cout" and "costat" to memory addresses 0x0 and 0x4 respectively, but what is a console data port and a console status port, what is the significance of this line of code?
Next, the line with the "sethi" command really confuses me. I know it's related to setting the most significant 22 bits to something but I don't understand what the significance is in this program, what are we accomplishing with this command I really don't understand it at all.
Next, the loop subroutine. It looks like we're loading the contents of register 2 plus the hello world string (defined by a sequence of ASCII characters) and putting it in register 3. I'm not familiar with the HEX notation in the form of 0xnn where n refer to integers. Is this an abbreviated form of standard hexadecimal notation?
next line of the loop it looks like we are adding the contents of register 3 plus zero, and storing the result in register 3. What is the significance of this, why add zero?
last line of the loop is 'be End'. I believe this means "branch if equal and branch to the subroutine called "end" but branch if equal to what? The notes say branch if null, but again, if what is null? I'm not sure what this is referring to.
Next, we have the 'Wait' subroutine which begins with a command to load an unsigned byte at the address of register 4 plus costat (our console status port) and store the result in register 1. Again, what does this mean, what is this instruction doing in the program? By the way when a term is in braces like '[ ]' that is referring to the contents of the memory address right? Or is it referring to the memory address itself. I am constantly confused by this.
Next line we are using "and" with register 1 and another ASCII character and putting the result back in register 1. Maybe this is some sort of punctuation, perhaps the comma in "hello, world!" Again what is this command doing?
Next line is "be wait" which looks like "branch if equal to subroutine 'wait'" Again, branch if equal to what? also why call subroutine 'wait' from within the subroutine, is it a recursive call? What is going on here?
Next line I think is taking the byte in register 3 and storing it in the contents of register 4 plus cout (our console data port). This must have to do with outputting the characters to the console but how is this working, what is taking place in this line of code, why add register 4 to cout?
Next line seems to be incrementing register 2 to the next machine word, possibly relating to the next character in the hello world string.
Lastly, a "branch always" call back to the loop. What is the significance of this, please explain if possible. Thank you
! Prints "Hello, world! \n" in the msgarea.
! SRCTools version: vph 6/29/00, updated rez 4/16/02
! ARCTools version: mww converted 6/17/05
```
.begin
BASE .equ 0x3fffc0 !Starting point of the memory mapped region
COUT .equ 0x0 !0xffff0000 Console Data Port
COSTAT .equ 0x4 !0xffff0004 Console Status Port.
.org 2048
add %r0, %r0, %r2
add %r0, %r0, %r4
sethi BASE, %r4
Loop: ld [%r2 + String], %r3 !Load next char into r3
addcc %r3,%r0,%r3
be End ! stop if null.
Wait: ldub [%r4+COSTAT], %r1
andcc %r1, 0x80, %r1
be Wait
stb %r3, [%r4+COUT] !Print to console
add %r2, 4, %r2 !increment String offset (r2)
ba Loop
End: halt
.org 3000
! The "Hellow, world!" string
String: 0x48, 0x65, 0x6c, 0x6c, 0x6f
0x2c, 0x20, 0x77, 0x6f, 0x72
0x6c, 0x64, 0x21, 0x0a, 0
.end
```<issue_comment>username_1: >
> What is a console data port and a console status port
>
>
>
That depends on your hardware. Apparently your console is memory mapped and uses those addresses for communication.
>
> Next, the line with the "sethi" command really confuses me.
>
>
>
It is used to load `r4` with `0xffff0000` which is the base address for the memory mapped range. As you said, `sethi` only uses 22 bits so you need to shift that address right by 10 bits, which then gives `3fffc0` (the value of `BASE`).
>
> I'm not familiar with the HEX notation in the form of 0xnn where n refer to integers. Is this an abbreviated form of standard hexadecimal notation?
>
>
>
You know leading zeroes can be ignored, right? If I give you $00000100 you won't be a millionaire.
>
> What is the significance of this, why add zero?
>
>
>
The important part is the `cc`. That addition is used to set flags so you can check for zero using the following `be`.
>
> Next line is "be wait" which looks like "branch if equal to subroutine 'wait'
>
>
>
`Wait` is not a subroutine it's just a label. `be` is just looking at the zero flag, set earlier, see previous point and an instruction set reference.
>
> why add register 4 to cout
>
>
>
Because `cout` is just an offset from the start of the memory mapped region which is pointed to by `r4`. You really want to write to `0xffff0000` and that is calculated as `0xffff0000+0`. Of course knowing `COUT` is zero, you could omit the addition.
>
> Lastly, a "branch always" call back to the loop. What is the significance of this, please explain if possible.
>
>
>
You should have understood this part yourself. Obviously it's going back to print the next character.
Upvotes: 2 <issue_comment>username_2: >
> but what is a console data port and a console status port, what is the significance of this line of code?
>
>
>
It would help you to understand how an old fashioned serial port works. Usually there is a hardware register that you write to in order to output a character, and there is a status register that you poll in order to find out when it is safe to write another character of output. The reason for this is that it takes time for the byte of data to be sent out on the serial line bit by bit, and if you write a character too quickly after the previous character, the output will be garbled.
>
> I'm not familiar with the HEX notation in the form of 0xnn where n refer to integers. Is this an abbreviated form of standard hexadecimal notation?
>
>
>
It's not an abbreviated form of hex notation, it is simply hex notation. All hex numbers are of the form 0xnn, and the 'nn' may be any number of hex digits. So 0x48 means hex 48, which in decimal would be 4*16+8 = 72, and if you look at any chart of ascii characters, you'll see this is the letter 'H'.
>
> next line of the loop it looks like we are adding the contents of register 3 plus zero, and storing the result in register 3. What is the significance of this, why add zero?
>
>
>
The addcc instruction performs an add operation, but also combines it with a test operation. So the machine's condition codes are set according to the result of the add. In this case, we don't really care about adding anything, we just want to perform a test, so zero is added.
>
> last line of the loop is 'be End'. I believe this means "branch if equal and branch to the subroutine called "end" but branch if equal to what? The notes say branch if null, but again, if what is null? I'm not sure what this is referring to.
>
>
>
It refers to the result of the last test operation. Since assembly language does not have a "if-then" instruction, this is how it's done. You test something, and then branch based on the result of the test. So the way to read these two lines of code would be "if r3 equals zero, then branch to End."
>
> Next, we have the 'Wait' subroutine which begins with a command to load an unsigned byte at the address of register 4 plus costat (our console status port) and store the result in register 1. Again, what does this mean, what is this instruction doing in the program? By the way when a term is in braces like '[ ]' that is referring to the contents of the memory address right? Or is it referring to the memory address itself. I am constantly confused by this.
>
>
>
The square brackets always refer to something stored in memory, and the expression within the square brackets give the memory address. So this instruction means "add r4 and COSTAT and use this result as a memory address, then load the (byte) contents of that memory address into r1."
>
> Next line we are using "and" with register 1 and another ASCII character and putting the result back in register 1. Maybe this is some sort of punctuation, perhaps the comma in "hello, world!" Again what is this command doing?
>
>
>
The 'and' instruction performs a bitwise "and" operation between two numbers, in this case the contents of r1 and the hex value 0x80. The result is stored in r1, and the 'cc' specifies that the result (r1) is to be tested and condition codes set. The 0x80 is not an ascii character, but rather a single bit that represents some status flag in the status byte. After you write a character to the serial output register, this status bit will set to zero until it is safe to write another character.
>
> Next line is "be wait" which looks like "branch if equal to subroutine 'wait'" Again, branch if equal to what? also why call subroutine 'wait' from within the subroutine, is it a recursive call? What is going on here?
>
>
>
The "be wait" uses the result of the previous test to determine whether or not to take the branch. So you can read these two instructions as "if r1 & 0x80 then goto wait". These instructions implement a loop which keeps testing this bit forever until the time when it is safe to write another character. The bit is changed by the hardware, and as soon as that happens, the loop will terminate and control will go on to the next instruction.
>
> Next line I think is taking the byte in register 3 and storing it in the contents of register 4 plus cout (our console data port). This must have to do with outputting the characters to the console but how is this working, what is taking place in this line of code, why add register 4 to cout?
>
>
>
r4 represents the "base address" where the serial port registers live, and cout and costat are offsets into that region. So here you are just writing a character to the output register.
>
> Next line seems to be incrementing register 2 to the next machine word, possibly relating to the next character in the hello world string.
>
>
>
That is clearly the intent, however it makes no sense to add 4 to r2 as that will skip over 3 characters. This looks like a bug, and should be just adding 1.
Upvotes: 1
|
2018/03/15
| 3,381 | 12,707 |
<issue_start>username_0: I can't figure out the order my drop tables and create tables should be in. I'm creating a knockoff soccer registration database. Originally I was fine but wanted to make it more complex and since then I can't figure out the order everything is supposed to be in. I know this sounds dumb, but I'm still a student who is trying to create something on his own to learn more. Below is the order my drop tables and create tables are in right now along with my alter tables. I greatly appreciate your guys help!
```
-- --------------------------------------------------------------------------------
-- Drop Tables
-- --------------------------------------------------------------------------------
IF OBJECT_ID('TTeamPlayers') IS NOT NULL DROP TABLE TTeamPlayers
IF OBJECT_ID('TPlayers') IS NOT NULL DROP TABLE TPlayers
IF OBJECT_ID('TTeamCoaches') IS NOT NULL DROP TABLE TTeamCoaches
IF OBJECT_ID('TFields') IS NOT NULL DROP TABLE TFields
IF OBJECT_ID('TAgeGroups') IS NOT NULL DROP TABLE TAgeGroups
IF OBJECT_ID('TReferees') IS NOT NULL DROP TABLE TReferees
IF OBJECT_ID('TCoaches') IS NOT NULL DROP TABLE TCoaches
IF OBJECT_ID('TStates') IS NOT NULL DROP TABLE TStates
IF OBJECT_ID('TSockSizes') IS NOT NULL DROP TABLE TSockSizes
IF OBJECT_ID('TPantSizes') IS NOT NULL DROP TABLE TPantSizes
IF OBJECT_ID('TShirtSizes') IS NOT NULL DROP TABLE TShirtSizes
IF OBJECT_ID('TGenders') IS NOT NULL DROP TABLE TGenders
IF OBJECT_ID('TTeams') IS NOT NULL DROP TABLE TTeams
IF OBJECT_ID('Z_TeamPlayers') IS NOT NULL DROP TABLE Z_TeamPlayers
IF OBJECT_ID('Z_TeamCoaches') IS NOT NULL DROP TABLE Z_TeamCoaches
IF OBJECT_ID('Z_Teams') IS NOT NULL DROP TABLE Z_Teams
IF OBJECT_ID('Z_Players') IS NOT NULL DROP TABLE Z_Players
IF OBJECT_ID('Z_Coaches') IS NOT NULL DROP TABLE Z_Coaches
IF OBJECT_ID('TUsers') IS NOT NULL DROP TABLE TUsers
IF OBJECT_ID('TRoles') IS NOT NULL DROP TABLE TRoles
IF OBJECT_ID('TLogins') IS NOT NULL DROP TABLE TLogins
IF OBJECT_ID('TFieldGames') IS NOT NULL DROP TABLE TFieldGames
------------------------------------------------------------------------------------
-- create tables
------------------------------------------------------------------------------------
create table TTeams
(
intTeamID INTEGER NOT NULL
,strTeam VARCHAR(50) NOT NULL
,CONSTRAINT TTeams_PK PRIMARY KEY ( intTeamID )
)
CREATE TABLE TGenders
(
intGenderID INTEGER NOT NULL
,strGender VARCHAR(10) NOT NULL
,CONSTRAINT TGenders_PK PRIMARY KEY ( intGenderID )
)
CREATE TABLE TShirtSizes
(
intShirtSizeID INTEGER NOT NULL
,strShirtSize VARCHAR(50) NOT NULL
,CONSTRAINT TShirtSizes_PK PRIMARY KEY ( intShirtSizeID )
)
CREATE TABLE TPantSizes
(
intPantSizeID INTEGER NOT NULL
,strPantSize VARCHAR(50) NOT NULL
,CONSTRAINT TPantSizes_PK PRIMARY KEY ( intPantSizeID )
)
CREATE TABLE TSockSizes
(
intSockSizeID INTEGER NOT NULL
,strSockSize VARCHAR(50) NOT NULL
,CONSTRAINT TSockSizes_PK PRIMARY KEY ( intSockSizeID )
)
CREATE TABLE TStates
(
intStateID INTEGER NOT NULL
,strState VARCHAR(50) NOT NULL
,CONSTRAINT TStates_PK PRIMARY KEY ( intStateID )
)
CREATE TABLE TReferees
(
intRefereeID INTEGER NOT NULL
,strFirstName VARCHAR(50) NOT NULL
,strMiddleName VARCHAR(50) NOT NULL
,strLastName VARCHAR(50) NOT NULL
,strEmail VARCHAR(50) NOT NULL
,dtmDateOfBirth DATE NOT NULL
,strCity VARCHAR(50) NOT NULL
,intStateID INTEGER NOT NULL
,strAddress varchar(50) not null
,strZip varchar(50) not null
,strPhoneNumber VARCHAR(50) NOT NULL
,CONSTRAINT TReferees_PK PRIMARY KEY ( intRefereeID )
)
CREATE TABLE TAgeGroups
(
intAgeGroupID INTEGER NOT NULL
,strAge VARCHAR(10) NOT NULL
,CONSTRAINT TAgeGroups_PK PRIMARY KEY ( intAgeGroupID )
)
CREATE TABLE TFields
(
intFieldID INTEGER NOT NULL
,strFieldName VARCHAR(50) NOT NULL
,CONSTRAINT TFields_PK PRIMARY KEY ( intFieldID )
)
CREATE TABLE TCoaches
(
intCoachID INTEGER NOT NULL
,strFirstName VARCHAR(50) NOT NULL
,strMiddleName varchar(50) not null
,strLastName VARCHAR(50) NOT NULL
,intShirtID INTEGER NOT NULL
,dtmDateOfBirth DATE NOT NULL
,strCity Varchar(50) not null
,intStateID integer not null
,strPhoneNumber varchar(50) not null
,strEmail VARCHAR(50) NOT NULL
,strAddress varchar(50) not null
,strZip varchar(50) not null
,CONSTRAINT TCoaches_PK PRIMARY KEY ( intCoachID )
)
CREATE TABLE TTeamCoaches
(
intTeamCoachID INTEGER NOT NULL
,intTeamID INTEGER NOT NULL
,intCoachID INTEGER NOT NULL
,CONSTRAINT TTeamCoaches_PK PRIMARY KEY ( intTeamCoachID )
)
CREATE TABLE TPlayers
(
intPlayerID INTEGER NOT NULL
,strFirstName VARCHAR(50) NOT NULL
,strMiddleName varchar(50) not null
,strLastName VARCHAR(50) NOT NULL
,strEmail VARCHAR(50) NOT NULL
,intShirtSizeID INTEGER NOT NULL
,intPantSizeID INTEGER NOT NULL
,intSockSizeID INTEGER NOT NULL
,strCity VARCHAR(50) NOT NULL
,intStateID INTEGER NOT NULL
,intGenderID INTEGER NOT NULL
,intAgeGroupID INTEGER NOT NULL
,strAddress varchar(50) not null
,strZip varchar(50) not null
,CONSTRAINT TPlayers_PK PRIMARY KEY ( intPlayerID )
)
CREATE TABLE TTeamPlayers
(
intTeamPlayerID INTEGER NOT NULL
,intTeamID INTEGER NOT NULL
,intPlayerID INTEGER NOT NULL
,CONSTRAINT TTeamPlayers_PK PRIMARY KEY ( intTeamPlayerID )
)
CREATE TABLE Z_TeamPlayers
(
intTeamPlayerAuditID INTEGER IDENTITY NOT NULL
,intTeamPlayerID INTEGER NOT NULL
,intTeamID INTEGER NOT NULL
,intPlayerID INTEGER NOT NULL
,UpdatedBy VARCHAR(128) NOT NULL
,UpdatedOn DATETIME NOT NULL
,strAction VARCHAR(1) NOT NULL
,strModified_Reason VARCHAR(1000)
,CONSTRAINT Z_TeamPlayers_PK PRIMARY KEY ( intTeamPlayerAuditID )
)
CREATE TABLE Z_TeamCoaches
(
intTeamCoachAuditID INTEGER IDENTITY NOT NULL
,intTeamCoachID INTEGER NOT NULL
,intTeamID INTEGER NOT NULL
,intCoachID INTEGER NOT NULL
,CONSTRAINT Z_TeamCoaches_PK PRIMARY KEY ( intTeamCoachAuditID )
)
CREATE TABLE Z_Teams
(
intTeamAuditID INTEGER IDENTITY NOT NULL
,intTeamID INTEGER NOT NULL
,strTeam VARCHAR(50) NOT NULL
,UpdatedBy VARCHAR(128) NOT NULL
,UpdatedOn DATETIME NOT NULL
,strAction VARCHAR(1) NOT NULL
,strModified_Reason VARCHAR(1000)
,CONSTRAINT Z_Teams_PK PRIMARY KEY ( intTeamAuditID )
)
CREATE TABLE Z_Players
(
intPlayerAuditID INTEGER IDENTITY NOT NULL
,intPlayerID INTEGER NOT NULL
,strFirstName VARCHAR(50) NOT NULL
,strLastName VARCHAR(50) NOT NULL
,strEmail VARCHAR(50) NOT NULL
,intShirtSizeID INTEGER NOT NULL
,intPantSizeID INTEGER NOT NULL
,intSockSizeID INTEGER NOT NULL
,strCity VARCHAR(50) NOT NULL
,intStateID INTEGER NOT NULL
,intGenderID INTEGER NOT NULL
,intAgeGroupID INTEGER NOT NULL
,UpdatedBy VARCHAR(128) NOT NULL
,UpdatedOn DATETIME NOT NULL
,strAction VARCHAR(1) NOT NULL
,strModified_Reason VARCHAR(1000)
,CONSTRAINT Z_Players_PK PRIMARY KEY ( intPlayerAuditID )
)
CREATE TABLE Z_Coaches
(
intAuditCoachID INTEGER IDENTITY NOT NULL
,intCoachID INTEGER NOT NULL
,strFirstName VARCHAR(50) NOT NULL
,strLastName VARCHAR(50) NOT NULL
,strCity Varchar(50) not null
,intStateID integer not null
,strPhoneNumber varchar(50) not null
,UpdatedBy VARCHAR(128) NOT NULL
,UpdatedOn DATETIME NOT NULL
,strAction VARCHAR(1) NOT NULL
,strModified_Reason VARCHAR(1000)
,CONSTRAINT Z_Coaches_PK PRIMARY KEY ( intAuditCoachID )
)
CREATE TABLE TUsers
(
intUserID integer not null
,intLoginID integer not null
,intRoleID integer not null
,CONSTRAINT TUsers_PK PRIMARY KEY ( intUserID )
)
CREATE TABLE TRoles
(
intRoleID integer not null
,strRole VARCHAR(50) NOT NULL
,CONSTRAINT TRoles_PK PRIMARY KEY ( intRoleID )
)
CREATE TABLE TLogins
(
intLoginID integer not null
,strLoginName varchar(50) not null
,strPassword VARCHAR(50) NOT NULL
,CONSTRAINT TLogins_PK PRIMARY KEY ( intLoginID )
)
CREATE TABLE TFieldGames
(
intFieldGameID INTEGER NOT NULL
,intFieldID INTEGER NOT NULL
,intTeamID INTEGER NOT NULL
,intRefereeID INTEGER NOT NULL
,CONSTRAINT TFieldGames_PK PRIMARY KEY ( intFieldGameID )
)
-- 1
ALTER TABLE TTeamPlayers ADD CONSTRAINT TTeamPlayers_TPlayers_FK
FOREIGN KEY ( intPlayerID ) REFERENCES TPlayers ( intPlayerID )
-- 2
ALTER TABLE TPlayers ADD CONSTRAINT TPlayers_TShirtSizes_FK
FOREIGN KEY ( intShirtSizeID ) REFERENCES TShirtSizes ( intShirtSizeID )
-- 3
ALTER TABLE TPlayers ADD CONSTRAINT TPlayers_TPantSizes_FK
FOREIGN KEY ( intPantSizeID ) REFERENCES TPantSizes ( intPantSizeID )
-- 4
ALTER TABLE TPlayers ADD CONSTRAINT TPlayers_TSockSizes_FK
FOREIGN KEY ( intSockSizeID ) REFERENCES TSockSizes ( intSockSizeID )
-- 5
ALTER TABLE TPlayers ADD CONSTRAINT TPlayers_TStates_FK
FOREIGN KEY ( intStateID ) REFERENCES TStates ( intStateID )
-- 6
ALTER TABLE TPlayers ADD CONSTRAINT TPlayers_TGenders_FK
FOREIGN KEY ( intGenderID ) REFERENCES TGenders ( intGenderID )
-- 7
ALTER TABLE TPlayers ADD CONSTRAINT TPlayers_TAgeGroups_FK
FOREIGN KEY ( intAgeGroupID ) REFERENCES TAgeGroups ( intAgeGroupID )
-- 8
ALTER TABLE TTeamCoaches ADD CONSTRAINT TTeamCoaches_TCoaches_FK
FOREIGN KEY ( intCoachID ) REFERENCES TCoaches ( intCoachID )
-- 9
ALTER TABLE TFieldGames ADD CONSTRAINT TFieldGames_TFields_FK
FOREIGN KEY ( intFieldID ) REFERENCES TFields ( intFieldID )
-- 10
ALTER TABLE TFieldGames ADD CONSTRAINT TFieldGames_TReferees_FK
FOREIGN KEY ( intRefereeID ) REFERENCES TReferees ( intRefereeID )
--11
ALTER TABLE TUsers ADD CONSTRAINT TUsers_TLogins_FK
FOREIGN KEY ( intLoginID ) REFERENCES TLogins ( intLoginID )
-- 12
ALTER TABLE TUsers ADD CONSTRAINT TUsers_TRoles_FK
FOREIGN KEY ( intRoleID ) REFERENCES TRoles ( intRoleID )
-- 13
ALTER TABLE TTeamPlayers ADD CONSTRAINT TTeamPlayers_TTeams_FK
FOREIGN KEY ( intTeamID ) REFERENCES TTeams ( intTeamID )
-- 14
ALTER TABLE TTeamCoaches ADD CONSTRAINT TTeamCoaches_TTeams_FK
FOREIGN KEY ( intTeamID ) REFERENCES TTeams ( intTeamID )
-- 15
ALTER TABLE TCoaches ADD CONSTRAINT TCoaches_TShirtSizes_FK
FOREIGN KEY ( intShirtSizeID ) REFERENCES TShirtSizes ( intShirtSizeID )
--16
ALTER TABLE TCoaches ADD CONSTRAINT TCoaches_TStates_FK
FOREIGN KEY ( intStateID ) REFERENCES TStates ( intStateID )
```<issue_comment>username_1: Since there are no foreign keys in your scheme, the order won't matter. This is what those foreign keys are for: knowing about the relations and protecting your structure. Without the foreign keys, the database cannot know anything about the structure, nor keep you from destroying it. You have to take care of the structure yourself until you add those foreign keys.
Upvotes: 2 <issue_comment>username_2: Since you're learning, the easiest approach is the technique used by SSMS for scripting. First, drop the foreign keys in one batch ([example](https://www.mssqltips.com/sqlservertip/3347/drop-and-recreate-all-foreign-key-constraints-in-sql-server/)). Then drop the tables. Not certain what your focus in learning is at this point, but you should master tsql first before you attempt to script DDL at any level of complexity. Given what you have described, you are likely in over your head and should concentrate on gaining skill/experience using a well-designed database first - e.g., AdventureWorks or World Wide Importers.
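As a rough sketch of that pattern, using names from the script above (illustrative only, not a complete drop script):
```
-- Drop dependent foreign keys first, then drop the tables
IF OBJECT_ID('TPlayers_TStates_FK') IS NOT NULL
    ALTER TABLE TPlayers DROP CONSTRAINT TPlayers_TStates_FK

IF OBJECT_ID('TPlayers') IS NOT NULL DROP TABLE TPlayers
IF OBJECT_ID('TStates') IS NOT NULL DROP TABLE TStates
```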
Upvotes: 0
|
2018/03/15
| 1,193 | 3,925 |
<issue_start>username_0: I have a java rest service that returns the following json:
Json Output
```
{
"description": "Temperature Sensor",
"symbol": "°C",
"name": "degree celsius",
"maxMeasure": {
"quantity": 12.8,
"name": "degree celsius",
"symbol": "°C",
"dateTime": "2018-03-15T12:38:23"
},
"measure": {
"quantity": 11.1,
"name": "degree celsius",
"symbol": "°C",
"dateTime": "2018-03-15T18:34:27"
},
"minMeasue": {
"quantity": 5.8,
"name": "degree celsius",
"symbol": "°C",
"dateTime": "2018-03-15T04:09:24"
},
"conversionFactor": 1
}
```
Through this Angular service I call the endpoint:
SensorService.ts
```
getSensor(sensorId: string): Observable {
const headers = new Headers();
headers.append('Content-Type', 'application/json');
headers.append('token', 'token');
headers.append('sensorId', sensorId);
headers.append('unitMeasureId', '1');
const options = new RequestOptions({ headers: headers });
return this._http.get(this._sensorurl, options)
.map(data => data.json())
.do(data => console.log(data));
}
```
and I map the result onto these classes:
Sensor.ts
```
export class Sensor {
description: string;
symbol: string;
name: string;
maxMeasure: Measure;
measure: Measure;
minMeasue: Measure;
conversionFactor: number;
}
```
Measure.ts
```
export class Measure {
quantity: string;
name: string;
symbol: string;
dateTime: string;
}
```
with this statement:
sensor.component.ts
```
ngOnInit(): void {
this._sensor.getSensor(this.route.snapshot.paramMap.get('id'))
.subscribe(sensor => this.sensor = sensor);
}
```
The JSON is read correctly (I can see it in the console), but when it is mapped onto the Sensor object the result is undefined.
Where am I going wrong?<issue_comment>username_1: Try `return this._http.get(this._sensorurl, options);`
Upvotes: 0 <issue_comment>username_2: In service try change :
```
return this._http.get(this._sensorurl, options)
.map(data => data.json())
.do(data => console.log(data));
```
to:
```
return this._http.get(this._sensorurl, options)
.map(data => data.json())
.do(data => console.log(data));
```
If you use Angular >= 4.3.0, change `http: Http` to `http: HttpClient` (`import { HttpClient } from '@angular/common/http'`). You do not have to parse to JSON, so you will be able to delete `.map(data => data.json())`.
Upvotes: 0 <issue_comment>username_3: >
> but when it is mapped on the Sensor object give an undefined output.
>
>
>
Your Sensor is a **Class**, not an **Interface**:
```
export class Sensor {
description: string;
symbol: string;
name: string;
maxMeasure: Measure;
measure: Measure;
minMeasue: Measure;
conversionFactor: number;
}
```
Add constructor to **Measure** and **Sensor**, since you are using **Class**:
```
export class Sensor {
description: string;
symbol: string;
name: string;
maxMeasure: Measure;
measure: Measure;
minMeasue: Measure;
conversionFactor: number;
// same to Measure class
constructor(data: Sensor|Object){
Object.assign(this, data);
this.assignToClass();
}
assignToClass() {
this.maxMeasure = new Measure(this.maxMeasure);
this.measure = new Measure(this.measure);
this.minMeasue = new Measure(this.minMeasue);
}
}
```
Then, replace in **SensorService.ts**:
```
return this._http.get(this._sensorurl, options)
.map(data => data.json())
.do(data => console.log(data));
```
to:
```
return this._http.get(this._sensorurl, options)
.map(data => new Sensor(data))
.do(data => console.log(data));
```
If you use a **class**, you should create an instance of it by passing the object to the constructor. Otherwise, if it is an **Interface**, your existing `dataObj` mapping will be sufficient.
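For illustration, an interface-based sketch could look like the following (the field types are inferred from the JSON above, so treat them as assumptions):
```
export interface Measure {
  quantity: number;
  name: string;
  symbol: string;
  dateTime: string;
}

export interface Sensor {
  description: string;
  symbol: string;
  name: string;
  maxMeasure: Measure;
  measure: Measure;
  minMeasue: Measure; // spelling kept as in the JSON payload
  conversionFactor: number;
}
```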
Upvotes: 2 [selected_answer]
|
2018/03/15
| 1,748 | 6,430 |
<issue_start>username_0: I have written some part of the code using react js to develop a website.
I see the event classes data missing in the web page when loading localhost/events.html page.
I am able to see the events classes data in events.html only after browsing a different page from same website. But not on first hit to that page.
Below is the stack trace found in the Chrome developer tools. I am facing the same issue in other browsers as well.
```
Uncaught TypeError: Cannot read property 'map' of undefined
at EventClasses.render (Events.js:24)
at ReactCompositeComponent.js:793
at measureLifeCyclePerf (ReactCompositeComponent.js:73)
at ReactCompositeComponentWrapper._renderValidatedComponentWithoutOwnerOrContext (ReactCompositeComponent.js:792)
at ReactCompositeComponentWrapper._renderValidatedComponent (ReactCompositeComponent.js:819)
at ReactCompositeComponentWrapper.performInitialMount (ReactCompositeComponent.js:359)
at ReactCompositeComponentWrapper.mountComponent (ReactCompositeComponent.js:255)
at Object.mountComponent (ReactReconciler.js:43)
at ReactCompositeComponentWrapper.performInitialMount (ReactCompositeComponent.js:368)
at ReactCompositeComponentWrapper.mountComponent (ReactCompositeComponent.js:255
```
Code for appropriate Events.js file
```
import React, {PureComponent} from 'react';
import marked from 'marked'
import {createStructuredSelector} from 'reselect';
import {connect} from 'react-redux';
import $ from 'jquery'
import moment from 'moment'
import {analytics} from '../../../utils/trackingHelpers';
import {load_contentful} from '../actions';
import {selectEventClasses} from '../selectors';
export class EventClasses extends PureComponent {
componentDidMount(){
const {dispatch, load_contentful} = this.props;
dispatch(load_contentful());
}
render(){
const {events} = this.props;
//Setting values in nextClass field
events.map((e) => {
e.nextClass = {};
var sDate = new Date(e.startTime);
e.startDateTime = sDate;
var options = {weekday: 'short', year: 'numeric', month: 'long', day: 'numeric'};
var sDateStr = sDate.toLocaleDateString('en-US', options);
e.nextClass.day = sDateStr.split(', ')[0];
var dateOptions = {year: 'numeric', month: 'long', day: 'numeric'};
e.nextClass.date = sDate.toLocaleDateString('en-US', dateOptions);
var eDate = new Date(e.endTime);
var dayOptions = {year: 'numeric', month: 'long', day: 'numeric', hour: '2-digit', minute: '2-digit', hc: 'h12'};
var startTime = sDate.toLocaleDateString('en-US', dayOptions);
var endTime = eDate.toLocaleDateString('en-US', dayOptions);
var startTimeArr = startTime.split(' ');
var endTimeArr = endTime.split(' ');
var startTimeStr = startTimeArr[3].length==4?'0'+startTimeArr[3]:startTimeArr[3];
var endTimeStr = endTimeArr[3].length==4?'0'+endTimeArr[3]:endTimeArr[3];
e.nextClass.time = startTimeStr+' '+startTimeArr[4]+' - '+endTimeStr+' '+endTimeArr[4];
e.nextClass.location = e.location;
return e;
});
//Sorting the events array in ascending order of startDateTime
var key = 'startDateTime';
events.sort(function(a, b) {
var x = a[key] == null? "":a[key];
var y = b[key] == null? "":b[key];
return x < y ? -1 : x > y ? 1 : 0;
});
return
{events.map((event, index) =>
{event.title}
---------------
* Introductory Fee: ${event.fee}
* Location: {event.location}
* Instructor: {event.instructor}
{event.allClasses ? See All Class Dates : null}
Info
----
* {event.nextClass.day} | {event.nextClass.date}
* {event.nextClass.time}
* Location: {event.nextClass.location}
[analytics('event', `Classes:click:Book This Class (${event.title} - ${event.nextClass.date}) CTA`)} className="btn btn\_\_secondary btn\_\_lg btn\_\_uppercase event-cta-btn">Book This Class]({event.link})
)
}
}
}
//export default Events;
const mapStateToProps = createStructuredSelector({
events: selectEventClasses()
});
function mapDispatchToProps(dispatch) {
return {
dispatch,
load_contentful
}
}
export default connect(mapStateToProps, mapDispatchToProps)(EventClasses);
```
|
2018/03/15
| 741 | 1,758 |
<issue_start>username_0: I have an array
```
var arrIn = [
{ 'a': 'a-0', 'b': 'b-0', 'c': 'c-0' },
{ 'a': 'a-1', 'b': 'b-1', 'c': 'c-1' },
{ 'a': 'a-2', 'b': 'b-2', 'c': 'c-2' }
];
```
I need to remove the item `arrIn.a === 'a-1'` with `lodash` and have
```
var arrOut = [
{ 'a': 'a-0', 'b': 'b-0', 'c': 'c-0' },
{ 'a': 'a-2', 'b': 'b-2', 'c': 'c-2' }
];
```
How to do it with Lodash `_.omit()` or `_.omitBy()`?<issue_comment>username_1: A solution is not complicated, but sometimes it takes some time to find it. You are going to use \_.omitBy() and then to transform the object into an array with Object.values():
```
var arrIn = [
{ 'a': 'a-0', 'b': 'b-0', 'c': 'c-0' },
{ 'a': 'a-1', 'b': 'b-1', 'c': 'c-1' },
{ 'a': 'a-2', 'b': 'b-2', 'c': 'c-2' }
];
var arrOut = _.omitBy(arrIn, {'a': 'a-1'});
arrOut = Object.values(arrOut);
console.info('_omit:',{arrIn: arrIn, arrOut:arrOut});
```
You have in console =>
```
[
{ 'a': 'a-0', 'b': 'b-0', 'c': 'c-0' },
{ 'a': 'a-2', 'b': 'b-2', 'c': 'c-2' }
];
```
Upvotes: 2 <issue_comment>username_2: With an array like this, you can also use the lodash `_.filter` method, or the native Array `.filter` method if available (not supported on older browsers).
Example of both:
```js
var arrIn = [
{ 'a': 'a-0', 'b': 'b-0', 'c': 'c-0' },
{ 'a': 'a-1', 'b': 'b-1', 'c': 'c-1' },
{ 'a': 'a-2', 'b': 'b-2', 'c': 'c-2' }
];
// Using Lodash:
var lodashResult = _.filter(arrIn, function(item) { return item.a !== "a-1"; });
console.log("Lodash:", lodashResult);
// Using built-in filter method and es6 arrow syntax:
var filterResult = arrIn.filter(item => item.a !== "a-1");
console.log("Built-in filter:", filterResult);
```
Upvotes: 2
|
2018/03/15
| 661 | 2,036 |
<issue_start>username_0: I have this command:
```
echo -n "cdt_main!@#$" | base64
```
that fails with:
```
bash: !@#$": event not found
```
lulz b/c of the special characters.
So my best guess is that this is correct:
```
echo -n "cdt_main\!\@#\$" | base64
```
at least this is b/c there is no error now. Unfortunately I cannot test the base64 version password until I know that it's right - I have only one chance to get it right o/w something will blow up. **Does that look right to you?**
Using the decoding trick, I have:
```
echo -n "cdt_main\!\@#\$" | base64 | base64 --decode
```
which yields:
```
cdt_main\!\@#$
```
given that output, not sure if this is working as planned, *because slashes are in there*.<issue_comment>username_1: The problem is that bash's history expansion is turned on. Unless you history expansion is something that you actively want and use, the solution is to turn it off.
Observe that this fails with `event not found`:
```
$ echo -n "cdt_main!@#$" | base64
bash: !@#: event not found
```
But, this succeeds:
```
$ set +H
$ echo -n "cdt_main!@#$" | base64
Y2R0X21haW4hQCMk
```
`set +H` turns off history expansion.
### Alternative: use single quotes
For strings inside double quotes, the shell will perform a wide variety of expansions. To prevent that, put strings instead in single quotes.
For example, the following succeeds even with history expansion turned on:
```
$ set -H
$ echo -n 'cdt_main!@#$' | base64
Y2R0X21haW4hQCMk
```
### Portable approach
The `-n` option to `echo` is not supported by all shells. For a more portable solution, one should, as [<NAME>](https://stackoverflow.com/users/89817/gordon-davisson) suggests, use `printf`:
```
printf "%s" 'cdt_main!@#$' | base64
```
Upvotes: 3 [selected_answer]<issue_comment>username_2: @username_1 is right, but an even more generic solution is to use:
```
echo -n 'cdt_main!@#$' | base64
```
single quotes `'` means the characters in the string aren't interpreted by bash, very nice.
Upvotes: 1
|
2018/03/15
| 1,618 | 5,156 |
<issue_start>username_0: I am trying to insert an image in my python application using Canvas in tkinter. The code for the same is:
```
class Welcomepage(tk.Frame):
def __init__(self, parent, controller):
tk.Frame.__init__(self,parent)
canvas = tk.Canvas(self, width = 1000, height = 1000, bg = 'blue')
canvas.pack(expand = tk.YES, fill = tk.BOTH)
image = tk.PhotoImage(file="ice_mix.gif")
canvas.create_image(480, 258, image = image, anchor = tk.NW)
```
The image is getting read from the source but still not getting displayed in the frame.I am new to the GUI programming someone please help me out.<issue_comment>username_1: The likely issue here is that the image is being garbage collected by Python and therefore not being displayed - which is what @nae's comment is suggesting. Attaching it to the `self` reference will stop it from being garbage collected.
```
self.image = tk.PhotoImage(file="ice_mix.gif") # Use self.image
canvas.create_image(480, 258, image = self.image, anchor = tk.NW)
```
The [Tkinter Book](http://effbot.org/tkinterbook/photoimage.htm) on effbot.org explains this:
>
> Note: When a PhotoImage object is garbage-collected by Python (e.g.
> when you return from a function which stored an image in a local
> variable), the image is cleared even if it’s being displayed by a
> Tkinter widget.
>
>
> To avoid this, the program must keep an extra reference to the image
> object. A simple way to do this is to assign the image to a widget
> attribute, like this:
>
>
>
> ```
> label = Label(image=photo)
> label.image = photo # keep a reference!
> label.pack()
>
> ```
>
>
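Putting it together, a minimal self-contained sketch might look like this (the file name and coordinates are taken from the question; the `controller` argument is omitted for brevity):
```
import tkinter as tk

class Welcomepage(tk.Frame):
    def __init__(self, parent):
        tk.Frame.__init__(self, parent)
        canvas = tk.Canvas(self, width=1000, height=1000, bg='blue')
        canvas.pack(expand=tk.YES, fill=tk.BOTH)
        # Keep the reference on self so the image is not garbage-collected
        self.image = tk.PhotoImage(file="ice_mix.gif")
        canvas.create_image(480, 258, image=self.image, anchor=tk.NW)

if __name__ == "__main__":
    root = tk.Tk()
    Welcomepage(root).pack(fill=tk.BOTH, expand=True)
    root.mainloop()
```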
Upvotes: 4 <issue_comment>username_2: 1. If You still need to use a **Canvas** instead a **Label** for image placement inside a function or a method You can use an external link for image, and use a **global** specificator for this link inside a function.
2. You may need to use SE anchor, not NW.
This code works successfully (gets an OpenCV image from USB-camera and place it in a Tkinter **Canvas**):
```
def singleFrame1():
global imageTK # declared previously in global area
global videoPanel1 # also global declaration (initialized as "None")
videoCapture=cv2.VideoCapture(0)
success,frame=videoCapture.read()
videoCapture.release()
vHeight=frame.shape[0]
vWidth=frame.shape[1]
imageRGB=cv2.cvtColor(frame,cv2.COLOR_BGR2RGB) # OpenCV RGB-image
imagePIL=Image.fromarray(imageRGB) # PIL image
imageTK=ImageTk.PhotoImage(imagePIL) # Tkinter PhotoImage
if videoPanel1 is None:
videoPanel1=Canvas(root,height=vHeight,width=vWidth) # root - a main Tkinter object
videoPanel1.create_image(vWidth,vHeight,image=imageTK,anchor=SE)
videoPanel1.pack()
else:
videoPanel1.create_image(vWidth,vHeight,image=imageTK,anchor=SE)
```
Upvotes: 1 <issue_comment>username_3: About
=====
Seems like a program that showcase a GIF. I think it is better to "fix" the code for you including step by step tutorial since you look like a beginner.
Fix
===
Okay, so here is your code.
```py
class Welcomepage(tk.Frame):
def __init__(self, parent, controller):
tk.Frame.__init__(self,parent)
canvas = tk.Canvas(self, width = 1000, height = 1000, bg = 'blue')
canvas.pack(expand = tk.YES, fill = tk.BOTH)
image = tk.PhotoImage(file="ice_mix.gif")
canvas.create_image(480, 258, image = image, anchor = tk.NW)
```
First of all, you never noticed the fact that you didn't add pass to the first `class`.
```py
class Welcomepage(tk.Frame):
pass
def __init__(self, parent, controller):
tk.Frame.__init__(self,parent)
canvas = tk.Canvas(self, width = 1000, height = 1000, bg = 'blue')
canvas.pack(expand = tk.YES, fill = tk.BOTH)
image = tk.PhotoImage(file="ice_mix.gif")
canvas.create_image(480, 258, image = image, anchor = tk.NW)
```
The second point is that since you used `double quotes`, you should keep using them consistently; that consistency is part of being a great developer. I experienced such a mess as a Python stack dev :(
```py
class Welcomepage(tk.Frame):
pass
def __init__(self, parent, controller):
tk.Frame.__init__(self,parent)
canvas = tk.Canvas(self, width = 1000, height = 1000, bg = "blue")
canvas.pack(expand = tk.YES, fill = tk.BOTH)
image = tk.PhotoImage(file="ice_mix.gif")
canvas.create_image(480, 258, image = image, anchor = tk.NW)
```
And try using the following code for your line 8~9.
```py
self.image = tk.PhotoImage(file="ice_mix.gif")
canvas.create_image(480, 258, image = self.image, anchor = tk.NW)
```
I've also noticed that instead of your space, it is supposed to be this.
```py
class Welcomepage(tk.Frame):
pass
def __init__(self, parent, controller):
tk.Frame.__init__(self,parent)
canvas = tk.Canvas(self, width = 1000, height = 1000, bg = "blue")
canvas.pack(expand = tk.YES, fill = tk.BOTH)
self.image = tk.PhotoImage(file="ice_mix.gif")
        canvas.create_image(480, 258, image = self.image, anchor = tk.NW)
```
>
> Hope you enjoyed and have a nice day!
>
>
>
Upvotes: 0
|
2018/03/15
| 1,340 | 4,529 |
<issue_start>username_0: I'm a java learner and I have a task to do from a forum to learn more.
The challenge is to receive a number and repeatedly multiply its own digits together, like:
input `999`
output `9*9*9 = 729`, then `7*2*9 = 126`, again `1*2*6 = 12`, finally `1*2 = 2`, until it hits only one single digit.
The issue is that when I multiply a variable by the digits of a two-digit number, it returns this:
```
1------------------1
49-----------------2
final result 2450
```
And this is the code..
```
class Persist {
public static void persistence(long n) {
String number = String.valueOf((int)n);
char[] digits1 = number.toCharArray();
int value = 1;
for(int i = 0 ;i <= digits1.length -1;i++) {
System.out.println(value + "-----------------" + digits1[i]);
value = value* (int)digits1[i];
}
System.out.println((int)value);
}
public static void main(String [] args) {
persistence(12);
}
}
```
I can try to fix this but I'm interested to know what's wrong. Thank you all in advance for the help and for just passing by.<issue_comment>username_1: You are using the ASCII values of the numbers (see <https://en.wikipedia.org/wiki/ASCII>) i.e. 1=49 and 2=50
49 * 50 = 2450
You can use the Character.getNumericValue to get the numerical value of the char instead, i.e.
```
class Persist {
public static void persistence(long n) {
String number = String.valueOf((int)n);
char[] digits1 = number.toCharArray();
int value = 1;
for(int i = 0 ;i <= digits1.length -1;i++) {
System.out.println(value + "-----------------" + digits1[i]);
value = value* Character.getNumericValue((int)digits1[i]);
}
System.out.println((int)value);
}
public static void main(String [] args) {
persistence(12);
    }
}
```
Upvotes: 1 <issue_comment>username_2: It is easiest to understand if you look at the values with a debugger. During the actual calculation `value = value * (int)digits1[i]` you are not using the value of the digits but the ASCII value of the chars. This is 1->49 and 2->50 so you are calculating 49\*50=2450.
Change the line to `value = value* Character.getNumericValue(digits1[i]);` and you get what you are looking for.
Upvotes: 0 <issue_comment>username_3: Here's how I would do this. I adjusted it so that a 0 value digit is converted to 1. You can comment that out (did this because any digit of 0 turns the answer into 0). This version will keep re-multiplying until it's down to one digit.
```
public class Persist {
public static void main(String [] args) {
int answer = persistence(234);
System.out.println(" length: " + String.valueOf(answer).length());
while ( String.valueOf(answer).length() > 1) {
answer = persistence(answer);
} }
static int persistence(int n) {
int value = 1;
String myStr = String.valueOf(n);
int myStrLen = myStr.length();
int[] finalIntAr = new int[myStrLen];
for (int v=0; v< myStrLen; v++) {
String subS= myStr.substring(v,v+1);
System.out.println("the char/int : " + subS);
Integer bigI = Integer.valueOf(subS);
if (bigI == 0) { bigI = 1; } // take this out if you want 0 to perform as 0
finalIntAr[v] = bigI.intValue();
}
for (int i = 0 ; i < myStrLen; i++) {
System.out.println(" ----- first= " + value + " multiplied by : " + finalIntAr[i]);
value = value * finalIntAr[i];
}
System.out.println("\n CURRENT FINAL VALUE *** : " + value);
return value;
} }
```
Output:
```
the char/int : 2
the char/int : 3
the char/int : 4
----- first= 1 multiplied by : 2
----- first= 2 multiplied by : 3
----- first= 6 multiplied by : 4
CURRENT FINAL VALUE *** : 24
length: 2
the char/int : 2
the char/int : 4
----- first= 1 multiplied by : 2
----- first= 2 multiplied by : 4
CURRENT FINAL VALUE *** : 8
```
Upvotes: 0 <issue_comment>username_4: Since nobody posted an answer using no strings or chars, I will. Technically not an answer to your question, though.
```
public static void persistence(long x)
{
long answer = 1;
while (x != 0) {
long onesDigit = x % 10;
answer *= onesDigit;
x /= 10;
}
System.out.println(answer);
}
```
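If you also want to repeat the reduction until only one digit remains, a hedged variant of the same idea (the method name here is just illustrative) is:
```
public static long persistenceToSingleDigit(long x)
{
    while (x > 9) {               // keep reducing until a single digit is left
        long product = 1;
        for (long t = x; t != 0; t /= 10) {
            product *= t % 10;    // multiply the decimal digits together
        }
        x = product;
    }
    return x;                     // e.g. 999 -> 729 -> 126 -> 12 -> 2
}
```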
Upvotes: 0
|
2018/03/15
| 721 | 2,594 |
<issue_start>username_0: I am trying to read a file, line by line in OCaml. Each line in the file represents a string I want to parse, in the correct format expected by the Parsing tool. I am saving each line in a list structure.
I am finding an issue parsing the string contained in each element of the list. I am using OCamllex and Menhir as parsing tools.
* If I try to use `print_string` to print the contents of the list at every element, I get the correct file contents.
* If I try to pass a `string` that I defined within the program to the function, then I get the desired output.
* However, if I try to parse the string which I have just read from the file, I get an error: `Fatal error: exception Failure ("lexing empty token")`
Note: that all of this has been tested against the same string.
Here is a snippet of the code:
```
let parse_mon m = Parser.monitor Lexer.token (from_string m)
let parse_and_print (mon: string)=
print_endline (print_monitor (parse_mon mon) 0)
let get_file_contents file =
let m_list = ref [] in
let read_contents = open_in file in
try
while true; do
m_list := input_line read_contents :: !m_list
done; !m_list
with End_of_file -> close_in read_contents; List.rev !m_list
let rec print_file_contents cont_list = match cont_list with
| [] -> ()
| m::ms -> parse_and_print m
let pt = print_file_contents (get_file_contents filename)
```<issue_comment>username_1: Without seeing your grammar and file I can only offer a wild guess: Could it be that the file contains an empty line at the end? Depending on the `.mll` that might result in the error you see. The reason being that `get_file` appends new lines to the front of the list and `print_file_contents` only looks at the head of that list.
Upvotes: 1 <issue_comment>username_2: I agree with **kne**, hard to say without seeing the file, but what you can do is trying to isolate the line that causes the trouble by doing :
```
let rec print_file_contents cont_list =
match cont_list with
| [] -> ()
| m::ms ->
try parse_and_print m
with Failure _ -> print_string m
```
Upvotes: 1 <issue_comment>username_3: Ocamllex throws an exception `Failure "lexing: empty token"` when a text in the stream doesn't match any scanner pattern. Therefore, you will need to match with "catch-all" patterns such as `.`, `_`, or `eof`.
```
{ }
rule scan = parse
| "hello" as w { print_string w; scan lexbuf }
(* need these two for catch-all *)
| _ as c { print_char c; scan lexbuf }
| eof { exit 0 }
```
Upvotes: 2
|
2018/03/15
| 1,722 | 4,813 |
<issue_start>username_0: I have a few arrays of JSON objects. I need to iterate over the arrays and return true if there are two or more elements with the same userId value.
```
[{
"name":"John",
"age":30,
"userId": 5,
}],
[{
"name":"Benjamin",
"age":17,
"userId": 5,
}],
[{
"name":"Johnatan",
"age":35,
"userId": 10,
}]
```
Here is my method so far; I'm iterating over the array and checking whether a user with userId 506 is present.
```
isPostedMultiple = (data) => {
for (let j = 0; j < data.length; j++) {
if (data[j].UserId == '506') {
console.log('506 data :', data[j]);
} else {
console.log('not 506 data');
}
}
}
```<issue_comment>username_1: First of all the Object you have given is erroneous. Make it correct. Coming to the problem,
You can use a combination of [`Array.prototype.some`](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Array/some) and [`Array.prototype.filter`](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Array/filter).
```
data.some(
(el, i, arr) => arr.filter(_el => _el.userId == el.userId).length > 1
);
```
To check if there exists more than one element matching certain condition.
```js
var data = [{
"name": "John",
"age": 30,
"userId": 5,
},
{
"name": "Benjamin",
"age": 17,
"userId": 5,
},
{
"name": "Johnatan",
"age": 35,
"userId": 10,
}
];
var result = data.some(
(el, i, arr) => arr.filter(_el => _el.userId == el.userId).length > 1
);
console.log(result)
```
Upvotes: 2 <issue_comment>username_2: You can merge arrays using the array spread syntax and then use the `reduce` method with the `filter` method
```
const mergedArrays = [...arr1, ...arr2, ...arr3];
const isDublicated = mergedArrays.reduce(
  (acc, item) => acc || mergedArrays.filter(user => user.userId === item.userId).length > 1,
false
);
```
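A possible usage sketch with the data from the question (the variable names `arr1`, `arr2`, `arr3` are assumptions, since the question does not name its arrays):
```
const arr1 = [{ "name": "John", "age": 30, "userId": 5 }];
const arr2 = [{ "name": "Benjamin", "age": 17, "userId": 5 }];
const arr3 = [{ "name": "Johnatan", "age": 35, "userId": 10 }];

const mergedArrays = [...arr1, ...arr2, ...arr3];
const isDuplicated = mergedArrays.reduce(
  (acc, item) => acc || mergedArrays.filter(user => user.userId === item.userId).length > 1,
  false
);
console.log(isDuplicated); // true, because userId 5 appears twice
```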
Upvotes: 2 [selected_answer]<issue_comment>username_3: To achieve expected result, use below option of using filter and findIndex to iterate over every array and compare userId
```js
var x = [[{
"name":"John",
"age":30,
"userId": 5,
}],
[{
"name":"Benjamin",
"age":17,
"userId": 5,
}],
[{
"name":"Johnatan",
"age":35,
"userId": 10,
}]]
x = x.filter((v, i, self) =>
i === self.findIndex((y) => (
y[0].userId === v[0].userId
))
)
console.log(x);
```
code sample - <https://codepen.io/nagasai/pen/wmWqdY?editors=1011>
Upvotes: 0 <issue_comment>username_4: ```js
var jsonObj1 = [{
"name":"John",
"age":30,
"userId": 5
},
{
"name":"Benjamin",
"age":17,
"userId": 5
},
{
"name":"Johnatan",
"age":35,
"userId": 10
}];
var jsonObj2 = [{
"name":"John",
"age":30,
"userId": 5
},
{
"name":"Benjamin",
"age":17,
"userId": 15
},
{
"name":"Johnatan",
"age":35,
"userId": 10
}];
var logger = document.getElementById('logger');
logger.innerHTML = "";
function checkForDupIds(jsonObj, headerStr) {
var logger = document.getElementById('logger');
var hasDups = [];
var items = [];
for(var a=0;a";
for(var b=0;b\n";
console.log(hasDups[b]);
}
if (hasDups.length === 0) {
logger.innerHTML += "No Duplicates Found\n";
}
}
checkForDupIds(jsonObj1, "jsonObj1");
checkForDupIds(jsonObj2, "jsonObj2");
```
```html
```
Upvotes: 0 <issue_comment>username_5: You can loop over the array and keep a count of how many times each userId value appears. If you get to 2 for any value, stop and return false (or some other suitable value).
[*Array.prototype.some*](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Array/some) allows looping over the array until the condition is true, so it only loops over the source once. The data in the OP was invalid, I've modified it to be an array of objects.
```js
var data = [{
"name":"John",
"age":30,
"userId": 5
},
{
"name":"Benjamin",
"age":17,
"userId": 5
},
{
"name":"Johnatan",
"age":35,
"userId": 10
}]
function hasDupIDs(data) {
// Store for userId values
var ids = {};
// Loop over values until condition returns true
return data.some(function(x) {
// If haven't seen this id before, add to ids
if (!ids.hasOwnProperty(x.userId)) ids[x.userId] = 0;
// Increment count
ids[x.userId]++;
// Return true if second instance
return ids[x.userId] > 1;
});
}
console.log(hasDupIDs(data));
```
If you want more concise code, you can use:
```js
var data = [
{"name":"John","age":30,"userId": 5},
{"name":"Benjamin","age":17,"userId": 5},
{"name":"Johnatan","age":35,"userId": 10}];
function hasDupIDs(data) {
var ids = {};
return data.some(x => {
ids[x.userId] || (ids[x.userId] = 0);
return ++ids[x.userId] > 1;
});
}
console.log(hasDupIDs(data));
```
Upvotes: 0
|
2018/03/15
| 1,985 | 5,492 |
<issue_start>username_0: I have a workbook with two worksheets (w1 and w2)
There are two columns in the middle of the w1, **col1 looks like this**:
```
Sample No.
BB01_1_6 6
BB01_1_6 12
BB01_1_7 6
BB01_1_7 12
BB02_1_9 6
BB02_1_9 12
```
**col2 looks like this:**
```
Results
8.8
10.1
8.9
6.8
7.9
8.4
```
>
> I would like the worksheet2 (w2) to look like this:
>
>
>
```
Sample|ID|Serial|Mold6|Mold12
BB01 |1 |6 |8.8 |10.1
BB01 |1 |7 |8.9 |6.8
BB02 |1 |9 |7.9 |8.4
```
So I want to get all the sample# for 6 in one array and for 12 in another and another two with the Results.
I would then like to loop through one array and just print the first index value of all arrays
**Here is what I have done so far:**
```
Dim rng6 As Range
Dim rng12 As Range
Dim contra As Range
With Sheets("w1")
Set contra = .Range(.Range("J18"), .Range("J18").End(xlDown))
End With
For Each c In contra
If Right(c.Text, 10) = "6" Then
Set rng6 = c
Else
Set rng12 = c
End If
Next
```
It does not go through the loop.
Where am I going wrong and what is the best way to do this? I have given as much information as I thought was appropriate, but if you need more, let me know.
|
2018/03/15
| 1,318 | 4,745 |
<issue_start>username_0: I consistently run into compiler errors where I forget to put the opening brace for a `#pragma omp critical` section on the line following the statement, instead of on the same line:
```
#pragma omp parallel
{
static int i = 0;
// this code fails to compile
#pragma omp critical {
i++;
}
// this code compiles fine
#pragma omp critical
{
i++;
}
}
```
My question is, why can't the compiler parse the braces on the same line? It can do this for any C++ syntax. Why should white space matter for the OpenMP `#pragma` statements, when it does not in C++?<issue_comment>username_1: According to cppreference:
>
> The preprocessing directives control the behavior of the preprocessor. **Each directive occupies one line** and has the following format:
>
>
> 1. `#` character
> 2. preprocessing instruction (one of `define`, `undef`, `include`, `if`, `ifdef`, `ifndef`, `else`, `elif`, `endif`, `line`, `error`, `pragma`)
> 3. arguments (depends on the instruction)
> 4. line break.
>
>
> The null directive (# followed by a line break) is allowed and has no effect.
>
>
>
So the **pre-processor**, not the compiler, reads the entire line as a directive. Unlike the compiler, which does not care about line breaks.
Source: <http://en.cppreference.com/w/cpp/preprocessor>
Upvotes: 3 <issue_comment>username_2: Because it does.
To say that whitespace "never matters" in any C++ construct is foolhardy. For example, the following pieces of code are not the same, and I don't believe that anyone would expect them to be:
### 1.
```
int x = 42;
```
### 2.
```
intx=42;
```
It is more true to say that newlines and space characters are generally treated in the same way, but that's still not quite right. Like:
### 3.
```
void foo() // a comment
{
```
### 4.
```
void foo() // a comment {
```
Of course, in this case, the reason the snippets aren't the same is because `//` takes effect until the end of the line.
But **so does `#`**.
Both constructs are resolved by the *preprocessor*, not by the compiler, and the preprocessor works in *lines*. It is not until later in the build process that more complex parsing takes place. This is logical, consistent, predictable, and practical. All syntax highlighters expect it to work that way.
Could the preprocessor be modified to treat a `{` at the end of a preprocessor directive as if it were written on the next line? Sure, probably. But it won't be.
Thinking purely about this actual example, the range of acceptable parameters to a `#pragma` is implementation defined (in fact this is the whole point of the `#pragma` directive), so it is literally not possible for the C++ standard to define a more complex set of semantics for it than "use the whole line, whatever's provided". And, without the C++ standard guiding it, such logic would potentially result in the same source code meaning completely different things on completely different compilers. No thanks!
Upvotes: 3 [selected_answer]<issue_comment>username_3: >
> My question is, why can't the compiler parse the braces on the same line? It can do this for any C++ syntax. Why should white space matter for the OpenMP #pragma statements, when it does not in C++?
>
>
>
There are two standards which define what the compiler can do: the C/C++ language standard and the [OpenMP specification](http://www.openmp.org/specifications/). The C/C++ [specification of OpenMP says (chapter 2, 10-11; 2.1 7)](http://www.openmp.org/wp-content/uploads/openmp-4.5.pdf):
>
> In C/C++, OpenMP directives are specified by using the #pragma mechanism provided by the C and C++ standards.
>
>
> 2.1 Directive Format
> The syntax of an OpenMP directive is as follows:
>
>
>
> ```
> #pragma omp directive-name [clause[ [,] clause] ... ] new-line
>
> ```
>
>
So, new line is required by OpenMP (and by syntax of `#pragma` in C and C++ standards - while they look like preprocessor directive, they are actually compiler directives).
But if you want to use OpenMP pragmas in places where newlines are prohibited (inside macro directive) or you want to place `{` on the same line, sometimes there is alternative: `_Pragma` (from C99 standard and from [C++11 standard](http://en.cppreference.com/w/cpp/preprocessor/impl)) or non-standard MS-specific `__pragma`: [Difference between #pragma and \_Pragma() in C](https://stackoverflow.com/questions/45477355/difference-between-pragma-and-pragma-in-c/45477506) and <https://gcc.gnu.org/onlinedocs/cpp/Pragmas.html>
```
_Pragma("omp parallel for")
_Pragma("omp critical")
```
This variant may work in some C++ compilers and may not work in others; it also depends on the language options used for compilation.
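As a concrete illustration (an editorial sketch, not part of the original answer), `_Pragma` also makes it possible to wrap an OpenMP directive in a macro, which an ordinary `#pragma` line cannot do:
```
#include <stdio.h>

/* A raw #pragma line cannot appear inside a macro body, but _Pragma can. */
#define OMP_PARALLEL_SUM _Pragma("omp parallel for reduction(+:total)")

int main(void) {
    int total = 0;
    OMP_PARALLEL_SUM
    for (int i = 0; i < 100; i++) {
        total += i;
    }
    printf("%d\n", total);  /* 4950 when built with OpenMP enabled, e.g. -fopenmp */
    return 0;
}
```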
Upvotes: 2
|
2018/03/15
| 1,546 | 5,536 |
<issue_start>username_0: I am trying to implement a circular queue in Python, but my current program keeps giving me an error; I would like to understand the problem and fix it.
my current program is :
```
# circular queue
class circularQueue:
def __init__(self, maxsize):
self.front = 0
self.rear = -1
self.queue = []
self.size = 0 # elements in the queue
self.maxsize = maxsize #size of the array(queue)
def isEmpty(self):
if self.size == 0:
return True
else:
return False
def isFull(self):
if self.size == self.maxsize:
return True
else:
return False
def enQueue(self, newItem):
if self.size == self.maxsize:
print('Queue Full')
else:
self.rear = (self.rear + 1) % self.maxsize # mod = remainder
self.queue[self.rear] = newItem
self.size += self.size
def deQueue(self):
if self.size == 0:
print('Queue Empty')
item = null
else:
item = self.queue[self.front]
self.front = (self.front + 1) % self.maxsize
self.size = self.size - 1
return item
```
and if i try to operate
```
q = circularQueue(6)
q.enQueue('k')
```
it says
```
self.queue[self.rear] = newItem
IndexError: list assignment index out of range
```
Please help me ......
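For reference, here is a minimal corrected sketch (an editorial note, not part of the original thread): the list must be pre-allocated before assigning by index, the size should grow by one per insert, and Python uses `None` rather than `null`.
```
class CircularQueue:
    def __init__(self, maxsize):
        self.front = 0
        self.rear = -1
        self.size = 0
        self.maxsize = maxsize
        self.queue = [None] * maxsize   # pre-allocate so self.queue[self.rear] is a valid index

    def enQueue(self, newItem):
        if self.size == self.maxsize:
            print('Queue Full')
            return
        self.rear = (self.rear + 1) % self.maxsize
        self.queue[self.rear] = newItem
        self.size += 1                  # grow by one, not self.size += self.size

    def deQueue(self):
        if self.size == 0:
            print('Queue Empty')
            return None                 # None is Python's equivalent of null
        item = self.queue[self.front]
        self.front = (self.front + 1) % self.maxsize
        self.size -= 1
        return item

q = CircularQueue(6)
q.enQueue('k')
print(q.deQueue())   # k
```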
|
2018/03/15
| 569 | 1,608 |
<issue_start>username_0: I'm using the Python regex engine and trying to achieve something like the following for the string `foo,fou,bar,baz`: I want to match `baz` if and only if `fou` is not before it. I've tried a negative lookbehind, `(?<!fou)baz`, but it doesn't work, as a lookbehind only checks the text immediately preceding the match.<issue_comment>username_1: The `re` module does not support variable-length lookbehinds; you need to use the third-party `regex` module for that.
To get what you want with `re` module, you can use negative lookahead to match `fou` and use a captured group to get `baz`:
```
In [15]: str_ = 'foo,fou,bar,baz'
In [16]: re.search(r'^(?!.*fou.*baz).*(baz)', str_)
In [17]: str_ = 'foo,foz,bar,baz'
In [18]: re.search(r'^(?!.*fou.*baz).*(baz)', str_)
Out[18]: <_sre.SRE_Match object; span=(0, 15), match='foo,foz,bar,baz'>
In [19]: re.search(r'^(?!.*fou.*baz).*(baz)', str_).group(1)
Out[19]: 'baz'
```
In `^(?!.*fou.*baz).*(baz)`:
* The zero width negative lookahead, `(?!.*fou.*baz)`, makes sure `fou` does not come before `baz` in the whole input
* `.*(baz)` puts `baz` in the only captured group
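For completeness (an editorial side note, not part of the original answer), the third-party `regex` module does support variable-length lookbehinds, so the lookbehind approach from the question can work there. A sketch, assuming the `regex` package is installed:
```
import regex  # third-party module (pip install regex), not the built-in re

# match 'baz' only if no 'fou' occurs anywhere before it
pattern = r'(?<!fou.*)baz'

print(regex.search(pattern, 'foo,fou,bar,baz'))          # None
print(regex.search(pattern, 'foo,foz,bar,baz').group())  # baz
```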
Upvotes: 3 [selected_answer]<issue_comment>username_2: You can even use string methods if you want to avoid the `regex` overhead.
```
string = """
foo,fou,bar,baz
foo,baz
baz, fou
neither nor"""
needle = "baz"
matches = [line
for line in string.split("\n")
for fou in [line.find('fou')]
for baz in [line.find(needle)]
if line and ((fou == -1 and baz > 0) or (baz < fou))]
print(matches)
# ['foo,baz', 'baz, fou']
```
To save a variable `x` in a list comprehension, you need to use `for x in [...]`.
Upvotes: 0
|
2018/03/15
| 332 | 1,118 |
<issue_start>username_0: What is the difference between using a colon, which means "do nothing", and `continue`, which means "skip"?
```
if [[ -s $file ]] ; then
:
fi
if [[ -s $file ]] ; then
continue
fi
```<issue_comment>username_1: `:` is a synonym for `true`. It does not prevent later commands in the same block or loop from running.
Compare:
```
for (( i=0; i<3; i++ )); do
echo "Starting iteration $i"
(( i == 1 )) && { echo " About to run :"; :; echo " Just ran :"; }
(( i == 2 )) && { echo " About to run continue"; continue; echo " Just ran continue"; }
echo "Ending iteration $i"
done
```
Our output is:
```none
Starting iteration 0
Ending iteration 0
Starting iteration 1
About to run :
Just ran :
Ending iteration 1
Starting iteration 2
About to run continue
```
Note that we made it to "ending" after running `:`, but not after running `continue`.
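As a side note (not part of the original answer), the reason `:` exists at all is that a no-op command can still trigger useful side effects, such as parameter expansion:
```
: "${VAR:=default}"   # assigns 'default' to VAR only if VAR is unset or empty; ':' discards the result
echo "$VAR"
```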
Upvotes: 4 [selected_answer]<issue_comment>username_2: It depends on your program's logic.
Outside of a loop you get
```
$ continue
bash: continue: only meaningful in a `for', `while', or `until' loop
```
Upvotes: 2
|
2018/03/15
| 1,588 | 5,899 |
<issue_start>username_0: I am trying to create a share extension where the users can upload her audio recordings from any capable app. The documentation even has a straightforward example (see [Declaring Supported Data Types for a Share or Action Extension](https://developer.apple.com/library/content/documentation/General/Conceptual/ExtensibilityPG/ExtensionScenarios.html#//apple_ref/doc/uid/TP40014214-CH21-SW8)) (also brought up in [this SO answer](https://stackoverflow.com/questions/29546283/ios-share-extension-how-to-support-wav-files)), but it does not work in any audio recorder (on iOS 10.3.3, iPad).
Looking at the [Uniform Type Identifiers Reference](https://developer.apple.com/library/content/documentation/Miscellaneous/Reference/UTIRef/Articles/System-DeclaredUniformTypeIdentifiers.html), I need `public.audio`. The relevant entry in the share extension's `Info.plist`:
[](https://i.stack.imgur.com/qqtHv.png)
To narrow it down, I tried changing `NSExtensionActivationRule` to `public.mpeg4` and `public.audiovisual-content`, same results.
When I change the `NSExtensionActivationRule` to conform to `public.image` though, it shows up in photo apps:
```
SUBQUERY (
extensionItems,
$extensionItem,
SUBQUERY (
$extensionItem.attachments,
$attachment,
ANY $attachment.registeredTypeIdentifiers UTI-CONFORMS-TO "public.image"
).@count == $extensionItem.attachments.@count
).@count == 1
```
Changing `NSExtensionActivationRule` to `TRUEPREDICATE` string would make it to finally show up in audio apps as well, but of course, this wouldn't be accepted in the App Store.
I also tried every above step on a clean start (i.e., clean + clean build folder + delete DerivedData contents + remove app from iPad), but the results where the same.
[This SO question](https://stackoverflow.com/questions/42269842/share-extension-not-appearing-in-the-list-of-apps-on-capable-of-sharing-photo-on) also seemed relevant, but changing the share extension's deployment target didn't help.
Am I missing something?<issue_comment>username_1: >
> NOTE: As <NAME> states in his comment, **changes in iOS 13 make this answer obsolete**.
>
>
>
My share extension would allow uploading audio files to a Firebase backend, and I naively assumed that audio recording apps would appropriately declare their supported file types, but they don't. Most conform to multiple uniform type identifiers (see [reference](https://developer.apple.com/library/content/documentation/Miscellaneous/Reference/UTIRef/Articles/System-DeclaredUniformTypeIdentifiers.html)) with `public.data` being the most common.
I found [this Github issue from 2014](https://github.com/tumblr/ios-extension-issues/issues/5) and used it as a reference. Its title perfectly summarizes the problem:
>
> Share extensions will *only* show up if they explicitly support *all*
> of the provided activity items
>
>
>
The solution of using [`NSExtensionActivationDictionaryVersion`](https://developer.apple.com/library/content/documentation/General/Reference/InfoPlistKeyReference/Articles/AppExtensionKeys.html#//apple_ref/doc/uid/TP40014212-SW36) worked for me too. According to [Information Property Key List Reference](https://developer.apple.com/library/content/documentation/General/Reference/InfoPlistKeyReference/Articles/AppExtensionKeys.html):
>
> **Activation dictionary version 1** Only when the app extension handles
> ***all*** of the asset types being offered by a host app item provider
>
>
> **Activation dictionary version 2** When the app extension handles ***at
> least one*** of the asset types being offered by a host app item provider
>
>
>
My share extension's `Info.plist` looks like this now:
```
NSExtension
NSExtensionAttributes
NSExtensionActivationRule
NSExtensionActivationDictionaryVersion
2
NSExtensionActivationSupportsAttachmentsWithMinCount
1
NSExtensionActivationSupportsFileWithMaxCount
20
NSExtensionMainStoryboard
MainInterface
NSExtensionPointIdentifier
com.apple.share-services
```
Or in Xcode:
[](https://i.stack.imgur.com/TjqeH.png)
Using only `NSExtensionActivationSupportsFileWithMaxCount` besides `NSExtensionActivationDictionaryVersion` would probably have sufficed.
Upvotes: 2 [selected_answer]<issue_comment>username_2: Adding `public.mpeg-4-audio` helps show app in sharesheet where it previously won't show for `public.audio`. It might be unfeasible to do this for each and every extension type. But the list of extensions can be found at [Uniform Type Identifiers](https://www.escapetech.eu/manuals/qdrop/uti.html) under `Audiovisual content types`.
The best case scenerio seems to be structuring the query like this. I have included some famous extensions, but there are others in the link above.
```
SUBQUERY(
$extensionItem.attachments,
$attachment,
ANY $attachment.registeredTypeIdentifiers UTI-CONFORMS-TO "public.audio" ||
ANY $attachment.registeredTypeIdentifiers UTI-CONFORMS-TO "public.mpeg-4-audio" ||
ANY $attachment.registeredTypeIdentifiers UTI-CONFORMS-TO "public.mp3" ||
ANY $attachment.registeredTypeIdentifiers UTI-CONFORMS-TO "com.microsoft.windows-media-wma" ||
ANY $attachment.registeredTypeIdentifiers UTI-CONFORMS-TO "public.aifc-audio" ||
ANY $attachment.registeredTypeIdentifiers UTI-CONFORMS-TO "public.aiff-audio" ||
ANY $attachment.registeredTypeIdentifiers UTI-CONFORMS-TO "public.midi-audio" ||
ANY $attachment.registeredTypeIdentifiers UTI-CONFORMS-TO "public.ac3-audio" ||
ANY $attachment.registeredTypeIdentifiers UTI-CONFORMS-TO "com.microsoft.waveform-audio" ||
ANY $attachment.registeredTypeIdentifiers UTI-CONFORMS-TO "public.OTHER_EXTENSIONS" ||
).@count = 1
```
Upvotes: 0
|
2018/03/15
| 879 | 3,288 |
<issue_start>username_0: I have a [similar problem](https://stackoverflow.com/questions/23384483/mousemove-event-updating-only-once-after-mousedown) but I'm using pure JS. By my understanding, after I click on the div it should start printing 'Mousemove event has occurred' to the console every time I move the cursor over the div, but actually it happens only once, when I click the div, which is weird as well because a click is not a mousemove. Can you please help me understand this behavior?
```html
let div = document.getElementsByTagName('div')[0];
function handleClick(event){
div.addEventListener('onmousemove', handleMove(event));
}
function handleMove(event){
console.log('Mousemove event has occurred');
}
```
Looks like it ignores this part `div.addEventListener('onmousemove', handleMove(event));` and executes `handleMove(event);` instead:
```html
let div = document.getElementsByTagName('div')[0];
function handleClick(event){
handleMove(event);
}
function handleMove(event){
console.log('Mousemove event has occurred');
}
```
Update: I was thinking to pass `handleMove` without `()` but got confused by Atom IDE (due to pale highlight), [so do not let Atom confuse you!](https://i.stack.imgur.com/15Xxs.png)<issue_comment>username_1: >
> Looks like it ignores this part div.addEventListener('onmousemove', handleMove(event)); and executes handleMove(event); instead:
>
>
>
Yes, because it is what you wrote ;)
```
div.addEventListener('onmousemove', handleMove(event));
```
The above executes the `handleMove` function passing it the `event`.
Compare it to actually just creating an event handler (no parenthesis after `handleMove`):
```html
let div = document.getElementsByTagName('div')[0];
function handleClick(event){
console.log(`handleClick`);
div.addEventListener('mousemove', handleMove);
}
function handleMove(event){
console.log('Mousemove event has occurred');
}
```
**Also: the event is [`mousemove`](https://developer.mozilla.org/en-US/docs/Web/Events/mousemove) not [`onmousemove`](https://developer.mozilla.org/en-US/docs/Web/API/GlobalEventHandlers/onmousemove).**
Upvotes: 3 [selected_answer]<issue_comment>username_2: the event listener should be "mousemove" and not "onmousemove".
something like this:
```html
let div = document.getElementsByTagName('div')[0];
function handleClick(event){
div.addEventListener('mousemove', function(event) { handleMove(event) });
}
function handleMove(event){
console.log('Mousemove event has occurred');
}
```
Upvotes: 0 <issue_comment>username_3: If you look at the syntax of addEventListener which is `document.addEventListener(event, function, useCapture)` here for parameter `event` clearly mention that([here](https://www.w3schools.com/jsreF/met_document_addeventlistener.asp))
>
> Required. A String that specifies the name of the event.
>
>
> **Note:** Do not use the "on" prefix. For example, use "click" instead of "onclick".
>
>
>
So replace 'onmousemove' to 'mousemove'
```html
let div = document.getElementsByTagName('div')[0];
function handleClick(event){
div.addEventListener('mousemove', handleMove);
}
function handleMove(event){
console.log('Mousemove event has occurred');
}
```
Upvotes: 0
|
2018/03/15
| 336 | 907 |
<issue_start>username_0: What is the most efficient way to cut off a varchar after the first space i.e. given `'FIRST_STRING SECOND_STRING THIRD_STRING'`, it should return `'FIRST_STRING'`? This will be run on potentially 100s of thousands of rows.<issue_comment>username_1: Notice the `+' '`: it handles the case where the string contains no space at all, so `charindex` still finds a delimiter
**Example**
```
Declare @S varchar(max) = 'FIRST_STRING SECOND_STRING THIRD_STRING'
Select left(@S,charindex(' ',@S+' ')-1)
```
**Returns**
```
FIRST_STRING
```
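The same expression can be applied per row (an editorial sketch; the table and column names are made up for illustration):
```
Select left(FullName, charindex(' ', FullName + ' ') - 1) As FirstToken
From dbo.MyTable
```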
Upvotes: 4 [selected_answer]<issue_comment>username_2: >
>
> ```
> declare @str nvarchar(max) = 'FIRST_STRING SECOND_STRING THIRD_STRING';
>
> select left(@str, charindex(' ', @str) - 1)
> GO
>
> ```
>
>
> ```
>
> | (No column name) |
> | :--------------- |
> | FIRST_STRING |
>
> ```
>
>
*dbfiddle [here](http://dbfiddle.uk/?rdbms=sqlserver_2017&fiddle=52a75033740a4f7c53729ecdc349fd1f)*
Upvotes: 1
|
2018/03/15
| 677 | 2,268 |
<issue_start>username_0: Normally I anonymize my data by using hashlib and using the .apply(hash) function.
Now im trying a new approach, imagine I have to following df called 'data':
```
df = pd.DataFrame({'contributor':['eric', 'frank', 'john', 'frank', 'barbara'],
'amount payed':[10,28,49,77,31]})
contributor amount payed
0 eric 10
1 frank 28
2 john 49
3 frank 77
4 barbara 31
```
Which I want to anonymize by turning the names all into `person1`, `person2` etc, like this:
```
output = pd.DataFrame({'contributor':['person1', 'person2', 'person3', 'person2', 'person4'],
'amount payed':[10,28,49,77,31]})
contributor amount payed
0 person1 10
1 person2 28
2 person3 49
3 person2 77
4 person4 31
```
So my first thought was summarizing the **contributor** column so the names are attached to a unique index, and I can use that index for the number after 'person'.<issue_comment>username_1: Maybe try to create a data frame called "index" for this operation and keep the unique `contributor` values inside it?
Then produce masks with unique contributor indexes and merge the resulting data frame `index` with `df`.
```
index = pd.DataFrame()
index['contributor'] = df['contributor'].unique()
index['mask'] = index['contributor'].apply(lambda x: 'person' +
                                           str(index[index.contributor == x].index[0] + 1))
df.merge(index, how='left')[['mask', 'amount payed']]
```
Upvotes: 0 <issue_comment>username_2: I think a faster solution is to use [`factorize`](http://pandas.pydata.org/pandas-docs/stable/generated/pandas.factorize.html) for the unique values, add `1`, convert to a `Series` of `string`s, and prepend the `Person` string:
```
df['contributor'] = 'Person' + pd.Series(pd.factorize(df['contributor'])[0] + 1).astype(str)
print (df)
contributor amount payed
0 Person1 10
1 Person2 28
2 Person3 49
3 Person2 77
4 Person4 31
```
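A `groupby`-based alternative (an editorial side note, not part of the original answer) that also numbers contributors in order of first appearance:
```
# ngroup() assigns each group an integer; sort=False keeps the order of appearance
df['contributor'] = 'Person' + (df.groupby('contributor', sort=False).ngroup() + 1).astype(str)
```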
Upvotes: 4 [selected_answer]<issue_comment>username_3: ```
labels, uniques = pd.factorize(df['contributor'])
labels = ['person_'+str(l) for l in labels]
df['contributor_anonymized'] = labels
```
Upvotes: 1
|
2018/03/15
| 2,044 | 7,702 |
<issue_start>username_0: I am using react router v4 with thunk for routing in my application.
I want to prevent rendering component to user who not logged in. I sending fetch request on server with id and token to check in database do user has this token. If it has - render , if not - redirect home.
I don't understand what is good way to implement the "conditional routing", and i found something which seems almost perfectly fit to my task.
`https://gist.github.com/kud/6b722de9238496663031dbacd0412e9d`
But the problem is that `condition` in is always undefined, because of fetch's asyncronosly. My attempts to deal with this asyncronously ended with nothing or errors:
```
Objects are not valid as a React child (found: [object Promise]) ...
```
or
```
RouteIf(...): Nothing was returned from render. ...
```
Here is the code:
```
//RootComponent
{
if( store.getState().userReducer.id, store.getState().userReducer.token) {
// Here i sending id and token on server
// to check in database do user with this id
// has this token
fetch(CHECK_TOKEN_API_URL, {
method: 'post',
headers: {'Accept': 'application/json', 'Content-Type': 'application/json'},
body: JSON.stringify({
id: store.getState().userReducer.id,
token: store.getState().userReducer.token
})
})
.then(res => {
// If true – will render ,
// else -
// But mounts without await of this return
// You can see RouteIf file below
if(res.ok) return true
else return false
})
}
})()}
privateRoute={true}
path="/account"
component={AccountPage}
/>
//RouteIf.js
const RouteIf = ({ condition, privateRoute, path, component }) => {
// The problem is that condition is
// always undefined, because of fetch's asyncronosly
// How to make it wait untill
// return result?
return condition
? ()
:()
}
export default RouteIf
```
How to make `condition` wait until `fetch` return answer? Or maybe there is another, better way to check if user logged in?<issue_comment>username_1: YOu can wrap your route in a stateful component.
Then, on `componentDidMount` check the token and and set token in state.
Then in render conditionnaly mount the route on state property.
```
class CheckToken extends React.Component {
constructor() {
this.state = { isLogged: false}
}
componentDidMount() {
fetch(url).then(res => res.text()).then(token => this.setState({isLogged: token}))
}
render() {
return this.state.isLogged ? : null
}
}
```
Upvotes: -1 <issue_comment>username_2: The solution was to add a second flag, gotUnswerFromServer. Without it the component always redirected to "/" without waiting for the answer from the server.
```
export default class PrivateRoute extends React.Component {
constructor(props){
super(props);
this.state = {
isLogged: false,
gotUnswerFromServer: false
}
}
componentDidMount(){
const session = read_cookie('session');
fetch(CHECK_TOKEN_API_URL, {
method: 'post',
headers: {'Accept': 'application/json', 'Content-Type': 'application/json'},
body: JSON.stringify({ id: session.id, token: session.token })
}).then( res => {
if(res.ok) this.setState({ gotUnswerFromServer: true, isLogged: true })
})
}
render() {
if( this.state.gotUnswerFromServer ){
if( this.state.isLogged ) return
else return
} else return null
}
}
```
Upvotes: 0 <issue_comment>username_3: async private router react
==========================
Don't know if this will help but after searching the entire Internet came to this decision:
<https://hackernoon.com/react-authentication-in-depth-part-2-bbf90d42efc9>
<https://github.com/dabit3/react-authentication-in-depth/blob/master/src/Router.js>
my case was to redirect from hidden to home page if user did not have required role:
====================================================================================
### PrivateRoute
```
import React, { Component } from 'react';
import { Route, Redirect, withRouter } from 'react-router-dom';
import PropTypes from 'prop-types';
import { roleChecker } from '../helpers/formatter';
import { userInfoFetch } from '../api/userInfo';
class PrivateRoute extends Component {
state = {
haveAcces: false,
loaded: false,
}
componentDidMount() {
this.checkAcces();
}
checkAcces = () => {
const { userRole, history } = this.props;
let { haveAcces } = this.state;
// your fetch request
userInfoFetch()
.then(data => {
const { userRoles } = data.data;
haveAcces = roleChecker(userRoles, userRole); // true || false
this.setState({
haveAcces,
loaded: true,
});
})
.catch(() => {
history.push('/');
});
}
render() {
const { component: Component, ...rest } = this.props;
const { loaded, haveAcces } = this.state;
if (!loaded) return null;
return (
{
return haveAcces ? (
) : (
);
}}
/>
);
}
}
export default withRouter(PrivateRoute);
PrivateRoute.propTypes = {
userRole: PropTypes.string.isRequired,
};
```
### ArticlesRoute
```
import React from 'react';
import { Route, Switch } from 'react-router-dom';
import PrivateRoute from '../PrivateRoute';
// pages
import Articles from '../../pages/Articles';
import ArticleCreate from '../../pages/ArticleCreate';
const ArticlesRoute = () => {
return (
);
};
export default ArticlesRoute;
```
Upvotes: 1 <issue_comment>username_4: If you are using redux you can show temporary 'loading ...' view. The route will be redirected only if a user is null and loaded.
PrivateRoute.js
```
import React from 'react';
import PropTypes from 'prop-types';
import { useSelector } from 'react-redux';
import { Route, Redirect } from 'react-router-dom';
import { selectors } from 'settings/reducer';
const PrivateRoute = ({ component: Component, ...rest }) => {
const user = useSelector(state => selectors.user(state));
const isLoaded = useSelector(state => selectors.isLoaded(state));
return (
!isLoaded ? (
<>
) : user ? (
) : (
)
}
/>
);
};
export default PrivateRoute;
PrivateRoute.propTypes = {
component: PropTypes.any
};
```
routes.js
```
import React from 'react';
import { BrowserRouter, Route, Switch } from 'react-router-dom';
import PrivateRoute from './components/PrivateRoute';
export const Routes = () => (
);
```
Upvotes: 2 <issue_comment>username_5: In my case the problem was that after each refresh of a private page the system redirected the user to home before the auth check was executed: by default the token value in the store was null, so the user was treated as unauthorized.
I fixed it by changing the default Redux state token value to undefined.
In my case "undefined" means that the system has not yet checked whether the user is authorized.
If the user is authorized, the token value will be some string;
if not authorized - null,
so the PrivateRoute component looks like this:
```
import React from 'react';
import {Redirect, Route} from "react-router-dom";
import {connect} from "react-redux";
const PrivateRoute = ({children, token, ...props}) => {
const renderChildren = () => {
if (!!token) {// if it's a string - show children
return children;
} else if (token === undefined) { // if undefined show nothing, but not redirect
return null; // no need to show even loader, but if necessary, show it here
} else { // else if null show redirect
return (
);
}
};
return (
{renderChildren()}
)
};
function mapStateToProps(state) {
return {
token: state.auth.token,
}
}
export default connect(mapStateToProps)(PrivateRoute);
```
App.js
```
```
Upvotes: 2
|
2018/03/15
| 2,448 | 8,847 |
<issue_start>username_0: I think that `java.time.Instant` is the best choice to store a date into DB: it is the most likely *TIMESTAMP* and you are not depending by timezone, it is just a moment on the time.
JPA supports `LocalDate`, `LocalTime`, `LocalDateTime` etc. but not Instant. Sure, you can use either `AttributeConverter` or some libraries like [Jadira](http://jadira.sourceforge.net) but why it isn't supported out of the box?<issue_comment>username_1: I'll try this again. There is some discussion in [the issue](https://github.com/eclipse-ee4j/jpa-api/issues/63). The latest discussion seems to be:
>
> mkarg said: While that is absolutely correct, the technical answer is
> a bit more complex: What is the final predicate that makes a data type
> eligible for inclusion in the set of mandatory type mappings?
>
>
> One could say, that predicate is "being essential" or "being of common
> use", but who defines what "essential" or "common use" is? See, for
> some applications, support for java.awt.Image and java.net.URL might
> be much more essential than support for LocalDate or ZonedDateTime. On
> the other hand, other applications might be full of LocalDate but
> never uses Instant. So where exactly to make the cut? This becomes
> particularly complex when looking at the sheer amount of types found
> in the JRE, and it is obvious there has to be a cut somewhere. Even
> JavaFX, which is bundled with the JRE, does not support Instant still
> in v8, so why should JPA? And looking at the current progress of
> Project Jigsaw, possibly the qualifying predicate might simply be
> answered by "all types in a particular jigsaw module"?
>
>
> Anyways, it is not up to me to decide. I do support your request, and
> would love to see support for rather all Java Time API times,
> particularly for Instant and Duration, and your request has prominent
> supporters like for example Java Champion <NAME> as I learned
> recently. But I doubt the final answer will be as simple an satisfying
> as we would love to have it.
>
>
> Maybe it would be better to simply set up another JSR, like "Common
> Data Type Conversions for the Java Platform", which provides much more
> mappings than just date and time, but also would not be bound to JPA
> but also could be used by JAXB, JAX-RS, and possibly more API that
> deal which the problem of transforming " to "? Having such a vehicle
> would really reduce boilerplate a lot.
>
>
>
**TL-DR; There are a lot of types. We had to draw the line somewhere.**
There is [a new issue](https://github.com/eclipse-ee4j/jpa-api/issues/163) for it to be added to a future JPA version.
Another interesting bit of analysis I found on [a thread](https://sourceforge.net/p/threeten/mailman/message/31733363/) by [<NAME>](http://wiki.c2.com/?DouglasSurber) (works on JDBC):
>
>
> >
> > The JDK 8 version of JDBC includes support for most of the SQL types
> > that correspond to 310 classes.
> >
> >
> > * DATE - LocalDate
> > * TIME - LocalTime
> > * TIMESTAMP WITH OUT TIME ZONE - LocalDateTime
> > * TIMESTAMP WITH TIME ZONE - OffsetDateTime
> >
> >
> > JDK 8 version of JDBC does not include a mapping between the INTERVAL
> > types and the corresponding 310 classes.
> >
> >
> > There is no SQL type that exactly corresponds to any other 310
> > classes. As a result, the JDBC spec is silent for all other classes.
> >
> >
> > I would strongly encourage JDBC developers to use the new 310
> > classes. There are problems with java.util.Date, java.sql.Date,
> > java.sql.Time, and java.sql.Timestamp. You should consider them
> > deprecated. The 310 classes are vastly superior.
> >
> >
> > Douglas
> >
> >
> >
>
>
>
**TL:DR; We just picked one Java 8 type for each of the 4 possible ways you might store temporal data in the database.**
Finally, if you read through [this thread](http://download.oracle.com/javaee-archive/jpa-spec.java.net/users/2017/05/1048.html) it appears there is significant cultural pressure to keep standard APIs small and simple.
Upvotes: 4 <issue_comment>username_2: It's not a JPA issue but a JDBC issue.
JDBC supports Date, Timestamp, LocalDate, LocalTime and LocalDateTime but **NOT Instant**.
This is not a Java issue but an SQL issue, whereby what is stored in the database is a year-month-day-hour-minute-second construct.
Think of SQL functions :
YEAR()
MONTH()
etc...
These functions cannot be applied to a plain milliseconds-since-1970 number; they require a date-time construct such as LocalDateTime.
Upvotes: -1 <issue_comment>username_3: That is a $ 1M question.
Why it wasn't supported in the past, I don't know. It is indeed odd. However, it [seems](https://github.com/jakartaee/persistence/commit/694f93964a31c7946e49e30c91635e34a77c38b9) it might become part of upcoming JPA 3.2 Spec. In my opinion `Instant` should have been the first JSR-310 type to support - not the last coming along 10 years later.
Here is some background:
Almost all RDBMSes can indeed only store an Instant. Some, like PostgreSQL `timestamptz`, will give you the *illusion* that they can do more, but it is really a hoax as pointed out by many others.
If your entity class looks like this
```java
@Entity
public class MyEntity {
...
@Column(name="created_at")
private ZonedDateTime createdAt;
public ZonedDateTime getCreatedAt() {
return this.createdAt;
}
public void setCreatedAt(ZonedDateTime ts) {
this.createdAt = ts;
}
}
```
you are most likely **doing it wrong**: With this, your persistence layer code is giving you the illusion that it can store a `ZonedDateTime` for you while in reality it can't.
(the notable exception is Oracle Database which can actually store a ZonedDateTime without loss of information, but I must admit I've never seen it used in real life).
I've found [this comment](https://github.com/eclipse-ee4j/eclipselink/issues/259#issuecomment-427973730):
>
> Instant is not covered by JDBC, so is not by JPA. At least not yet.
>
>
>
which may explain why the JPA maintainers did not (until recently) acknowledge why it should be mentioned in the JPA Spec. But the above statement is wrong. Support for `Instant` in JDBC has been there all along. Let me explain:
### Reading and writing Instant values using JDBC
The JDBC `java.sql.Timestamp` and `Instant` are essentially the same thing. Except for some outliers caused by the fact that `Instant` can store dates further in the past or into the future than can `Date` (which `Timestamp` extends from), there is a lossless conversion between the two. (We are talking about dates in the future year 292,469,238 and beyond which would get you into trouble, and similar nonsense on the past side, so yes, for all *practical purposes* there is a lossless conversion between the two.)
So all that is left for us is to explain to the JDBC driver that we provide and expect values in UTC.
Suppose we have
```java
private static final Calendar UTC_CAL = Calendar.getInstance(TimeZone.getTimeZone(ZoneOffset.UTC));
```
then **reading** an `Instant` from a database of column type `TIMESTAMP` can be done like this:
```java
Instant myInstant = resultSet.getTimestamp(colIdx, UTC_CAL).toInstant();
```
while **writing** can be done like this:
```java
preparedStatement.setTimestamp(colIdx, Timestamp.from(myInstant), UTC_CAL);
```
It should be noted that the above methodology is "safe", meaning it will work consistently regardless of the database server's setting for default timezone or your own JVM's timezone setting.
### ORM support
As you can imagine, those ORMs that do support `Instant` behind the scenes do exactly as above. Hibernate and Nucleus support `Instant`; EclipseLink does not yet.
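For a provider that does not yet map `Instant` natively, the standard JPA 2.1 `AttributeConverter` mentioned in the question is the commonly used workaround. A minimal sketch (the class name is illustrative):
```java
import java.sql.Timestamp;
import java.time.Instant;
import javax.persistence.AttributeConverter;
import javax.persistence.Converter;

// autoApply = true makes the converter apply to every Instant attribute automatically
@Converter(autoApply = true)
public class InstantAttributeConverter implements AttributeConverter<Instant, Timestamp> {

    @Override
    public Timestamp convertToDatabaseColumn(Instant instant) {
        return instant == null ? null : Timestamp.from(instant);
    }

    @Override
    public Instant convertToEntityAttribute(Timestamp dbTimestamp) {
        return dbTimestamp == null ? null : dbTimestamp.toInstant();
    }
}
```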
Your entity class should simply look like this:
```java
@Entity
public class MyEntity {
...
@Column(name="created_at")
private Instant createdAt;
// getters and setters
}
```
When using Hibernate, you can find many tales on the Internet of having to set `hibernate.jdbc.time_zone` to `UTC`. This is not necessary with the above, at least not with Hibernate 6. The reason is that Hibernate can see your intent (you specify `Instant`, not any of the other JSR-310 types), so it knows it *has* to use a static `UTC_CAL` when reading or writing the values from/to the database.
### Should you be using `Instant` in JPA code ?
As stated, `Instant` is not currently in the JPA Spec but seems to be finally coming up. There are two reasons why I would gladly use it anyway:
* Hibernate is Spring's default. Yes, you can probably use something else, but I bet very few do. So using something Hibernate specific doesn't bother me too much.
* When `Instant` finally comes to the JPA Spec, I bet it will simply work just like Hibernate already do. So no change.
Upvotes: 1
|
2018/03/15
| 2,702 | 9,821 |
<issue_start>username_0: For some reason I mixed a table of posts with comments.... So a row is a post when `comment_of=0` and a comment when `comment_of` equals the id of a post, like in this example
Posts
```
id_post | id_poster | comment_of | post
23 3 0 hi
24 4 0 hello
25 5 0 how are you
26 3 25 lets go
27 3 24 come on
28 4 25 haha
29 3 25 go away
30 2 24 aint funny
31 3 23 lol
32 3 0 ok
35 3 0 here we go
```
Now what I want is to order the posts by the number of comments each one has (i.e. the number of rows whose `comment_of` points to it), descending.
**MySql** Something like this
```
SELECT post
FROM posts
WHERE comment_of=0
ORDER BY (number of `comment of`) DESC
```
What I expect is something like this
```
id_post | id_poster | comment_of | post
25 5 0 how are you (3 comments)
24 4 0 hello (2 comments)
23 3 0 hi (1 comment)
32 3 0 ok (0 comments)
35 3 0 here we go (0 comments)
```
|
2018/03/15
| 2,394 | 8,630 |
<issue_start>username_0: I was wondering if it is possible to turn my PHP, MySQL project into a Native Android App (.apk)?
I did some research on the topic, but I didn't find a straight answer.
Any opinion or idea?
Br,
|
2018/03/15
| 671 | 2,010 |
<issue_start>username_0: How can I return the result of an `sbt` task to `stdout`?
Say I have a task in sbt that returns a `java.io.File`; how can I run `sbt` so that it returns just the result of the task, without any `[success]` messages or other logging?
e.g. if `sbt "show foo"` returns
```
...
[info] Lots of output noise
[info] /usr/local/foo/bar
[success] Total time: 43 s, completed ...
```
I want `echo $(sbt show foo)` to return something like
```
/usr/local/foo/bar
```
For context, the returned file will be used in a shell script, moved elsewhere etc.<issue_comment>username_1: Basically what you want to do is change the verbosity of sbt. You can do that by adding `-error` at the end of your command.
For example, if I have in my sbt:
```
val filePath = "/some/file/path"
lazy val getFilePath = taskKey[Unit]("A task that gets the file path")
getFilePath := {
println(filePath)
}
```
So when running the command:
```
sbt getFilePath -error
```
I get:
```
/some/file/path
```
Upvotes: 0 <issue_comment>username_2: You can use the `print` command similar to `show` to print result of any possible task and the `--error` global log level flag.
The format is following:
```sh
sbt "print " --error
```
SBT logging docs
* <https://www.scala-sbt.org/1.x/docs/Tasks.html#Streams%3A+Per-task+logging>
* <https://www.scala-sbt.org/1.x/docs/Howto-Logging.html>
**Simple example.**
*build.sbt*
```scala
lazy val foo = taskKey[String]("foo")
foo := "Foo"
```
*In terminal*
```sh
$ sbt "print foo" --error
Foo
```
**More sophisticated example.**
Detecting a fat-jar produced by [sbt-assembly](https://github.com/sbt/sbt-assembly).
*src/main/scala/Main.scala*
```scala
object Main {
def main(args: Array[String]): Unit = {
println("Hello world!")
}
}
```
*plugins.sbt*
```scala
addSbtPlugin("com.eed3si9n" % "sbt-assembly" % "0.15.0")
```
*In terminal*
```sh
$ java -jar $(sbt "print assembly::assemblyOutputPath" --error)
Hello world!
```
Upvotes: 2 [selected_answer]
|
2018/03/15
| 1,617 | 6,269 |
<issue_start>username_0: I'm trying to restrict Users by role to access only particular folders within an S3 bucket. The bucket is configured as "mock mountable" so to speak so that we can use it for file sharing as if it were a more traditional server. Each user is using CloudBerry to access S3 remotely.
Here's my current (broken) policy, and bucket name is "bluebolt".
```
{
"Version": "2012-10-17",
"Statement": [
{
"Sid": "AllowUserToSeeBucketListInTheConsole",
"Effect": "Allow",
"Action": [
"s3:GetBucketLocation",
"s3:ListAllMyBuckets"
],
"Resource": [
"arn:aws:s3:::*"
]
},
{
"Sid": "AllowRootAndHomeListingOfCompanySharedAndPAndP",
"Effect": "Allow",
"Action": [
"s3:ListBucket"
],
"Resource": [
"arn:aws:s3:::bluebolt"
],
"Condition": {
"StringEquals": {
"s3:prefix": [
"",
"Production and Processing/",
"Production and Processing/${aws:username}",
"Company Shared/"
],
"s3:delimiter": [
"/"
]
}
}
},
{
"Sid": "AllowListingOfCompanyShared",
"Effect": "Allow",
"Action": [
"s3:ListBucket"
],
"Resource": [
"arn:aws:s3:::bluebolt"
],
"Condition": {
"StringLike": {
"s3:prefix": [
"Company Shared/*"
]
}
}
},
{
"Sid": "AllowListingOfUserFolder",
"Effect": "Allow",
"Action": [
"s3:ListBucket"
],
"Resource": [
"arn:aws:s3:::bluebolt"
],
"Condition": {
"StringLike": {
"s3:prefix": [
"Production and Processing/${aws:username}/",
"Production and Processing/${aws:username}/*"
]
}
}
},
{
"Sid": "AllowAllS3ActionsCompanyShared",
"Effect": "Allow",
"Action": [
"s3:*"
],
"Resource": [
"arn:aws:s3:::bluebolt/Company Shared/*"
]
},
{
"Sid": "AllowAllS3ActionsInUserFolder",
"Effect": "Allow",
"Action": [
"s3:*"
],
"Resource": [
"arn:aws:s3:::bluebolt/Production and Processing/${aws:username}/*"
]
},
{
"Sid": "DenyAllS3ActionsInManagement",
"Effect": "Deny",
"Action": [
"s3:*"
],
"Resource": [
"arn:aws:s3:::bluebolt/Management/*"
]
}
]
}
```
So, what I want to do is to restrict users to list/read/write only what is in "/Production and Processing/[UserName]", along with being able to list/read everything in "/Company Shared" while specifically prohibiting all access to "/Management" as well as everything in "/Production and Processing/\*" except their user folder. Ideally a user would only see "/Company Shared" and "/Production and Processing" in bluebolt, and once they get into "/Production and Processing", they'd only see their user-named folder which is their workspace.
Right now, I am getting sporadic access by users ("You do not have permission to access") once they dig below the bluebolt top level bucket.
I don't know if this use case is common or if I'm trying to fit too-square a peg into a round hole, but any feedback/tips/similar policy applications/harsh criticism is welcome and greatly appreciated!<issue_comment>username_1: [IAM policy variables with federated users](https://aws.amazon.com/premiumsupport/knowledge-center/iam-policy-variables-federated/)
${aws:userName} policy variable will not work for roles. Use the ${aws:userID} policy variable instead of the ${aws:userName} policy variable.
${aws:userid} variable will be "ROLEID:caller-specified-name".
I used the same policy with `aws:userid` and a role.
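For illustration, the per-user statement from the question would then swap `${aws:username}` for `${aws:userid}` roughly like this (bucket and folder names copied from the question; this is an untested sketch):
```json
{
    "Sid": "AllowAllS3ActionsInUserFolder",
    "Effect": "Allow",
    "Action": [
        "s3:*"
    ],
    "Resource": [
        "arn:aws:s3:::bluebolt/Production and Processing/${aws:userid}/*"
    ]
}
```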
1. Get Role ID.
```
aws iam get-role --role-name Arsenal-role --query Role.RoleId
AROAXXT2NJT7D3SIQN7Z6
```
2. Ask your users to upload into `Bucket/Prefix/<RoleID>:<caller-specified-name>/`
```
aws s3 cp test.txt 's3://mydemo/Production and Processing/AROAXXT2NJT7D3SIQN7Z6:john/' --profile s3role
upload: ./test.txt to s3://mydemo/Production and Processing/AROAXX2NJT7D3SIQN7Z6:john/test.txt
aws s3 cp test.txt 's3://mydemo/Management/' --profile s3role
upload failed: ./test.txt to s3://mydemo/Management/test.txt An error occurred (AccessDenied) when calling the PutObject operation: Access Denied
aws s3 cp test.txt 's3://mydemo/Production and Processing/' --profile s3role
upload failed: ./test.txt to s3://mydemo/Production and Processing An error occurred (AccessDenied) when calling the PutObject operation: Access Denied
```
Upvotes: 1 <issue_comment>username_2: Here's the code I got to work.
```
{
"Version": "2012-10-17",
"Statement": [
{
"Sid": "AllowListingOfUserFolder",
"Action": [
"s3:ListBucket"
],
"Effect": "Allow",
"Resource": [
"arn:aws:s3:::bluebolt"
],
"Condition": {
"StringLike": {
"s3:prefix": [
"*",
"bluebolt/Company Shared/*",
"bluebolt/Production and Processing/*",
"bluebolt/Production and Processing/${aws:userName}/*"
]
}
}
},
{
"Sid": "AllowAllS3ActionsInUserFolder",
"Action": [
"s3:PutObject",
"s3:GetObject",
"s3:GetObjectVersion",
"s3:DeleteObject"
],
"Effect": "Allow",
"Resource": [
"arn:aws:s3:::bluebolt/Production and Processing/${aws:userName}/*"
]
},
{
"Sid": "AllowCertainS3ActionsInCompanyShared",
"Action": [
"s3:GetObject",
"s3:GetObjectVersion"
],
"Effect": "Allow",
"Resource": [
"arn:aws:s3:::bluebolt/Company Shared/*"
]
}
]
}
```
Upvotes: 1 [selected_answer]
|
2018/03/15
| 345 | 1,154 |
<issue_start>username_0: I tried many different code samples, but all of them only returned the local IP address of the user (e.g. 192.168.1.2).
How may I obtain user's public IPv4 address using WebRTC? (e.g. 192.168.127.12)
I know **it's possible** because a [website](https://www.expressvpn.com/webrtc-leak-test) could show my leaked public IPv4 address even when I used a VPN.<issue_comment>username_1: Don't get me wrong, but the client (where the browser is running) might not know its public IP. I would host a small PHP script on a server on the internet which prints `$_SERVER['REMOTE_ADDR']`, call it from JS, and gather the address back into a JS var... sounds dirty!
Edit: WRONG ANSWER, got the asking user wrong, somehow.
Upvotes: -1 <issue_comment>username_2: Here's a popular proof-of-concept:
<https://github.com/diafygi/webrtc-ips>
For me, it was only sending back my private IP. It appears you can get around this by changing the server IP:
<https://github.com/diafygi/webrtc-ips/issues/33>
The site you linked to uses the server address `stun:stun.l.google.com:19302`, which returned my public address as expected.
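For reference, a bare-bones sketch of that technique (the candidate-string parsing below is simplified and only for illustration) looks roughly like this:
```js
// Gather ICE candidates via a public STUN server and log the
// server-reflexive ("srflx") ones, which carry the public address.
const pc = new RTCPeerConnection({
  iceServers: [{ urls: 'stun:stun.l.google.com:19302' }]
});
pc.createDataChannel(''); // dummy channel so candidate gathering starts

pc.onicecandidate = (event) => {
  if (!event.candidate) return; // null signals that gathering finished
  const cand = event.candidate.candidate;
  if (cand.includes(' typ srflx')) {
    // Field 5 of the candidate string is the connection address.
    console.log('Public address seen by STUN:', cand.split(' ')[4]);
  }
};

pc.createOffer().then((offer) => pc.setLocalDescription(offer));
```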
Upvotes: 3 [selected_answer]
|
2018/03/15
| 643 | 2,018 |
<issue_start>username_0: Following the instructions from the [official website](https://www.tensorflow.org/install/install_c) on MacOSX High Sierra.
I'm getting the following error:
```
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0
100 17.9M 100 17.9M 0 0 13.8M 0 0:00:01 0:00:01 --:--:-- 13.8M
./: Can't set user=0/group=0 for .
tar: Error exit delayed from previous errors.
```
I'm following the Tensorflow C-API instructions. Once I download the file, I run the following command `sudo tar -xvf libtensorflow-cpu-darwin-x86_64-1.4.0.tar.gz -C /usr/local` which gives the error.<issue_comment>username_1: Not entirely sure why that error appears, but it seems it can be safely ignored. After running the command, I later noticed that the files do indeed get installed within `/usr/local`.
Also, it seems to be occuring only on more recent version of Mac(High Sierra) and didn't appear on El Capitan.
Upvotes: 0 <issue_comment>username_2: Looking at [this github comment](https://github.com/Homebrew/brew/issues/3228#issuecomment-332679274), looks like you can just chown "/usr/local" on High Sierra.
So, I solved this just:
1. run: `$ mkdir ~/tensorflor-clang && cd ~/tensorflor-clang`
2. run: `$ curl -L \
"https://storage.googleapis.com/tensorflow/libtensorflow/libtensorflow-${TF_TYPE}-${OS}-x86_64-1.10.1.tar.gz" > tensorflow.tar.gz`
We'll create folder for save Tensorflow's source. Then...
run: `$ sudo chown $(whoami) /usr/local/*`, once we've chown'd "/usr/local", we must extract tarball, so we run `$ tar -xvf tensorflow.tar.gz`
Finally just copy extracted lib and include folders with `cp lib/ /usr/local/lib/` and `cp include/ /usr/local/include`
The problem is now that we must compile our C files with `gcc -I/usr/local/include -L/usr/local/lib YOUR_C_FILE.c -ltensorflow`
Upvotes: 1
|
2018/03/15
| 645 | 2,288 |
<issue_start>username_0: I have an aspnetcore webapp and I'd like it to write it's current version, as well as the dotnet core runtime version that it's running on to it's log when it starts up.
I'd like to do this as my webapp runs on various VM's in the cloud and I'd like to be able to look at the logs from all of them and make sure they're all running the same dotnet core runtime version.
What I want is something like this.
```
App version 1.0.1 running on dotnet 2.0.6
```
Getting my app version is easy (just the assembly version). However, I can't find a way to get the dotnet core runtime version.
I've seen various things referencing the [Microsoft.DotNet.PlatformAbstractions](https://www.nuget.org/packages/Microsoft.DotNet.PlatformAbstractions/) nuget package, however this doesn't appear to give me the dotnet runtime version at all.
There's also System.Environment.Version, but that reports `4.0.30319.42000` which is the ["desktop" dotnet 4.6+ framework version](https://stackoverflow.com/questions/12971881/how-to-reliably-detect-the-actual-net-4-5-version-installed), not the dotnet core version.<issue_comment>username_1: For a detailed description you can find the original article here:
<https://learn.microsoft.com/en-us/dotnet/framework/migration-guide/how-to-determine-which-versions-are-installed>
Along with the original github comment chain here: <https://github.com/dotnet/BenchmarkDotNet/issues/448>
```
public static string GetNetCoreVersion() {
var assembly = typeof(System.Runtime.GCSettings).GetTypeInfo().Assembly;
var assemblyPath = assembly.CodeBase.Split(new[] { '/', '\\' }, StringSplitOptions.RemoveEmptyEntries);
int netCoreAppIndex = Array.IndexOf(assemblyPath, "Microsoft.NETCore.App");
if (netCoreAppIndex > 0 && netCoreAppIndex < assemblyPath.Length - 2)
return assemblyPath[netCoreAppIndex + 1];
return null;
}
```
Upvotes: 4 <issue_comment>username_2: Since .NET Core 3.0, you can directly call improved API to get such information.
```cs
var netCoreVer = System.Environment.Version; // 3.0.0
var runtimeVer = System.Runtime.InteropServices.RuntimeInformation.FrameworkDescription; // .NET Core 3.0.0-preview4.19113.15
```
Check out this [issue](https://github.com/dotnet/coreclr/issues/22844)
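Combining that with the assembly version gives the kind of startup line the question asks for — a rough sketch (plain `Console.WriteLine` stands in for whatever logger you use):
```cs
var appVersion = System.Reflection.Assembly.GetEntryAssembly()?.GetName().Version;
var runtime = System.Runtime.InteropServices.RuntimeInformation.FrameworkDescription;
Console.WriteLine($"App version {appVersion} running on {runtime}");
// e.g. "App version 1.0.1.0 running on .NET Core 3.0.0"
```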
Upvotes: 7 [selected_answer]
|
2018/03/15
| 668 | 1,945 |
<issue_start>username_0: I have variables that looks like these:
```
data.head()
Ones Population Profit
0 1 6.1101 17.5920
1 1 5.5277 9.1302
2 1 8.5186 13.6620
3 1 7.0032 11.8540
4 1 5.8598 6.8233
X = data.iloc[:, 0:cols]
y = data.iloc[:, cols]
X1 = np.matrix(X.values)
y1 = np.matrix(y.values)
X.shape
>>(97, 2)
y.shape
>>(97,)
X1.shape
>>(97, 2)
y1.shape
>>(1, 97)
```
`data` is in pandas frame.
I expected the dimension of y1 would be 97 X 1, but instead it is 1 X 97. Somehow y1 was transposed in the middle, and I don't understand why this is happening. Since my original y pandas array was 97 X 1, I thought y1 should be the same too, but apparently that's not how it works.
Any explanations?
|
2018/03/15
| 824 | 3,175 |
<issue_start>username_0: I'm trying to put together an AngularDart application, and I'm going back and forth on the documentation as I go.
In the AngularDart official site's architecture page, it talks about the all-important AppModule
[AngularDart Architecture](https://webdev.dartlang.org/angular/guide/architecture)
However, this is the only place that modules are mentioned. In all other places - including sample code and tutorials - the AppModule is completely missing, despite the fact that the Architecture guidance page insists that there needs to be a minimum of one module.
Can someone in the know clarify this?<issue_comment>username_1: Conceptually, `AppModule` is not a concrete thing, it is just "root-level dependency injection services" that you setup for your application. Some applications may have *none* and some may have many.
You'll notice in our *github\_issues* example application, we have one service in our "AppModule":
<https://github.com/dart-lang/angular/blob/master/examples/github_issues/web/main.dart>
```
import 'package:angular/angular.dart';
import 'package:examples.github_issues/api.dart';
import 'package:examples.github_issues/ui.dart';
import 'main.template.dart' as ng;
@Component(
selector: 'ng-app',
directives: const [
IssueListComponent,
],
template: '',
)
class NgAppComponent {}
void main() {
bootstrapStatic(
NgAppComponent,
const [
const ClassProvider(GithubService),
],
ng.initReflector,
);
}
```
... the `GithubService`. Any component (or service) hosted in this application can access it.
Does that help?
Upvotes: 2 <issue_comment>username_2: The [Router documentation](https://webdev.dartlang.org/angular/guide/router/4) contains a pretty good explanation on the Angular module pattern, and how to organize the main module and feature modules in a real-world Angular application.
>
> You’ll organize the crisis center to conform to the following
> recommended pattern for Angular apps:
>
>
> * Each feature area in its own folder
> * Each area with its own area root component
> * Each area root component with its own router outlet and child routes
> * Area routes rarely (if ever) cross
>
>
>
The `AppModule` itself is a stripped down component, the central hub for your application, if you will. It acts as a container for non-terminal feature module routes and application wide services. Like in the above linked doc:
```
@Component(
selector: 'my-app',
template: '''
Angular Router
==============
Crisis Center
Heroes
''',
styles: const ['.router-link-active {color: #039be5;}'],
directives: const [ROUTER_DIRECTIVES],
providers: const [HeroService],
)
@RouteConfig(const [
const Redirect(path: '/', redirectTo: const ['Heroes']),
const Route(
path: '/crisis-center/...',
name: 'CrisisCenter',
component: CrisisCenterComponent),
const Route(path: '/heroes', name: 'Heroes', component: HeroesComponent),
const Route(
path: '/hero/:id', name: 'HeroDetail', component: HeroDetailComponent),
const Route(path: '/**', name: 'NotFound', component: NotFoundComponent)
])
class AppComponent {}
```
Upvotes: 1
|
2018/03/15
| 934 | 3,622 |
<issue_start>username_0: I added a new property to a type mapping and I need to reindex all existing items of that type in order to use the new property.
Which API should I use to do this?<issue_comment>username_1: You have to use the [reindex api](https://www.elastic.co/guide/en/elasticsearch/reference/5.6/docs-reindex.html): first you have to create the new index, and then you can use the reindex api to transfer data from the source index into the new index.
Upvotes: 0 <issue_comment>username_2: If you are adding a new field which never existed before in your index, you don't need to reindex; you only have to add the new field by using the PUT Mapping API <http://nocf-www.elastic.co/guide/en/elasticsearch/reference/current/indices-put-mapping.html>
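For example, adding a single new field is a call along these lines (index and field names are made up; on clusters before 7.x the mapping type also appears in the URL):
```
PUT /my_index/_mapping
{
  "properties": {
    "my_new_field": {
      "type": "keyword"
    }
  }
}
```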
Documents you created before updating the mapping with the new field will not contain this new field, so searches or aggregations won't take this field into account; it will behave as a missing field.
If you need this new field to be considered in searches on old documents using the default value of the new field's type, then you need to reindex. For instance, if your new field is of type integer and you explicitly need this field to be included in the old documents with a zero value (the default) because you want to count how many documents have this new field = 0, then you need to reindex; but in most use cases we can treat missing fields as the default value, so there is no need to reindex.
There is no way in Elasticsearch (ES) to add a new field to the mapping and have old documents updated automatically, even with the default value for that new field, due to the nature of how ES stores data. ES uses immutable segments to store the indexes, so when you update a document, ES doesn't physically update the fields which changed; it creates a new copy of the old document updated with the new data and marks the old one as deleted. So even when you update a single field in a document, you get a new version of the document and the old one is marked as deleted.
Upvotes: 3 [selected_answer]<issue_comment>username_3: Another option is to create an index alias in Elasticsearch that your code will reference. If you need to make mapping changes, you can do the following, which allows little to no downtime (a rough sketch of the calls is shown after the list).
* create new index with updated ES mapping config
* use the reindex api to copy the data to this new index
* delete old index alias and recreate it with the same name.
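A rough sketch of those steps, assuming the alias `my_alias` currently points at `my_index_v1` and the updated mapping goes into `my_index_v2` (all names made up):
```
# create the new index, passing the updated mappings in the body
PUT /my_index_v2
# copy the existing documents across
POST /_reindex
{
  "source": { "index": "my_index_v1" },
  "dest": { "index": "my_index_v2" }
}
# atomically repoint the alias at the new index
POST /_aliases
{
  "actions": [
    { "remove": { "index": "my_index_v1", "alias": "my_alias" } },
    { "add": { "index": "my_index_v2", "alias": "my_alias" } }
  ]
}
```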
Upvotes: 2 <issue_comment>username_4: You need to execute these commands. Replace `my_index` with the name of your index.
```
# First of all: enable blocks write to enable clonage
PUT /my_index/_settings
{
"settings": {
"index.blocks.write": true
}
}
# clone index into a temporary index
POST /my_index/_clone/my_index-000001
# Disable blocks write
PUT /my_index/_settings
{
"settings": {
"index.blocks.write": false
}
}
# Copy back all documents in the original index to force their reindexation
POST /_reindex?wait_for_completion=false
{
"source": {
"index": "my_index-000001"
},
"dest": {
"index": "my_index"
}
}
# Copy the task id from the previous result and check the progression:
GET /_tasks/K1IOaNo8R26gRwc55yXJLQ:1165945865
# Finaly delete the temporary index
DELETE my_index-000001
```
What it does:
* clone your index into a temporary index
* then reindex all the documents from temporary to the original index. This will overwrite existing documents and regenerate the missing fields.
Caveats:
This method assumes that nothing inserts new data into the index. New data would be overwritten during the reindex process.
Upvotes: 2
|
2018/03/15
| 3,624 | 9,551 |
<issue_start>username_0: I'm looking at QEMU's [edu device](https://github.com/qemu/qemu/blob/master/docs/specs/edu.txt) ([source](https://github.com/qemu/qemu/blob/master/hw/misc/edu.c)) which provides a basic "educational" PCI device within QEMU, that can be accessed as a PCI device from within a QEMU guest like Linux.
I have been trying to get this to work with the [UIO](https://www.kernel.org/doc/html/v4.12/driver-api/uio-howto.html) driver (Userspace I/O) via the UIO PCI generic driver, as an exercise to better understand PCI devices in both QEMU and Linux.
My overall goal is to implement a Linux driver for an FPGA. The FPGA is connected to an ARM Cortex-A53 CPU as a PCI-E device, providing several distinct blocks of memory that will be treated as registers for device configuration. I'm using an x86\_64 QEMU initially to become familiar with PCI drivers and hopefully UIO. Note: [vfio](https://www.kernel.org/doc/Documentation/vfio.txt) has been suggested to me however I believe this relies on IOMMU support which I'm not sure exists on my target platform.
I'm having some trouble with the memory region mapping. The UIO PCI driver is (I think) meant to create entries in `/sys/class/uio/uio0/map` for each addressable region, however from what I can tell, there are no regions being automatically detected or set up when the UIO driver is bound to the edu device.
I'm starting my freshly compiled QEMU (`./configure --target-list=x86_64-softmmu`) with a yocto-generated "fairly standard" Linux 4.9 x86\_64 distro:
```
$ ./x86_64-softmmu/qemu-system-x86_64 --device edu -m 512 -nographic -serial mon:stdio -append 'console=ttyS0 root=/dev/hda' -kernel bzImage -hda image-qemu.ext3
```
Then within the guest, the edu PCI device is detected:
```
# lspci
00:00.0 Host bridge: Intel Corporation 440FX - 82441FX PMC [Natoma] (rev 02)
00:01.0 ISA bridge: Intel Corporation 82371SB PIIX3 ISA [Natoma/Triton II]
00:01.1 IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]
00:01.3 Bridge: Intel Corporation 82371AB/EB/MB PIIX4 ACPI (rev 03)
00:02.0 VGA compatible controller: Device 1234:1111 (rev 02)
00:03.0 Ethernet controller: Intel Corporation 82540EM Gigabit Ethernet Controller (rev 03)
00:04.0 Unclassified device [00ff]: Device 1234:11e8 (rev 10)
```
Loading the `uio_pci_generic` module and binding it to the edu device:
```
# modprobe uio_pci_generic
# echo "1234 11e8" > /sys/bus/pci/drivers/uio_pci_generic/new_id
# ls -l /sys/bus/pci/devices/0000\:00\:04.0/driver
lrwxrwxrwx 1 root root 0 Mar 15 01:50 /sys/bus/pci/devices/0000:00:04.0/driver -> ../../../bus/pci/drivers/uio_pci_generic
```
Taking a closer look at the device, noting the memory address `fea00000`:
```
# lspci -v -s 00:04.0
00:04.0 Unclassified device [00ff]: Device 1234:11e8 (rev 10)
Subsystem: Red Hat, Inc Device 1100
Flags: fast devsel, IRQ 10
Memory at fea00000 (32-bit, non-prefetchable) [size=1M]
Capabilities: [40] MSI: Enable- Count=1/1 Maskable- 64bit+
Kernel driver in use: uio_pci_generic
```
I built [lsuio](http://www.osadl.org/projects/downloads/UIO/user/lsuio-0.2.0.tar.gz) from source:
```
# ./lsuio -m -v
uio0: name=uio_pci_generic, version=0.01.0, events=0
Device attributes:
vendor=0x1234
uevent=DRIVER=uio_pci_generic
subsystem_vendor=0x1af4
subsystem_device=0x1100
resource=0x00000000fea00000 0x00000000feafffff 0x0000000000040200
msi_bus=1
modalias=pci:v00001234d000011E8sv00001AF4sd00001100bc00scFFi00
local_cpus=1
local_cpulist=0
irq=10
enable=1
driver_override=(null)
dma_mask_bits=32
device=0x11e8
d3cold_allowed=0
consistent_dma_mask_bits=32
config=4è
class=0x00ff00
broken_parity_status=0
# ls /sys/class/uio/uio0/ -l
total 0
-r--r--r-- 1 root root 4096 Mar 15 01:53 dev
lrwxrwxrwx 1 root root 0 Mar 15 01:53 device -> ../../../0000:00:04.0
-r--r--r-- 1 root root 4096 Mar 15 01:53 event
-r--r--r-- 1 root root 4096 Mar 15 01:53 name
drwxr-xr-x 2 root root 0 Mar 15 01:53 power
lrwxrwxrwx 1 root root 0 Mar 15 01:53 subsystem -> ../../../../../class/uio
-rw-r--r-- 1 root root 4096 Mar 15 01:22 uevent
-r--r--r-- 1 root root 4096 Mar 15 01:53 version
```
According to this, there should be a mappable region starting at `0xfea00000` I think, but no "map" directory appears and I haven't been able to work out why. Attempting to access `/dev/uio0` (read or mmap) results in Error 22: "Invalid argument". Opening the file and scanning to the end shows that the block device has zero size.
Firstly, do I need to manually create these region mappings, or should the UIO driver be setting these up automatically? Does the edu device need to do something extra to make this happen?
Secondly, are there other QEMU PCI devices that are known to work with UIO? Ideally something with a working Linux driver so I can try to understand both the QEMU device side and the corresponding Linux driver side.
On that last note, is anyone aware of a working Linux driver for the edu device?<issue_comment>username_1: Turns out that the [documentation](https://01.org/linuxgraphics/gfx-docs/drm/driver-api/uio-howto.html#writing-userspace-driver-using-uio-pci-generic) is slightly ambiguous, enough to confuse at least myself and one other person:
This [long and windy thread](https://www.spinics.net/lists/kvm/msg73837.html) explains that the `uio_pci_generic` driver doesn't actually map PCI BAR regions to the `maps` directory. Instead, the intention is that the standard PCI sysfs interfaces be used.
Therefore I have been able to access the PCI device's memory via a mmap of `/sys/class/uio/uio0/device/resource0`.
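A stripped-down sketch of that mmap (the 1 MB length matches the BAR size reported by lspci above, and reading register 0x00 — the edu identification register — is just for illustration):
```c
#include <fcntl.h>
#include <stdint.h>
#include <stdio.h>
#include <sys/mman.h>
#include <unistd.h>

int main(void)
{
    int fd = open("/sys/class/uio/uio0/device/resource0", O_RDWR);
    if (fd < 0) {
        perror("open resource0");
        return 1;
    }

    /* Map the whole 1 MB BAR0 of the edu device. */
    volatile uint32_t *bar0 = mmap(NULL, 0x100000, PROT_READ | PROT_WRITE,
                                   MAP_SHARED, fd, 0);
    if (bar0 == MAP_FAILED) {
        perror("mmap");
        return 1;
    }

    printf("edu identification register: 0x%08x\n", (unsigned int)bar0[0]);

    munmap((void *)bar0, 0x100000);
    close(fd);
    return 0;
}
```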
However trying to do a blocking read on `/dev/uio0` still results in an "Invalid Argument" error, so I'm not yet sure how to wait for or handle interrupts using this sysfs interface.
Upvotes: 2 <issue_comment>username_2: To add to the OP's answer, it seems that `/dev/uio0` is used for receiving and counting interrupts with the `uio_pci_generic` module.
An [example code of using `uio_pci_generic`](https://www.kernel.org/doc/html/v4.13/driver-api/uio-howto.html#example-code-using-uio-pci-generic) shows this:
```c
uiofd = open("/dev/uio0", O_RDONLY);
...
/* Wait for next interrupt. */
err = read(uiofd, &icount, 4);
```
`icount` is the number of interrupts received.
When using [qemu's `edu` device](https://github.com/qemu/qemu/blob/master/docs/specs/edu.txt), you can use `resource0` to access the mapped IO, and use `/dev/uio0` to wait for an interrupt.
Here is a user-space example (extension of the [example code](https://www.kernel.org/doc/html/v4.13/driver-api/uio-howto.html#example-code-using-uio-pci-generic) above) of using `uio_pci_generic` to write and read the "card liveness check" of `edu` device that inverts the input, and to trigger `edu` interrupts by writing to "interrupt raise register":
```c
#include <stdio.h>
#include <stdlib.h>
#include <stdint.h>
#include <string.h>
#include <errno.h>
#include <fcntl.h>
#include <unistd.h>
#include <sys/stat.h>
#include <sys/mman.h>

#define EDU_IO_SIZE 0x100
#define EDU_CARD_VERSION_ADDR 0x0
#define EDU_CARD_LIVENESS_ADDR 0x1
#define EDU_RAISE_INT_ADDR 0x18
#define EDU_CLEAR_INT_ADDR 0x19

int main()
{
    int uiofd;
    int configfd;
    int bar0fd;
    int resetfd;
    int err;
    int i;
    unsigned icount;
    unsigned char command_high;
    volatile uint32_t *bar0;

    uiofd = open("/dev/uio0", O_RDWR);
    if (uiofd < 0) {
        perror("uio open:");
        return errno;
    }
    configfd = open("/sys/class/uio/uio0/device/config", O_RDWR);
    if (configfd < 0) {
        perror("config open:");
        return errno;
    }

    /* Read and cache command value */
    err = pread(configfd, &command_high, 1, 5);
    if (err != 1) {
        perror("command config read:");
        return errno;
    }
    command_high &= ~0x4;

    /* Map edu's MMIO */
    bar0fd = open("/sys/class/uio/uio0/device/resource0", O_RDWR);
    if (bar0fd < 0) {
        perror("bar0fd open:");
        return errno;
    }
    /* mmap the device's BAR */
    bar0 = (volatile uint32_t *)mmap(NULL, EDU_IO_SIZE, PROT_READ|PROT_WRITE, MAP_SHARED, bar0fd, 0);
    if (bar0 == MAP_FAILED) {
        perror("Error mapping bar0!");
        return errno;
    }

    fprintf(stdout, "Version = %08X\n", bar0[EDU_CARD_VERSION_ADDR]);

    /* Test the invertor function */
    i = 0x12345678;
    bar0[EDU_CARD_LIVENESS_ADDR] = i;
    fprintf(stdout, "Inversion: %08X --> %08X\n", i, bar0[EDU_CARD_LIVENESS_ADDR]);

    /* Clear previous interrupt */
    bar0[EDU_CLEAR_INT_ADDR] = 0xABCDABCD;
    /* Raise an interrupt */
    bar0[EDU_RAISE_INT_ADDR] = 0xABCDABCD;

    for(i = 0;; ++i) {
        /* Print out a message, for debugging. */
        if (i == 0)
            fprintf(stderr, "Started uio test driver.\n");
        else
            fprintf(stderr, "Interrupts: %d\n", icount);

        /****************************************/
        /* Here we got an interrupt from the
           device. Do something to it. */
        /****************************************/

        /* Re-enable interrupts. */
        err = pwrite(configfd, &command_high, 1, 5);
        if (err != 1) {
            perror("config write:");
            break;
        }
        /* Clear previous interrupt */
        bar0[EDU_CLEAR_INT_ADDR] = 0xABCDABCD;
        /* Raise an interrupt */
        bar0[EDU_RAISE_INT_ADDR] = 0xABCDABCD;
        /* Wait for next interrupt. */
        err = read(uiofd, &icount, 4);
        if (err != 4) {
            perror("uio read:");
            break;
        }
    }
    return errno;
}
```
The result looks something like this:
```
Version = 010000ED
Inversion: 12345678 --> EDCBA987
Started uio test driver.
Interrupts: 3793548
Interrupts: 3793549
Interrupts: 3793550
Interrupts: 3793551
Interrupts: 3793552
Interrupts: 3793553
Interrupts: 3793554
Interrupts: 3793555
Interrupts: 3793556
...
```
Upvotes: 0
|
2018/03/15
| 1,912 | 5,099 |
<issue_start>username_0: I have a `QUERY` formula that imports the first three rows of some data, `"Select Col1, Col2 Order by Col2 desc limit 3"`, producing an output like this:
```
| c1 | c2
r1 | Red | 53
r2 | Blue | 45
r3 | Yellow | 15
```
I'd like to get those same formula values to look like this:
```
| c1 | c2 | c3 | c4 | c5 | c6
r1 | Red | 53 | Blue | 45 | Yellow | 15
```
where the whole output is in a single row.
Is this possible?
|
2018/03/15
| 514 | 1,627 |
<issue_start>username_0: How in Scala can you make one variable reference another? E.g.
```
import scala.collection.mutable.Set
var a = Set(1)
var b = a
a = a + 2
// a = Set(1, 2)
// b = Set(1)
```
I'd want `b` to point to the "updated" `a`.<issue_comment>username_1: Let's see what's going on:
```
scala> var a = Set(1)
a: scala.collection.mutable.Set[Int] = Set(1)
scala> var b = a
b: scala.collection.mutable.Set[Int] = Set(1)
scala> a eq b
res9: Boolean = true
```
That means the two variables reference the same value (in this case, mutable reference because you used `var`).
Now let's see what happens if we do `a = a + 2`:
```
scala> a = a + 2
a: scala.collection.mutable.Set[Int] = Set(1, 2)
scala> a eq b
res10: Boolean = false
```
What we should use is the `+=` operator which, in the case of `a += 2`, will be desugared by the compiler into `a.+=(2)` (the `+=` operation from `Set`), keeping the reference held by `a` and making the change in place. If you do `a = a + 2` instead, you are rebinding `a` to a new object `a + 2`, so the two variables no longer point to the same instance. And, since the data structure you are using is mutable, you can replace the mutable `var` references with immutable `val`s.
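A small self-contained check of that behaviour (written as a plain script, using `val`s as suggested):
```scala
import scala.collection.mutable.Set

val a = Set(1)
val b = a              // b refers to the same mutable Set instance
a += 2                 // in-place update through Set's += method, no reassignment
println(a eq b)        // true – still one and the same object
println(b.contains(2)) // true – b sees the element added through a
```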
Upvotes: 2 <issue_comment>username_2: If you use `def` instead of `var` like this
```
def b = a
```
then accessing `b` will always return the latest value of `a`. But it is better to use `val` and update the `Set` in-place as explained in other answers to your question.
Better still, consider how you might use a more functional style and avoid having values that change over time like this.
Upvotes: 2
|
2018/03/15
| 738 | 3,046 |
<issue_start>username_0: I am exploring Azure Data Lake and I am new to this field. I have explored many things and read many articles. Basically I have to develop a **Power BI dashboard** from data from different sources.
In a classic SQL Server stack I can write an ETL (Extract, Transform, Load) process to bring the data from my system databases into the Data Warehouse database, then use that Data Warehouse with Power BI by using SSAS etc.
But I want to use Azure Data Lake, and I have explored Azure Data Lake Store and Azure Data Lake Analytics (U-SQL). I drew the following architecture diagram.
[](https://i.stack.imgur.com/ujYnV.png)
1. Is there anything which I am missing in the current flow of the application?
2. I can get data directly from Azure Data Lake using Power BI, so there is no need for a Data Warehouse. Am I right?
3. I can create a database in Azure Data Lake; will that be my Data Warehouse?
4. What will be the best format for the Output data from Original file in Azure Data Lake e.g .csv?<issue_comment>username_1: **1 & 2)** Currently ADLS only has limited support for allowing PowerBI to query directly over it. If your data is too large (greater than about 10GB I believe), then PowerBI cannot work directly over data in your ADLS account. In this case, I would recommend either moving your processed data in ADLS to a SQL Database or SQL Data Warehouse, as this allows for PowerBI to operate over larger amounts of data. You can use Azure Data Factory to move your data, or Polybase if moving data into SQL DW.
**3)** A data lake is still distinct from a data warehouse, and they have separate strengths and weaknesses. The data lake is best for storing your raw or slightly processed data, which may have a variety of formats and schemas. After you process and filter this data using Azure Data Lake Analytics, you can move that data into SQL DW for interactive analytics and data management (but at the cost of inflexibility of schema).
**4)** Depends on your use case. If you plan on continuing to process the data in ADLS, I recommend you output into an ADLS table for greater performance. However, if you need to pass this data into another service, then CSV is a good choice. You can find more outputters on our [GitHub](https://github.com/Azure/usql/tree/master/Examples/DataFormats) such as JSON and XML.
Upvotes: 4 [selected_answer]<issue_comment>username_2: This answer may not be timely, but what I've tried that is more similar to your prior experience is spin up an instance of Azure Analysis Service. You can create a tabular model or mdx model, shove a ton of data into memory and connect to it from power bi. The "only" catch is that it can get pricey quick. One great thing about AAS is that the interface to build a tabular model nearly follows power query and uses dax.
Also I believe these days adla store is basically gone in favor of using blob storage directly, so basically you'd go data --> blob --> dla --> aas --> pbi.
Upvotes: 2
|
2018/03/15
| 811 | 3,034 |
<issue_start>username_0: In R, I wish to extract each csv file in my directory, one at a time, as data frames and perform a sum for each item
For example in Path/data I have the following 4 files:
```
View_Mag_2018_03_01
View_Mag_2018_03_02
View_Mag_2018_03_03
View_Mag_2018_03_04
```
Each file has a dataframe that looks something like this:
```
place number
1 chamber1 1
2 chamber2 1
3 chamber3 2
4 chamber4 4
5 chamber1 1
6 chamber3 3
```
I would like to create 4 data frames (chamber1, chamber2, chamber3, chamber4) with, for each data frame, the sum of number in the second column and the date extracted from the csv file name in the first column:
Example with chamber1 df:
```
date sum
1 O1/03/2018 2
```
Example with chamber2 df:
```
date sum
1 O1/03/2018 1
```
And so on for the 4 created data frames, with each file in the directory adding a row to these 4 data frames.
Thanks for your help
|
2018/03/15
| 717 | 2,542 |
<issue_start>username_0: I have created a grid with some images and want to update the number of columns on certain widths.
For example:
* width > 960px: 3 columns
* width > 660px: 2 columns
* width < 660px: 1 column
I've created a function to update the number of columns, but how can I re-render the grid without pressing F5?
My states:
```
this.state = {
numberOfColumns: 3,
breakpointSm: '660px',
breakpointMd: '960px',
};
```
My function:
```
getNumberOfColumns(){
let smallDevices = window.matchMedia( "(max-width: " + this.state.breakpointSm + ")" );
let mediumDevices = window.matchMedia( "(max-width:" + this.state.breakpointMd + ")" );
let columns;
if (smallDevices.matches) {
columns = 1;
} else if (mediumDevices.matches){
columns = 2;
} else {
columns = 3;
}
this.setState({
numberOfColumns: columns
});
}
```
Component will mount:
```
componentWillMount() {
this.getNumberOfColumns();
}
```<issue_comment>username_1: A simple way to solve the issue is to add an event listener on resize, or simply use plain CSS. What this code actually does is force a re-render every time the window size changes:
```
getNumberOfColumns() {
let columns;
if(window.innerWidth === this.state.breakpointSm) {
columns = 1;
} else if(window.innerWidth === this.state.breakpointMd) {
columns = 2;
}else{
columns = 3;
}
this.setState({
numberOfColumns: columns
});
}
componentDidMount() {
this.getNumberOfColumns();
window.addEventListener("resize", this.getNumberOfColumns.bind(this));
}
componentWillUnmount() {
window.removeEventListener("resize", this.getNumberOfColumns.bind(this));
}
```
And in the worst case scenario, if nothing works, you can use `this.forceUpdate()`, but try to avoid it as much as you can.
I didn't test it yet, but I hope this will help.
Upvotes: 3 [selected_answer]<issue_comment>username_2: I would advise using CSS for this; that approach seems more appropriate to me. Re-rendering the component to do something that can be achieved with media queries in CSS is overkill IMHO.
CSS:
```
//for medium screens
.container {
display: grid;
grid-template-rows: 1fr 1fr;
grid-template-columns: auto;
//for tiny screens
@media screen and (max-width: 660px) {
grid-template-rows: 1fr;
}
//for big screens
@media screen and (min-width: 960px) {
grid-template-rows: 1fr 1fr 1fr;
}
}
```
Upvotes: 1
|
2018/03/15
| 848 | 3,406 |
<issue_start>username_0: I'm working on a custom HTML class, which can be used as an additional type of form that supports any element matching the `[name]` and `[value]` attributes.
However, when extending a class, I found out that I couldn't get the pseudo-selector `:invalid` working.
```
class HTMLInlineInputElement extends HTMLElement{
constructor(){
super();
this.addEventListener('input', function(event){
this.value = this.innerHTML;
})
}
connectedCallback(){}
get name(){
return this.getAttribute('name');
}
set name(value){
return this.setAttribute('name', value);
}
get value(){
return this.getAttribute('value');
}
set value(value){
return this.setAttribute('value', value);
}
get valid(){
return this.validity.valid;
}
set valid(value){
return this.validity.valid = value;
}
checkValidity(){
if( this.hasAttribute('required') && this.value == null || this.value.length == 0 ){
console.log('req');
this.valid = false;
} else if( this.hasAttribute('pattern') && this.value.match( this.getAttribute('pattern') ) ){
console.log('patt');
this.valid = false;
}
}
}
if( typeof customElements !== 'undefined' ){
customElements.define('inline-form', HTMLInlineFormElement);
customElements.define('inline-input', HTMLInlineInputElement);
} else {
document.createElement('inline-form', HTMLInlineFormElement);
document.createElement('inline-input', HTMLInlineInputElement);
}
```
In a nutshell: I want to add the `HTMLElement.invalid = true;` functionality to my class so I can use the `:invalid` selector in CSS. What can I do to add `:is-selectors` to my new class?
|
2018/03/15
| 957 | 3,382 |
<issue_start>username_0: * Jmeter version 4.0 r1823414,
* Selenium/WebDriver Support plugin version 2.3,
* java version 1.8.0_161.
While working with the *JMeter WebDriver* sampler I was really impressed by the things it can do (quickly creating scripts for UI / performance testing). However, I wasn't able to utilize the `List` (from `java.util`) interface while locating several web elements on the page. I did, as suggested in every *WebDriver* plugin tutorial, import the necessary packages `java.util.*;` and `java.lang.util.*;`.
In the Log Viewer there is an error
>
> messageERROR c.g.j.p.w.s.WebDriverSampler: In file: inline evaluation of: ``import java.util.*; import java.lang.*; import org.openqa.selenium.\*; import org . . . '' Encountered "=" at line 57, column 38.
>
>
>
where my line 57 looks like this
```
List<WebElement> deleteLinks = WDS.browser.findElements(By.xpath("${myEnumeratorDeclaredVariable}"));
```
According to the [Selenium docs](https://seleniumhq.github.io/selenium/docs/api/java/), the method `findElements(By by)` has a signature of `java.util.List<WebElement>`. My question today is: are there any specifically designed limitations in the WebDriver sampler plugin that prohibit using some interfaces and classes from the native Java world?
Can I, as usual, create classes and methods inside the sampler? Please advise if someone has run into these problems before.
**P.S.** <NAME> was accepted. Please read comments below his response. If you are planning to use Java as your programming language for WebDriver sampler, be advised that it's a BeanShell interpretation of the Java, not the vanilla Java itself. For more modern features of the language (versions 6,7 and 8) please switch to groovy as an option as it supports 99% of the modern Java code.<issue_comment>username_1: If you use last version of Webdriver Sampler you need to select Groovy to be able to use the syntax you’re using.
Otherwise, by default it will use Javascript Rhino.
If issue persists, please show all your code.
Upvotes: 0 <issue_comment>username_2: Depending on what language you're using:
1. In case of default JavaScript you need to use JavaScript syntax like:
```
var deleteLinks = WDS.browser.findElements(org.openqa.selenium.By.xpath("${myEnumeratorDeclaredVariable}"));
```
Demo:
[](https://i.stack.imgur.com/Q2iWL.png)
2. In case of Java you don't get "vanilla" Java, you get [Beanshell](http://www.beanshell.org/) interpreter which doesn't support the [diamond operator](https://www.javaworld.com/article/2074080/core-java/jdk-7--the-diamond-operator.html) so you need to remove it like:
```
List deleteLinks = WDS.browser.findElements(org.openqa.selenium.By.xpath("${myEnumeratorDeclaredVariable}"));
```
[](https://i.stack.imgur.com/0L5iq.png)
3. Don't reference JMeter functions and/or variables in script like `"${myEnumeratorDeclaredVariable}"`, go for `WDS.vars` instead like:
```
List deleteLinks = WDS.browser.findElements(org.openqa.selenium.By.xpath(WDS.vars.get("myEnumeratorDeclaredVariable")));
```
More information: [The WebDriver Sampler: Your Top 10 Questions Answered](https://www.blazemeter.com/blog/webdriver-sampler-your-top-10-questions-answered)
Upvotes: 2 [selected_answer]
|
2018/03/15
| 7,273 | 16,940 |
<issue_start>username_0: Good afternoon,
I'm new to VSTS Build Management. I'm attempting to take our current solution (which builds fine on our dev machines) and get a successful build within VSTS Build Manager. I'm failing horribly so far and would like just a direction I should be looking in, as I bet this is some simple configuration that is missing either within VSTS or within my solution.
I have the following structure in source control
1. **$/Core/Core/Solution** <- .sln lives here
2. **$/Core/Core/Solution/assemblies** <- 3rd party DLL here
3. **$/Core/Core/Solution/Project/** <- .csproj lives here
My projects reference the DLL in the assemblies folder which is checked into source control and sits at the root with the .sln file
In my .csproj file I can verify that it contains the following entry (I'm going to focus on one of the DLLs I'm trying to reference, however multiple DLLs in the assemblies folder are giving me trouble)
```
<HintPath>..\assemblies\ScientiaMobile.WurflCloud.dll</HintPath>
```
When taking a look at the "Get Sources" logs I see the following for the assemblies folder:
```
> 2018-03-18T00:27:00.1610080Z D:\a\3\s\assemblies:
> 2018-03-18T00:27:00.1610395Z Getting README.md
```
There are about a dozen other DLLs in that folder that it **does not** seem to be pulling down from source control (unless I am missing something)
I have even attempted to provide MSBuild Arguments for the Build Solution phase of the VSTS Build:
>
> /p:DeployOnBuild=true /p:WebPublishMethod=Package
> /p:PackageAsSingleFile=true /p:SkipInvalidConfigurations=true
> /p:PackageLocation="$(build.artifactstagingdirectory)\"
> /p:ReferencePath="..\assemblies"
>
>
>
Still I get the following error:
>
> 018-03-18T00:28:28.2671124Z ##[error]Project\Global\Wurfl.cs(5,7):
> Error CS0246: The type or namespace name 'ScientiaMobile' could not be
> found (are you missing a using directive or an assembly reference?)
>
>
> When looking at the Build Solution logs i do see many attempts to
> locate the reference:
>
>
> 2018-03-18T00:28:22.7512305Z For SearchPath "..\assemblies".
> 2018-03-18T00:28:22.7512558Z Considered
> "..\assemblies\ScientiaMobile.WurflCloud.winmd", but it didn't exist.
> 2018-03-18T00:28:22.7512735Z Considered
> "..\assemblies\ScientiaMobile.WurflCloud.dll", but it didn't exist.
> 2018-03-18T00:28:22.7512882Z Considered
> "..\assemblies\ScientiaMobile.WurflCloud.exe", but it didn't exist.
> 2018-03-18T00:28:22.7513015Z For SearchPath
> "{HintPathFromItem}". 2018-03-18T00:28:22.7513176Z
>
> Considered "..\assemblies\ScientiaMobile.WurflCloud.dll", but it
> didn't exist. 2018-03-18T00:28:22.7513310Z For SearchPath
> "{TargetFrameworkDirectory}". 2018-03-18T00:28:22.7513474Z
>
> Considered "C:\Program Files (x86)\Reference
> Assemblies\Microsoft\Framework.NETFramework\v4.6.2\ScientiaMobile.WurflCloud.winmd",
> but it didn't exist. 2018-03-18T00:28:22.7513695Z Considered
> "C:\Program Files (x86)\Reference
> Assemblies\Microsoft\Framework.NETFramework\v4.6.2\ScientiaMobile.WurflCloud.dll",
> but it didn't exist. 2018-03-18T00:28:22.7514373Z Considered
> "C:\Program Files (x86)\Reference
> Assemblies\Microsoft\Framework.NETFramework\v4.6.2\ScientiaMobile.WurflCloud.exe",
> but it didn't exist. 2018-03-18T00:28:22.7514564Z Considered
> "C:\Program Files (x86)\Reference
> Assemblies\Microsoft\Framework.NETFramework\v4.6.2\Facades\ScientiaMobile.WurflCloud.winmd",
> but it didn't exist. 2018-03-18T00:28:22.7514733Z Considered
> "C:\Program Files (x86)\Reference
> Assemblies\Microsoft\Framework.NETFramework\v4.6.2\Facades\ScientiaMobile.WurflCloud.dll",
> but it didn't exist. 2018-03-18T00:28:22.7515106Z Considered
> "C:\Program Files (x86)\Reference
> Assemblies\Microsoft\Framework.NETFramework\v4.6.2\Facades\ScientiaMobile.WurflCloud.exe",
> but it didn't exist. 2018-03-18T00:28:22.7515281Z For
> SearchPath "{AssemblyFoldersFromConfig:C:\Program Files
> (x86)\Microsoft Visual
> Studio\2017\Enterprise\MSBuild\15.0\Bin\AssemblyFolders.config,v4.6.2}".
> 2018-03-18T00:28:22.7515458Z Considered "C:\Program Files
> (x86)\Microsoft Visual
> Studio\2017\Enterprise\MSBuild\15.0\Bin\ScientiaMobile.WurflCloud.winmd",
> but it didn't exist. 2018-03-18T00:28:22.7515656Z Considered
> "C:\Program Files (x86)\Microsoft Visual
> Studio\2017\Enterprise\MSBuild\15.0\Bin\ScientiaMobile.WurflCloud.dll",
> but it didn't exist. 2018-03-18T00:28:22.7515824Z Considered
> "C:\Program Files (x86)\Microsoft Visual
> Studio\2017\Enterprise\MSBuild\15.0\Bin\ScientiaMobile.WurflCloud.exe",
> but it didn't exist. 2018-03-18T00:28:22.7516021Z Considered
> "C:\Program Files (x86)\Microsoft Visual
> Studio\2017\Enterprise\Common7\IDE\Extensions\Microsoft\SqlDb\ScientiaMobile.WurflCloud.winmd",
> but it didn't exist. 2018-03-18T00:28:22.7516201Z Considered
> "C:\Program Files (x86)\Microsoft Visual
> Studio\2017\Enterprise\Common7\IDE\Extensions\Microsoft\SqlDb\ScientiaMobile.WurflCloud.dll",
> but it didn't exist. 2018-03-18T00:28:22.7516385Z Considered
> "C:\Program Files (x86)\Microsoft Visual
> Studio\2017\Enterprise\Common7\IDE\Extensions\Microsoft\SqlDb\ScientiaMobile.WurflCloud.exe",
> but it didn't exist. 2018-03-18T00:28:22.7516593Z Considered
> "C:\Program Files (x86)\Microsoft Visual
> Studio\2017\Enterprise\Common7\IDE\PublicAssemblies\ScientiaMobile.WurflCloud.winmd",
> but it didn't exist. 2018-03-18T00:28:22.7516766Z Considered
> "C:\Program Files (x86)\Microsoft Visual
> Studio\2017\Enterprise\Common7\IDE\PublicAssemblies\ScientiaMobile.WurflCloud.dll",
> but it didn't exist. 2018-03-18T00:28:22.7516958Z Considered
> "C:\Program Files (x86)\Microsoft Visual
> Studio\2017\Enterprise\Common7\IDE\PublicAssemblies\ScientiaMobile.WurflCloud.exe",
> but it didn't exist. 2018-03-18T00:28:22.7517133Z Considered
> "C:\Program Files (x86)\Microsoft Visual
> Studio\2017\Enterprise\Common7\IDE\ReferenceAssemblies\v4.5\ScientiaMobile.WurflCloud.winmd",
> but it didn't exist. 2018-03-18T00:28:22.7517505Z Considered
> "C:\Program Files (x86)\Microsoft Visual
> Studio\2017\Enterprise\Common7\IDE\ReferenceAssemblies\v4.5\ScientiaMobile.WurflCloud.dll",
> but it didn't exist. 2018-03-18T00:28:22.7517737Z Considered
> "C:\Program Files (x86)\Microsoft Visual
> Studio\2017\Enterprise\Common7\IDE\ReferenceAssemblies\v4.5\ScientiaMobile.WurflCloud.exe",
> but it didn't exist. 2018-03-18T00:28:22.7517914Z Considered
> "C:\Program Files (x86)\Microsoft Visual
> Studio\2017\Enterprise\VSSDK\VisualStudioIntegration\Common\Assemblies\v4.0\ScientiaMobile.WurflCloud.winmd",
> but it didn't exist. 2018-03-18T00:28:22.7518118Z Considered
> "C:\Program Files (x86)\Microsoft Visual
> Studio\2017\Enterprise\VSSDK\VisualStudioIntegration\Common\Assemblies\v4.0\ScientiaMobile.WurflCloud.dll",
> but it didn't exist. 2018-03-18T00:28:22.7518298Z Considered
> "C:\Program Files (x86)\Microsoft Visual
> Studio\2017\Enterprise\VSSDK\VisualStudioIntegration\Common\Assemblies\v4.0\ScientiaMobile.WurflCloud.exe",
> but it didn't exist. 2018-03-18T00:28:22.7518536Z Considered
> "C:\Program Files (x86)\Microsoft Visual
> Studio\2017\Enterprise\Common7\IDE\PublicAssemblies\ScientiaMobile.WurflCloud.winmd",
> but it didn't exist. 2018-03-18T00:28:22.7518711Z Considered
> "C:\Program Files (x86)\Microsoft Visual
> Studio\2017\Enterprise\Common7\IDE\PublicAssemblies\ScientiaMobile.WurflCloud.dll",
> but it didn't exist. 2018-03-18T00:28:22.7518877Z Considered
> "C:\Program Files (x86)\Microsoft Visual
> Studio\2017\Enterprise\Common7\IDE\PublicAssemblies\ScientiaMobile.WurflCloud.exe",
> but it didn't exist. 2018-03-18T00:28:22.7519067Z Considered
> "C:\Program Files (x86)\Microsoft Visual
> Studio\2017\Enterprise\Common7\IDE\ReferenceAssemblies\v4.5\ScientiaMobile.WurflCloud.winmd",
> but it didn't exist. 2018-03-18T00:28:22.7519238Z Considered
> "C:\Program Files (x86)\Microsoft Visual
> Studio\2017\Enterprise\Common7\IDE\ReferenceAssemblies\v4.5\ScientiaMobile.WurflCloud.dll",
> but it didn't exist. 2018-03-18T00:28:22.7519438Z Considered
> "C:\Program Files (x86)\Microsoft Visual
> Studio\2017\Enterprise\Common7\IDE\ReferenceAssemblies\v4.5\ScientiaMobile.WurflCloud.exe",
> but it didn't exist. 2018-03-18T00:28:22.7519615Z Considered
> "C:\Program Files (x86)\Microsoft Visual
> Studio\2017\Enterprise\Common7\IDE\ReferenceAssemblies\v4.0\ScientiaMobile.WurflCloud.winmd",
> but it didn't exist. 2018-03-18T00:28:22.7519809Z Considered
> "C:\Program Files (x86)\Microsoft Visual
> Studio\2017\Enterprise\Common7\IDE\ReferenceAssemblies\v4.0\ScientiaMobile.WurflCloud.dll",
> but it didn't exist. 2018-03-18T00:28:22.7520771Z Considered
> "C:\Program Files (x86)\Microsoft Visual
> Studio\2017\Enterprise\Common7\IDE\ReferenceAssemblies\v4.0\ScientiaMobile.WurflCloud.exe",
> but it didn't exist. 2018-03-18T00:28:22.7520990Z Considered
> "C:\Program Files (x86)\Microsoft Visual
> Studio\2017\Enterprise\Common7\IDE\ReferenceAssemblies\v4.0\ScientiaMobile.WurflCloud.winmd",
> but it didn't exist. 2018-03-18T00:28:22.7521223Z Considered
> "C:\Program Files (x86)\Microsoft Visual
> Studio\2017\Enterprise\Common7\IDE\ReferenceAssemblies\v4.0\ScientiaMobile.WurflCloud.dll",
> but it didn't exist. 2018-03-18T00:28:22.7521428Z Considered
> "C:\Program Files (x86)\Microsoft Visual
> Studio\2017\Enterprise\Common7\IDE\ReferenceAssemblies\v4.0\ScientiaMobile.WurflCloud.exe",
> but it didn't exist. 2018-03-18T00:28:22.7521650Z Considered
> "C:\Program Files (x86)\Microsoft Visual
> Studio\2017\Enterprise\Common7\IDE\ReferenceAssemblies\v2.0\ScientiaMobile.WurflCloud.winmd",
> but it didn't exist. 2018-03-18T00:28:22.7521847Z Considered
> "C:\Program Files (x86)\Microsoft Visual
> Studio\2017\Enterprise\Common7\IDE\ReferenceAssemblies\v2.0\ScientiaMobile.WurflCloud.dll",
> but it didn't exist. 2018-03-18T00:28:22.7522044Z Considered
> "C:\Program Files (x86)\Microsoft Visual
> Studio\2017\Enterprise\Common7\IDE\ReferenceAssemblies\v2.0\ScientiaMobile.WurflCloud.exe",
> but it didn't exist. 2018-03-18T00:28:22.7522275Z Considered
> "C:\Program Files (x86)\Microsoft Visual
> Studio\2017\Enterprise\VSSDK\VisualStudioIntegration\Common\Assemblies\v2.0\ScientiaMobile.WurflCloud.winmd",
> but it didn't exist. 2018-03-18T00:28:22.7522567Z Considered
> "C:\Program Files (x86)\Microsoft Visual
> Studio\2017\Enterprise\VSSDK\VisualStudioIntegration\Common\Assemblies\v2.0\ScientiaMobile.WurflCloud.dll",
> but it didn't exist. 2018-03-18T00:28:22.7522803Z Considered
> "C:\Program Files (x86)\Microsoft Visual
> Studio\2017\Enterprise\VSSDK\VisualStudioIntegration\Common\Assemblies\v2.0\ScientiaMobile.WurflCloud.exe",
> but it didn't exist. 2018-03-18T00:28:22.7523005Z Considered
> "C:\Program Files (x86)\Microsoft Visual
> Studio\2017\Enterprise\Common7\IDE\ReferenceAssemblies\v2.0\ScientiaMobile.WurflCloud.winmd",
> but it didn't exist. 2018-03-18T00:28:22.7523227Z Considered
> "C:\Program Files (x86)\Microsoft Visual
> Studio\2017\Enterprise\Common7\IDE\ReferenceAssemblies\v2.0\ScientiaMobile.WurflCloud.dll",
> but it didn't exist. 2018-03-18T00:28:22.7523603Z Considered
> "C:\Program Files (x86)\Microsoft Visual
> Studio\2017\Enterprise\Common7\IDE\ReferenceAssemblies\v2.0\ScientiaMobile.WurflCloud.exe",
> but it didn't exist. 2018-03-18T00:28:22.7523956Z For
> SearchPath
> "{Registry:Software\Microsoft.NETFramework,v4.6.2,AssemblyFoldersEx}".
> 2018-03-18T00:28:22.7524102Z Considered AssemblyFoldersEx
> locations. 2018-03-18T00:28:22.7524207Z For SearchPath
> "{AssemblyFolders}". 2018-03-18T00:28:22.7524340Z Considered
> "C:\Program Files (x86)\Reference
> Assemblies\Microsoft\Framework\v3.0\ScientiaMobile.WurflCloud.winmd",
> but it didn't exist. 2018-03-18T00:28:22.7524519Z Considered
> "C:\Program Files (x86)\Reference
> Assemblies\Microsoft\Framework\v3.0\ScientiaMobile.WurflCloud.dll",
> but it didn't exist. 2018-03-18T00:28:22.7524675Z Considered
> "C:\Program Files (x86)\Reference
> Assemblies\Microsoft\Framework\v3.0\ScientiaMobile.WurflCloud.exe",
> but it didn't exist. 2018-03-18T00:28:22.7524837Z Considered
> "C:\Program Files (x86)\Microsoft SQL
> Server\130\SDK\Assemblies\ScientiaMobile.WurflCloud.winmd", but it
> didn't exist. 2018-03-18T00:28:22.7525200Z Considered
> "C:\Program Files (x86)\Microsoft SQL
> Server\130\SDK\Assemblies\ScientiaMobile.WurflCloud.dll", but it
> didn't exist. 2018-03-18T00:28:22.7525357Z Considered
> "C:\Program Files (x86)\Microsoft SQL
> Server\130\SDK\Assemblies\ScientiaMobile.WurflCloud.exe", but it
> didn't exist. 2018-03-18T00:28:22.7525532Z Considered
> "C:\Program Files\IIS\Microsoft Web Deploy
> V3\ScientiaMobile.WurflCloud.winmd", but it didn't exist.
> 2018-03-18T00:28:22.7525682Z Considered "C:\Program
> Files\IIS\Microsoft Web Deploy V3\ScientiaMobile.WurflCloud.dll", but
> it didn't exist. 2018-03-18T00:28:22.7525829Z Considered
> "C:\Program Files\IIS\Microsoft Web Deploy
> V3\ScientiaMobile.WurflCloud.exe", but it didn't exist.
> 2018-03-18T00:28:22.7526012Z Considered "C:\Program Files
> (x86)\Reference
> Assemblies\Microsoft\Framework\v3.5\ScientiaMobile.WurflCloud.winmd",
> but it didn't exist. 2018-03-18T00:28:22.7526180Z Considered
> "C:\Program Files (x86)\Reference
> Assemblies\Microsoft\Framework\v3.5\ScientiaMobile.WurflCloud.dll",
> but it didn't exist. 2018-03-18T00:28:22.7526341Z Considered
> "C:\Program Files (x86)\Reference
> Assemblies\Microsoft\Framework\v3.5\ScientiaMobile.WurflCloud.exe",
> but it didn't exist. 2018-03-18T00:28:22.7526516Z Considered
> "C:\Program Files (x86)\WiX Toolset
> v3.11\SDK\ScientiaMobile.WurflCloud.winmd", but it didn't exist.
> 2018-03-18T00:28:22.7526664Z Considered "C:\Program Files
> (x86)\WiX Toolset v3.11\SDK\ScientiaMobile.WurflCloud.dll", but it
> didn't exist. 2018-03-18T00:28:22.7526836Z Considered
> "C:\Program Files (x86)\WiX Toolset
> v3.11\SDK\ScientiaMobile.WurflCloud.exe", but it didn't exist.
> 2018-03-18T00:28:22.7526964Z For SearchPath "{GAC}".
> 2018-03-18T00:28:22.7527080Z Considered
> "ScientiaMobile.WurflCloud", which was not found in the GAC.
> 2018-03-18T00:28:22.7527457Z For SearchPath "{RawFileName}".
> 2018-03-18T00:28:22.7527577Z Considered treating
> "ScientiaMobile.WurflCloud" as a file name, but it didn't exist.
> 2018-03-18T00:28:22.7527691Z For SearchPath "bin\Release\".
> 2018-03-18T00:28:22.7527807Z Considered
> "bin\Release\ScientiaMobile.WurflCloud.winmd", but it didn't exist.
> 2018-03-18T00:28:22.7527957Z Considered
> "bin\Release\ScientiaMobile.WurflCloud.dll", but it didn't exist.
> 2018-03-18T00:28:22.7528082Z Considered
> "bin\Release\ScientiaMobile.WurflCloud.exe", but it didn't exist.
>
>
>
So obviously the build fails.
My questions are: what am I missing to get a reference pulled in correctly when dealing with a 3rd-party DLL that sits in a folder outside of the project but is checked in to source control? Specifically, I need to know what to do to get this going in the VSTS build, as this setup works just fine in Visual Studio. A broader question I have is: what is the best practice for using and referencing 3rd-party DLLs so that the VSTS build process finds what it needs?
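For reference, a typical shape of such a file reference in a .csproj (illustrative only; the relative path below is made up and would need to point at the checked-in folder):
```
<Reference Include="ScientiaMobile.WurflCloud">
  <!-- Relative path from the project file to the checked-in binaries folder (hypothetical) -->
  <HintPath>..\..\lib\ScientiaMobile.WurflCloud.dll</HintPath>
  <Private>True</Private>
</Reference>
```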
Thanks for any help you can give.<issue_comment>username_1: If you use the latest version of the WebDriver Sampler, you need to select Groovy to be able to use the syntax you're using.
Otherwise, by default it will use JavaScript (Rhino).
If the issue persists, please show all your code.
Upvotes: 0 <issue_comment>username_2: Depending on what language you're using:
1. In case of default JavaScript you need to use JavaScript syntax like:
```
var deleteLinks = WDS.browser.findElements(org.openqa.selenium.By.xpath("${myEnumeratorDeclaredVariable}"));
```
Demo:
[](https://i.stack.imgur.com/Q2iWL.png)
2. In case of Java you don't get "vanilla" Java; you get the [Beanshell](http://www.beanshell.org/) interpreter, which doesn't support the [diamond operator](https://www.javaworld.com/article/2074080/core-java/jdk-7--the-diamond-operator.html), so you need to remove it, like:
```
List deleteLinks = WDS.browser.findElements(org.openqa.selenium.By.xpath("${myEnumeratorDeclaredVariable}"));
```
[](https://i.stack.imgur.com/0L5iq.png)
3. Don't reference JMeter functions and/or variables in script like `"${myEnumeratorDeclaredVariable}"`, go for `WDS.vars` instead like:
```
List deleteLinks = WDS.browser.findElements(org.openqa.selenium.By.xpath(WDS.vars.get("myEnumeratorDeclaredVariable")));
```
More information: [The WebDriver Sampler: Your Top 10 Questions Answered](https://www.blazemeter.com/blog/webdriver-sampler-your-top-10-questions-answered)
Upvotes: 2 [selected_answer]
|
2018/03/15
| 692 | 2,466 |
<issue_start>username_0: I noticed some strange behaviour from Chrome on a simple GET request to a public API. I tested the same code in Firefox and got no errors, but Chrome keeps throwing errors. Can someone help me understand why my GET call isn't working in Chrome?
```
import React, { Component } from 'react';
import './App.css';
import {getPokemon} from './services';
class App extends Component {
constructor(props) {
super(props);
this.state = {
name : "",
}
}
onClick(){ getPokemon().then((response) => { this.setState({ name: response.name })
})
}
render() {
return (
this.onClick(e)}> Click me
{this.state.name}
------------------
);
}
}
export default App;
```
Fetch call:
```
export const getPokemon = () => {
return fetch(`https://pokeapi.co/api/v2/pokemon/blastoise`).then((response) => {
if(response.statusText === 'OK') {
return response.json();
}
throw new Error('Network response was not ok.');
})
}
```
React & Console error:
[React Error](https://i.stack.imgur.com/O74p9.png)
By the way, I have already removed all my Chrome files and reinstalled them.
Any tips?<issue_comment>username_1: [Response.statusText](https://developer.mozilla.org/en-US/docs/Web/API/Response/statusText) is experimental and the `Response` in Chrome has `statusText` property as an empty string. A simple change would be to update your conditional statement to:
```
if(response.status === 200)
```
Upvotes: 1 <issue_comment>username_2: This is what it returns
```
{
body: ReadableStream,
bodyUsed: false,
headers: {},
ok: true,
status: 200,
statusText: "",
type: "cors",
url: "https://pokeapi.co/api/v2/pokemon/blastoise/"
}
```
So you are probably looking for either `ok` or `status`
Upvotes: 0 <issue_comment>username_3: First of all, you are using the fetch API, and you should be aware that you can't rely on it failing for HTTP errors: fetch only fails when a network error is encountered or when something like CORS is misconfigured on the server side. Redirects and all other HTTP errors still resolve as success.
Please read: <https://developer.mozilla.org/en-US/docs/Web/API/Fetch_API/Using_Fetch>
The right way to check for success with fetch is shown below: check `response.ok`.
```
.then(function(response) {
  if (response.ok) {
    console.log("success");
    return response;
  }
  throw new Error('Network response was not ok.');
})
```
Upvotes: 0
|
2018/03/15
| 861 | 2,515 |
<issue_start>username_0: How can I count the total number of elements in a DataFrame, including the elements inside nested sequences, and put the result in a new column?
```
import pandas as pd
x = pd.Series([[1, (2,5,6)], [2, (3,4)], [3, 4], [(5,6), (7,8,9)]], \
              index=range(1, 5))  # x can't reference itself in its own definition, so the length is written out
df = pd.DataFrame({'A': x})
```
I tried with the following code but it gives 2 in each row:
```
df['Length'] = df['A'].apply(len)
print(df)
A Length
1 [1, (2, 5, 6)] 2
2 [2, (3, 4)] 2
3 [3, 4] 2
4 [(5, 6), (7, 8, 9)] 2
```
However, what I want to get is as follow:
```
A Length
1 [1, (2, 5, 6)] 4
2 [2, (3, 4)] 3
3 [3, 4] 2
4 [(5, 6), (7, 8, 9)] 5
```
thanks<issue_comment>username_1: use `itertools`
```
import itertools
# note: this assumes every element of x is itself iterable
df['Length'] = df['A'].apply(lambda x: len(list(itertools.chain(*x))))
```
Upvotes: 0 <issue_comment>username_2: You could try using this function, it's recursive but it works:
```
def recursive_len(item):
try:
iter(item)
return sum(recursive_len(subitem) for subitem in item)
except TypeError:
return 1
```
Then just call the apply function this way:
```
df['Length'] = df['A'].apply(recursive_len)
```
Upvotes: 0 <issue_comment>username_3: Given:
```
import pandas as pd
x = pd.Series([[1, (2,5,6)], [2, (3,4)], [3, 4], [(5,6), (7,8,9)]])
df = pd.DataFrame({'A': x})
```
You can write a recursive generator that will yield `1` for each nested element that is not iterable. Something along these lines:
```
import collections
def glen(LoS):
def iselement(e):
return not(isinstance(e, collections.Iterable) and not isinstance(e, str))
for el in LoS:
if iselement(el):
yield 1
else:
for sub in glen(el): yield sub
df['Length'] = df['A'].apply(lambda e: sum(glen(e)))
```
Yielding:
```
>>> df
A Length
0 [1, (2, 5, 6)] 4
1 [2, (3, 4)] 3
2 [3, 4] 2
3 [(5, 6), (7, 8, 9)] 5
```
That will work in Python 2 or 3. With Python 3.3 or later, you can use `yield from` to replace the loop:
```
def glen(LoS):
def iselement(e):
return not(isinstance(e, collections.Iterable) and not isinstance(e, str))
for el in LoS:
if iselement(el):
yield 1
else:
yield from glen(el)
```
Upvotes: 2 [selected_answer]
|
2018/03/15
| 575 | 1,888 |
<issue_start>username_0: ```js
var isTopper = false
var marksobtained = window.prompt('please enter the marks obtained ')
if (marksobtained == undefined || marksobtained == null || marksobtained == '') {
alert('invalid')
} else if (marksobtained < 0 || marksobtained > 100) {
alert('enter between 0 to 100')
} else {
marksobtained = Number(marksobtained)
var totalmarks = 100
var percentage = (marksobtained / totalmarks) * 100
if (percentage > 90) {
isTopper = true
} else {
isTopper = false
}
alert(isTopper)
}
```<issue_comment>username_1: I don't know where your problem is. Please provide some details. `alert('invalid')` is working correctly and is called when the value is an empty string.
Instead of
```
if (marksobtained == undefined || marksobtained == null || marksobtained == '')
```
You can simply use
```
if (!marksobtained) { ...
```
Upvotes: -1 <issue_comment>username_2: I don't know why your code is not working for you (it works for us). I just made a few modifications to your code, such as replacing `if (marksobtained == undefined || marksobtained == null || marksobtained == '')` with `if (!marksobtained)`, and
since you are asking for only one subject's marks you don't have to calculate a percentage; you can decide directly from the marks whether he/she is a topper or not (I say this only because you are asking for a single subject's marks, which are between 1 and 100).
```js
var isTopper = false
var marksobtained = window.prompt('please enter the marks obtained ')
if (!marksobtained || isNaN(marksobtained)) {
alert('invalid')
} else if (marksobtained < 0 || marksobtained > 100) {
alert('enter between 0 to 100')
} else {
marksobtained = Number(marksobtained);
if (marksobtained > 90) {
isTopper = true
} else {
isTopper = false
}
alert(isTopper)
}
```
Upvotes: 0
|
2018/03/15
| 343 | 1,420 |
<issue_start>username_0: Is there a way to achieve time traveling in Clojure? For example, if I have a vector (which is internally a tree implemented as a persistent data structure), is there a way to travel back in time and get previous versions of that vector? Kind of what Datomic does at the database level: since Clojure and Datomic share many concepts, including facts being immutable and implemented as persistent data structures, technically the older version of the vector is still there. So I was wondering if time traveling and getting previous versions is possible in plain Clojure, similarly to what Datomic does at the database level.<issue_comment>username_1: Yes, but you need to keep a reference to it in order to access it, and in order to prevent it from being garbage collected. Clojurists often implement undo/redo in this way; all you need to do is maintain a list of historical states of your data, and then you can trivially step backward.
<NAME> has described this approach [here](https://swannodette.github.io/2013/12/31/time-travel), and you can find a more detailed example and explanation [here](https://macwright.org/2016/02/07/undo-redo-clojurescript.html).
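For illustration, a minimal sketch (my own, not from the linked posts) of keeping every version of a vector in an atom; the names `history`, `commit!` and `undo!` are made up:
```
(def history (atom [[]]))            ; start with one empty vector as the initial state

(defn commit! [new-state]
  (swap! history conj new-state))    ; remember each new version

(defn undo! []
  (swap! history #(if (> (count %) 1) (pop %) %)))

(commit! (conj (peek @history) 1))   ; history now holds [] and [1]
(peek @history)                      ;=> [1]
(undo!)
(peek @history)                      ;=> []
```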
Upvotes: 3 [selected_answer]<issue_comment>username_2: Datomic is plain Clojure. You can use Datomic as a Clojure library either with an in-memory database (for version tracking) or with no database at all.
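A minimal sketch of that idea (assuming the Datomic library is on the classpath; the in-memory URI below is made up):
```
(require '[datomic.api :as d])

(def uri "datomic:mem://version-demo")   ; illustrative in-memory database
(d/create-database uri)
(def conn (d/connect uri))

;; Every value returned by d/db is an immutable snapshot you can hold on to,
;; and d/as-of gives you the database as of an earlier point in time.
(def db-now (d/db conn))
```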
Upvotes: 0
|
2018/03/15
| 281 | 1,057 |
<issue_start>username_0: I have a table with a `Transaction Date` column; the first row is the `CheckIn` and the second one is the `CheckOut` if we order the results by date ascending.
I need to pass the second row value to another column named `Checkout`. This table has at least 1000 records.
|
2018/03/15
| 794 | 3,012 |
<issue_start>username_0: In typescript if I have the following class:
```
class Hello{
public world: string;
}
```
Why is it that when I type-assert the parsed JSON as `Hello`, `instanceof` returns false?
```
var json: string = `{"world": "world"}`;
var assertedJson: Hello = JSON.parse(json) as Hello;
var hello: Hello = new Hello();
console.log(assertedJson instanceof Hello); // returns false
console.log(hello instanceof Hello); // returns true (as expected)
```<issue_comment>username_1: Type assertion is for the compiler at compile time only; it does not have any effect at runtime. It's the developer's responsibility to make sure that the compile-time type assertion matches the runtime behavior.
`instanceof` is a runtime check; it basically checks that the object was created using a particular class constructor. It's not of much use for JSON objects, because these are created as instances of the built-in `Object` type.
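For completeness, here is a small sketch (my own illustration, not from the linked issue) of a user-defined type guard, which is the usual way to get a runtime check that the compiler also understands:
```
function isHello(value: any): value is Hello {
    // Runtime shape check; the compiler narrows the type when it passes.
    return value != null && typeof value.world === "string";
}

const parsed = JSON.parse('{"world": "world"}');
if (isHello(parsed)) {
    console.log(parsed.world); // typed as Hello inside this branch
}
```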
The answer from TypeScript developers about supporting run-time type checking consistent with type system is:
<https://github.com/Microsoft/TypeScript/issues/2444#issuecomment-85097544>
>
> This has been suggested and discussed many times. We're always trying
> to avoid pushing the type system into the runtime and the number of
> types this actually works for (pretty much just classes and
> primitives) isn't large enough to justify the complexity versus the
> use cases it enables.
>
>
>
Upvotes: 1 <issue_comment>username_2: The reason is that `instanceof` is transpiled 'as is' to JavaScript. If you transpile your code you get:
```js
var Hello = /** @class */ (function () {
function Hello() {
}
return Hello;
}());
var json = "{\"world\": \"world\"}";
var assertedJson = JSON.parse(json);
var hello = new Hello();
console.log(assertedJson instanceof Hello);
console.log(hello instanceof Hello);
```
`a instanceof b` checks that the prototype of `b` is in the prototype chain of `a`. Your `assertedJson` does not fulfill this requirement.
The `as` operator means nothing once transpiled; it is just an annotation.
Upvotes: 3 [selected_answer]<issue_comment>username_3: As others have said, the type cast you do with `as` has no effect on the runtime. But, as a workaround, you can manually set the prototype of the parsed object so that it will be an instance of `Hello`, if you so desire:
```
class Hello {
public world: string;
public sayHello() {
console.log("Hello", this.world);
}
}
var json: string = `{"world": "world"}`;
var assertedJson: Hello = JSON.parse(json) as Hello;
var hello: Hello = new Hello();
console.log(assertedJson instanceof Hello); // <!-- returns false
console.log(hello instanceof Hello); // <!-- returns true (as expected)
// Manual fix:
Object.setPrototypeOf(assertedJson, Hello.prototype);
console.log(assertedJson instanceof Hello); // <!-- returns true
// And doing so will then allow you to access the Hello methods too:
assertedJson.sayHello() // Prints "Hello world"
```
Upvotes: 2
|
2018/03/15
| 791 | 2,726 |
<issue_start>username_0: I currently have Elasticsearch version 6.2.2 and Apache Nifi version 1.5.0 running on the same machine. I'm trying to follow the Nifi example located: <https://community.hortonworks.com/articles/52856/stream-data-into-hive-like-a-king-using-nifi.html> except instead of storing to Hive, I want to store to Elasticsearch.
Initially I tried using the PutElasticsearch5 processor but I was getting the following error on Elasticsearch:
```
Received message from unsupported version: [5.0.0] minimal compatible version is: [5.6.0]
```
When I tried Googling this error message, it seemed like the consensus was to use the PutElasticsearchHttp processor. My Nifi looks like:
[](https://i.stack.imgur.com/kTGjF.png)
And the configuration for the PutElasticsearchHttp processor:
[](https://i.stack.imgur.com/xXUp1.png)
When the flowfile gets to the PutElasticsearchHttp processor, the following error shows up: [](https://i.stack.imgur.com/mwZAs.png)
```
PutElasticSearchHttp failed to insert StandardFlowFileRecord into Elasticsearch due to , transferring to failure.
```
It seems like the reason is blank/null. There also wasn't anything in the Elasticsearch log.
After the ConvertAvroToJson, the data is a JSON array with all of the entries on a single line. Here's a sample value:
```
{"City": "Athens",
"Edition": 1896,
"Sport": "Aquatics",
"sub_sport": "Swimming",
"Athlete": "HAJOS, Alfred",
"country": "HUN",
"Gender": "Men",
"Event": "100m freestyle",
"Event_gender": "M",
"Medal": "Gold"}
```
Any ideas on how to debug/solve this problem? Do I need to create anything in Elasticsearch first? Is my configuration correct?<issue_comment>username_1: The index name cannot contain the / character. Try with a valid index name: e.g. sports.
Upvotes: 0 <issue_comment>username_2: I was able to figure it out. After the ConvertAvroToJSON, the flow file was a single line that contained a JSON Array of JSON indices. Since I wanted to store the individual indices I needed a SplitJSON processor. Now my Nifi looks like this:
[](https://i.stack.imgur.com/pCalX.png)
The configuration of the SplitJson looks like this:
[](https://i.stack.imgur.com/jszGT.png)
Upvotes: 4 [selected_answer]<issue_comment>username_3: I had a similar flow, wherein changing the type to `_doc` did the trick after including `SplitJson`.
Upvotes: 0
|
2018/03/15
| 845 | 2,541 |
<issue_start>username_0: **The problem**
I have the following dictionary in Python 3.6
```
Piko = {}
Piko['Name']='Luke'
```
I am trying to write a function that gives the value of the element if it exists and is set, and gives `None` otherwise.
For example:
* INPUT: `isset(Piko['Name'])` OUTPUT: `Luke`
* INPUT: `isset(Piko['Surname'])` OUTPUT: `None`
**What I have tried**
1st try; based on my know how:
```
def isset1(x):
try:
x
except KeyError:
print(None)
else:
print(x)
```
2nd try; based on [this](https://stackoverflow.com/a/11786240/1008588) answer:
```
def isset2(x):
try:
t=x
except IndexError:
print(None)
```
3rd try; based on [this](https://stackoverflow.com/a/16154204/1008588) answer:
```
def isset3(x):
try:
x
except Exception:
print(None)
else:
print(x)
```
Each of the previous attempts gives me a `KeyError: 'Surname'` error and does not output `None` as I wanted. Can anybody help me understand how to handle the `KeyError` correctly?<issue_comment>username_1: ```
Piko.get('Surname')
Piko.get('Surname', None)
```
are identical and return `None` since "Surname" is not in your dictionary.
For future reference you can quickly discover this from the Python shell (eg ipython) by typing:
>
> In[4]: help(Piko.get)
>
>
>
Which produces:
>
> Help on built-in function get:
>
>
> get(...)
>
>
> D.get(k[,d]) -> D[k] if k in D, else d. d defaults to None.
>
>
>
Upvotes: 3 [selected_answer]<issue_comment>username_2: The exception is happening before it even gets into your `isset` function. When you do this:
```
isset(Piko['Name'])
```
… it's basically the same as doing this:
```
_tmp = Piko['Name']
isset(_tmp)
```
No matter what code you put inside `isset`, it's not going to help, because that function never gets called. The only place you can put the exception handling is one level up, in the function that calls `isset`.
Or, alternatively, you can not try to lookup `dict[key]` to pass into `isset`, and pass the dict and the key as separate parameters:
```
def isset(d, key):
try:
print(d[key])
except KeyError:
print(None)
```
But at this point, you're just duplicating `dict.get` in a clumsier way. You can do this:
```
def isset(d, key):
print(d.get(key, None))
```
… or just scrap `isset` and do this:
```
print(Piko.get('Name', None))
```
Or, since `None` is the default if you don't specify anything:
```
print(Piko.get('Name'))
```
Upvotes: 2
|
2018/03/15
| 1,332 | 4,559 |
<issue_start>username_0: In my app I have a time in milliseconds (between 0 and 60000) divided by a certain value (in the current example it's 60000). Then I multiply that by 100.
However, when printing the answer into the log, it always returns 0.
This is my code:
```
countDownTimer = new CountDownTimer(activityDuration, 1000) {
@Override
public void onTick(long timeLeftInMillis) {
int minutes = (int) (timeLeftInMillis / 1000) / 60;
int seconds = (int) (timeLeftInMillis / 1000) % 60;
String timeLeftFormatted = String.format(Locale.getDefault(), "%02d:%02d", minutes, seconds);
countDownTimerTextView.setText(timeLeftFormatted);
System.out.println("L: " + timeLeftInMillis); // Returns correctly. Values between 0 and 60000
System.out.println("Activity Duration: " + activityDuration); // Returns correctly. 60000
System.out.println("Progress: " + (timeLeftInMillis / activityDuration) * 100); // Returns 0
}
@Override
public void onFinish() {
}
}.start();
```<issue_comment>username_1: `timeLeftInMillis / activityDuration` is integer division, so the fractional part is discarded, which is why you get 0.
You need to cast to a float or double before doing the division, i.e.
`((double) timeLeftInMillis / activityDuration) * 100`
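For illustration, a sketch of how that might look inside `onTick` (using the variable names from the question):
```
@Override
public void onTick(long timeLeftInMillis) {
    // Cast first so the division happens in floating point, then scale to a percentage.
    double progress = ((double) timeLeftInMillis / activityDuration) * 100;
    System.out.println("Progress: " + progress);
}
```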
Upvotes: 1 [selected_answer]<issue_comment>username_2: [The answer by username_1](https://stackoverflow.com/a/49309411/5772882) gives you a correct solution. I should like to contribute the modern version. Don’t use primitive numbers for durations, use the `Duration` class of java.time (the modern Java date and time API). This means, first declare `activityDuration` a `Duration`, for example:
```
final Duration activityDuration = Duration.ofMinutes(1);
```
IMHO this is already much clearer to read as 1 minute than your value of 60000.
```
Duration timeLeft = Duration.ofMillis(timeLeftInMillis);
int minutes = (int) timeLeft.toMinutes();
int seconds = timeLeft.toSecondsPart();
String timeLeftFormatted = String.format(Locale.getDefault(), "%02d:%02d", minutes, seconds);
countDownTimerTextView.setText(timeLeftFormatted);
System.out.println("L: " + timeLeftInMillis); // Returns correctly. Values between 0 and 60000
System.out.println("Activity Duration: " + activityDuration); // Returns correctly. 60000
System.out.println("Progress: " + timeLeft.multipliedBy(100).dividedBy(activityDuration));
```
A possible output is:
```
L: 25555
Activity Duration: PT1M
Progress: 42
```
In this case `00:25` was filled into your text view. `PT1M` comes from `Duration.toString` and means 1 minute (it‘s ISO 8601 format, it goes like this, you will learn to read it easily if you need to).
The `Duration.dividedBy(Duration)` method was introduced in Java 9. For earlier Java versions and Android instead use:
```
System.out.println(
"Progress: " + timeLeft.multipliedBy(100).toMillis() / activityDuration.toMillis());
```
The result is the same.
Question: Can I use java.time on Android?
-----------------------------------------
Yes, `java.time` works nicely on older and newer Android (only not the modest Java 9 additions). It just requires at least Java **6**.
* In Java 8 and later and on new Android devices (API level 26 and above, I am told) the modern API comes built-in.
* In Java 6 and 7 get the ThreeTen Backport, the backport of the new classes (ThreeTen for JSR 310; see the links at the bottom).
* On (older) Android use the Android edition of ThreeTen Backport. It’s called ThreeTenABP. And make sure you import the date and time classes from `org.threeten.bp` with subpackages, for example `import org.threeten.bp.Duration;`.
Links
-----
* [Oracle tutorial: Date Time](https://docs.oracle.com/javase/tutorial/datetime/) explaining how to use `java.time`.
* [Documentation of the `Duration` class](https://docs.oracle.com/javase/9/docs/api/java/time/Duration.html)
* [Java Specification Request (JSR) 310](https://jcp.org/en/jsr/detail?id=310), where `java.time` was first described.
* [ThreeTen Backport project](http://www.threeten.org/threetenbp/), the backport of `java.time`to Java 6 and 7 (ThreeTen for JSR-310).
* [ThreeTenABP](https://github.com/JakeWharton/ThreeTenABP), Android edition of ThreeTen Backport
* [Question: How to use ThreeTenABP in Android Project](https://stackoverflow.com/questions/38922754/how-to-use-threetenabp-in-android-project), with a very thorough explanation.
Upvotes: 1
|
2018/03/15
| 930 | 2,911 |
<issue_start>username_0: I need to check whether all of the files I find under a parent directory contain a particular pattern.
Example:
```
./a/b/status: *foo*foo
./b/c/status: bar*bar
./c/d/status: foo
```
The command should return false, as file 2 does not contain foo.
I am trying the command below, but I don't have a clue how to achieve this in a single command.
```
find . -name "status" | xargs grep -c "foo"
```<issue_comment>username_1: The `-c` option counts the number of times the pattern is found. You don't need `find`; rather, use grep's `-r` and `--include` options.
```
$ grep -r -c foo --include=status
```
`-r` does a recursive search for the pattern `foo` in files whose names match `status`.
Example. I have four files in three directories. Each have a single line;
```
$ cat a/1.txt b/1.txt b/2.txt c/1.txt
foobar
bar
foo
bazfoobar
```
With the above `grep`, you would get something like this,
```
$ grep -ir -c foo --include=1.txt
a/1.txt:1
b/1.txt:0
c/1.txt:1
```
Upvotes: 1 <issue_comment>username_2: You can count the number of files that do not contain "foo"; if the number is > 0, it means that there is at least one file that does not contain "foo":
```
find . -type f -name "status" | xargs grep -c "foo" | grep ':0$' | wc -l
```
or
```
find . -type f -name "status" | xargs grep -c "foo" | grep -c ':0$'
```
or
optimized using username_1's answer (thanks):
```
grep -ir -c "foo" --include=status | grep -c ':0$'
```
If all files in the tree are named "status", you can use the simpler command line:
```
grep -ir -c "foo" | grep -c ':0$'
```
with check
```
r=`grep -ir -c foo | grep -c ':0$'`
if [ "$r" != "0" ]; then
echo "false"
fi
```
Upvotes: 1 <issue_comment>username_3: If you want `find` to output a list of files that can be read by `xargs`, then you need to use
```
find . -name "status" -print0 | xargs -0 grep foo
```
to avoid filenames with special characters (like spaces, newlines and tabs) breaking the pipeline.
But I'd aim for something like this:
```
find . -name "status" -exec grep "foo" {} \+
```
The `\+` to terminate the `-exec` causes `find` to append all the files it finds onto a single instance of the `grep` command. This is much more efficient than running `grep` once for each file found, which it would do if you used `\;`.
And the default behaviour of `grep` will be to show the filename and match, as you've shown in your question. You can alter this behaviour with options like:
* `-h` ... don't show the filename
* `-l` ... show only the files that match, without the matching text,
* `-L` ... show only the files that DO NOT match - i.e. the ones without the pattern.
This last one sounds like what you're actually looking for.
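For example (a sketch that reuses the `--include` idea from the first answer):
```
# Recursively list the "status" files that do NOT contain "foo"
grep -rL "foo" --include=status .
```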
Upvotes: 0 <issue_comment>username_4: ```
find . -name 'status' -exec grep -L 'foo' {} + | awk 'NF{exit 1}'
```
The exit status of the above will be 0 if all files contain 'foo' and 1 otherwise.
Upvotes: 0
|
2018/03/15
| 940 | 3,053 |
<issue_start>username_0: It is probably a duplicate in some way, because many people get the same error. I read through similar questions, but I still do not understand why this script throws that error.
```js
$("#clientList").change(function (event) {
$.ajax({
url: "/LoadClients",
data: { id = $("#clientList").val() },
type: "GET",
dataType: "html",
success: function (data, textStatus, XMLHttpRequest) { }
});
});
```
I am still pretty new to the whole JavaScript thing, so thanks in advance for pointing me towards the light.
|
2018/03/15
| 815 | 2,935 |
<issue_start>username_0: I wish to use `System.Runtime.Caching.MemoryCache` but I'm wondering how to use it with generics.
In the following example, I would be in trouble if `T` is a value type.
```
public T GetItem<T>(string key, Func<T> loadItemFromDb)
{
var cachedItem = (T) memoryCache.Get(key);
if(cachedItem != null)
return cachedItem;
// call db
//put result in cache
// return result
}
```
`MemoryCache.Get(string key)` returns `null` if the cache entry identified by `key` doesn't exist, and it would raise a `NullReferenceException` as it tries to do `(T)null` (with `T` a value type).
How could I get similar behaviour for every `T`?
EDIT : I removed `where T : class` as this constraint prevents the case I'm describing.
EDIT 2: I added some code to show the intent.<issue_comment>username_1: The problem is that the cast can fail if the value is null. So **don't cast if the value is null**.
```
public T GetItem<T>(string key, Func<T> loadItemFromDb)
{
object cachedItem = memoryCache.Get(key);
if (cachedItem is T)
return (T)cachedItem;
T item = loadItemFromDb();
memoryCache.Add(key, item, somePolicy);
return item;
}
```
There's no problem with value types here; if T is a value type, and `cachedItem` is not a boxed `T`, then we never cast `cachedItem` to `T`.
FYI in C# 7 you can tighten that up a bit to:
```
if (cachedItem is T t)
return t;
```
and now there's no cast at all!
Upvotes: 4 [selected_answer]<issue_comment>username_2: Based on @ericlippart's answer, this looked like a good method to implement as an extension method and there are a couple of issues not addressed in his answer:
1. You need a cache policy to insert the result of loadItemFromDb function in the question which isn't passed into the function.
2. The Add method in Eric's answer doesn't exist (with that signature).
As `MemoryCache` is just an implementation of the abstract class `ObjectCache` and it does not use any `MemoryCache` specific functionallity, I based this on `ObjectCache`.
A full implementation of the Generic version of the `AddOrGetExisting` method on `ObjectCache`:
```
public static T AddOrGetExisting<T>(ObjectCache cache, string key, Func<(T item, CacheItemPolicy policy)> addFunc)
{
object cachedItem = cache.Get(key);
if (cachedItem is T t)
return t;
(T item, CacheItemPolicy policy) = addFunc();
cache.Add(key, item, policy);
return item;
}
```
Note that this uses the C# 7 enhancement mentioned and also `System.ValueTuple` which is available from NuGet if using net461 or higher or is built into netstandard2.0
A full extension class, along with generic versions of Get and TryGetValue, as well as a number of other overloads of AddOrGetExisting is available in my GitHub repository [CZEMacLeod/ObjectCacheExtensions](https://github.com/CZEMacLeod/ObjectCacheExtensions/blob/master/src/ObjectCacheExtensions/ObjectCacheExtensions.cs "ObjectCacheExtensions.cs")
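A hypothetical call site, assuming the method lives in a static class such as the linked `ObjectCacheExtensions` (the key, value and one-hour policy below are made up):
```
ObjectCache cache = MemoryCache.Default;
string greeting = ObjectCacheExtensions.AddOrGetExisting<string>(cache, "greeting", () =>
    ("hello", new CacheItemPolicy { AbsoluteExpiration = DateTimeOffset.UtcNow.AddHours(1) }));
```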
Upvotes: 0
|
2018/03/15
| 1,011 | 3,890 |
<issue_start>username_0: I am looking for a way to obtain my property names in an SSIS data flow task Script Component. I have been searching high and low, only finding [this](https://stackoverflow.com/questions/22746994/getting-column-name-from-pipelinebuffer-in-script-component-in-ssis-2012). I have been trying to get that code to work, but I am too much of a novice to understand what is happening there, and I don't feel it is explained very well (no offense).
The source before this component is using a SQL query joining two tables. Inside the component, I would like to compare column to column. Then call an update method I created to use SqlConnection to perform the update.
```
public override void Input0_ProcessInputRow(Input0Buffer Row)
{
if (Row.TableALastName != Row.TableBLastName)
// Call the update method if the last name did not match.
this.UpdateRecord("TableBLastName", Row.TableALastName.ToString(), Row.TableAAssociateId.ToString());
}
}
private void UpdateRecord(string columnName, string change, string associateId)
{
SqlConnection sqlConnection;
sqlConnection = new SqlConnection(this.Variables.Connection);
string updateQuery = "UPDATE [SomeDataBase].[dbo].[TableB] SET " + columnName + " = " + change + " WHERE [Associate_ID] = " + associateId;
using (SqlCommand cmd2 = new SqlCommand(updateQuery, sqlConnection))
{
sqlConnection.Open();
cmd2.ExecuteNonQuery();
sqlConnection.Close();
}
}
```
I would like to somehow get the PropertyName of Row.TableBLastName instead of having to hard code "TableBLastName" for each test I am doing, which will be a lot.
The problem is that the input buffer class does not have Property.GetName(). This also means I can't add a method to the class to get the property names, as it is regenerated on each run.<issue_comment>username_1: You are already in SSIS, so I will propose using that (no matter how quickly I usually jump to C# to solve problems).
This is a classic conditional split scenario:
Do your test then run the results into a SQL Update statement.
Upvotes: 0 <issue_comment>username_2: ```
public override void Input0_ProcessInputRow(Input0Buffer Row)
{
Dictionary<string, List<string>> list = new Dictionary<string, List<string>>();
List<string> propertyList = new List<string>();
Type myType = typeof(Input0Buffer);
PropertyInfo[] allPropInfo = myType.GetProperties();
List<PropertyInfo> SqlPropInfo = allPropInfo.Where(x => !x.Name.Contains("AM_")).ToList();
// Loop through all the Sql Property Info, i.e. those without AM_
for (int i = 0; i < SqlPropInfo.Count(); i++)
{
List<string> group = new List<string>();
foreach (var propInfo in allPropInfo)
{
if (propInfo.Name.Contains(SqlPropInfo[i].Name))
{
// Group the values based on the property
// ex. All last names are grouped.
group.Add(propInfo.GetValue(Row, null).ToString());
}
}
// The Key is the Sql's Property Name.
list.Add(SqlPropInfo[i].Name, group);
}
foreach (var item in list)
{
// Do a check if there are two values in both SQL and Oracle.
if (item.Value.Count >= 2)
{
if (item.Value.Count() != item.Value.Distinct().Count())
{
// Duplicates exist do nothing.
}
else
{
// The values are different so update the value[0]. which is the SQL Value.
UpdateRecord(item.Key, item.Value[0], Row.AssociateId);
}
}
}
}
```
I separated the values from the two tables so there are two lists of values, one from TableA and one from TableB. You can prefix the values from TableA with "AM_" or something distinct so you can use reflection to get the properties with and without the prefix and find out which values belong to which table. Then I just loop through the properties and group the values with the properties from the target value (those without the prefix "AM_"). I then loop through the grouped list, compare the two values and, if they differ, update TableA with the TableB values to match them.
Upvotes: 2 [selected_answer]
|
2018/03/15
| 1,775 | 6,228 |
<issue_start>username_0: I hope some kind person can help me out here.
I want to sort nested comments in WordPress by likes. I have only found one plugin that does this, and it doesn't meet my needs, so I'm attempting to write my own. Most of it is actually pretty straightforward, but the SQL is eluding me (not really my strength).
I need an SQL Query to sort comments by likes, with replies immediately following their parent, and replies to each parent also sorted by likes. Top level comments and replies are differentiated by 'layer'. There is only one level of replies. My table looks like this:
ID (Int)
Comment\_Name (VarChar)
Layer (Int)... 1 for top level comment, 2 for reply
ID\_of\_Parent\_Comment (Int)... replies must be grouped under the top-level comment with this id
Likes (Int)
For example, if top level comments are represented by numbers and replies by letters, it would look something like this:
1, 2, 3, 3a, 3b, 4, 5, 5a... etc
Anyone have any ideas?<issue_comment>username_1: Looks like this should be close:
```
select
post.ID,
post.likes as postLikes,
reply.ID,
reply.likes as replyLikes
from MyTable post
left join MyTable reply
on post.ID = reply.ID_of_Parent_Comment
where post.ID_of_Parent_Comment is null
order by post.likes desc, reply.likes desc
;
```
It will give you the parent ID's sorted by parent likes and the related child ID's for each parent (if any) sorted by most liked child
Upvotes: 0 <issue_comment>username_2: It turns out that the other answer did not quite work out after all. It sure looked right. Replies were grouped nicely beneath the appropriate parent comment, and everything was sorted by likes. But if you look closely, the SQL Fiddle test returned 14 records where there are only 12 available.
After spending way too much time fiddling with it on my site, I couldn't resolve it any further. One group or the other (top-level comments or replies) was always either left off or duplicated.
I finally gave up, assuming that it could not be done with SQL, so I went back to something I was familiar with: php. Here is my solution. Hopefully someone will find it useful. If nothing else, it was a fun project.
myComments.php
```
<?php
global $wpdb;
$post_ID = get_the_ID();
// Get Comment Table
$sql =
" SELECT *"
." FROM wp_comments"
." WHERE comment_post_ID = " . $post_ID // only retrieve comments for this post
." AND comment_parent = '0'" // only retrieve top level comments
." ORDER BY likes DESC"
.";";
$tlc = $wpdb->get_results($sql, ARRAY_A); // Retrieve all records into $tlc
// this should never be
// large enough to be a problem.
$commentCount = count( $tlc ); // Number of TopLevelComments
// Adjust Comments
for ( $i = 0; $i <= $commentCount-1; $i++ ) {
$tlc[$i]['layer'] = 0; // Layer 0 indicates top level comment
$tlc[$i]['index'] = $i; // index is used to group parents
// with children
}
// Get Reply Table
$sql =
" SELECT *"
." FROM wp_comments"
." WHERE comment_post_ID = " . $post_ID
." AND comment_parent > '0'" // only retrieve replies
." ORDER BY likes DESC"
.";";
$replies = $wpdb->get_results($sql, ARRAY_A);
$replyCount = count( $replies );
// Adjust Replies
for ( $i = 0; $i <= $commentCount-1; $i++ ) {
$replies[$i]['layer'] = 1; // Layer 1 indicates replies
}
// Set child index to that of parent
// then add child record to parent array
for ( $i = 0; $i <= $replyCount-1; $i++ ) {
$x = $replies[$i]['comment_parent']; // Get ID of parent
for ( $j = 0; $j <= $commentCount-1; $j++ ) {
if ( $tlc[$j]['comment_ID'] == $x ) { // If parent found
$value = $tlc[$j]['index']; // Get parent's index
$replies[$i]['index'] = $value; // Give child parent's index
array_push ( $tlc, $replies[$i]);
}
}
}
// Sort comments
// Note that $tlc was sorted by select
// and index was assigned while in that order
$tlc = array_orderby($tlc, 'index', SORT_ASC,
'layer', SORT_ASC,
'likes', SORT_DESC);
// Display comments
$commentCount = count($tlc);
if ( $commentCount ) {
echo "";
// Used to determine if we have opened a second for nested comments
// and ensure we close it before we are done.
$inReplyList = false;
// We don't want to close the before we've opened it.
$firstComment = true;
for ( $i = 0; $i <= $commentCount-1; $i++ ) {
$myComment = $tlc[$i];
// Set $depth (needed by reply-link on myCommentTemplate page)
$depth = 0;
$comment_ID = $myComment['comment_ID'];
while( $comment_ID > 0 ) {
$tempComment = get_comment( $comment_ID );
$comment_ID = $tempComment->comment_parent;
$depth++;
}
// Treat each group of nested comments as a separate ordered group
if ( $depth == 2 ) {
if ( ! $inReplyList ) {
echo "";
$inReplyList = true;
}
} else {
if ( ! $firstComment ) {
if ( $inReplyList ) {
echo "
";
$inReplyList = false;
}
}
}
$firstComment = false;
// Display each comment
include ('myCommentTemplate.php');
}
if ( $inReplyList ) {
echo "
";
}
echo "
";
} else {
echo 'No comments found.';
}
// Where comments are made
include('myCommentForm.php');
$wpdb->flush();
?>
```
function array\_orderby() (located in functions.php)
```
/* SORT PHP ARRAYS OF RECORDS */
// PHP function 'array_multisort' requires columns //
// This function handles the conversion from row to col and back again //
// Example:
// $sorted = array_orderby($data, 'volume', SORT_DESC, 'edition', SORT_ASC);
function array_orderby()
{
$args = func_get_args();
$data = array_shift($args);
foreach ($args as $n => $field) {
if (is_string($field)) {
$tmp = array();
foreach ($data as $key => $row)
$tmp[$key] = $row[$field];
$args[$n] = $tmp;
}
}
$args[] = &$data;
call_user_func_array('array_multisort', $args);
return array_pop($args);
}
```
Upvotes: 2 [selected_answer]
|
2018/03/15
| 758 | 2,705 |
<issue_start>username_0: So I have a function like:
```
def my_code(arg1, *args):
....
```
And I want this function to only be able to take either 2 or 3 arguments (that means \*args can only be 1 or 2 arguments). How do I throw an error message if the number of arguments is wrong? If I use a try/exception, is there a specific exception type for this?<issue_comment>username_1: You can get the length of `args` with `len` as you would for any tuple.
```
def my_code(arg1, *args):
if not 0 < len(args) < 3:
raise TypeError('my_code() takes either 2 or 3 arguments ({} given)'
.format(len(args) + 1))
my_code(1) # TypeError: my_code() takes either 2 or 3 arguments (1 given)
my_code(1, 2) # pass
my_code(1, 2, 3) # pass
my_code(1, 2, 3, 4) # TypeError: my_code() takes either 2 or 3 arguments (4 given)
```
Upvotes: 3 [selected_answer]<issue_comment>username_2: Your test is:
```
if len(args) not in (1,2):
```
though there are of course other ways to phrase that.
As for the exception, if you call a built-in function with the wrong number of arguments, you get a `TypeError`. If your application doesn't justify creating your own subclass of `Exception`, then that is probably the way to go.
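For example (an illustrative session), calling a built-in with the wrong number of arguments raises exactly that:
```
>>> len([1, 2], 3)
Traceback (most recent call last):
  ...
TypeError: len() takes exactly one argument (2 given)
```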
Upvotes: 0 <issue_comment>username_3: ```py
def my_code(*args):
if len(args) > 2:
raise TypeError
else:
# code for your function
pass
```
Basically `*args` is a tuple, and if you want a maximum number of arguments you can raise a `TypeError`.
Upvotes: 0 <issue_comment>username_4: I'm facing a similar problem. I think the `ValueError` is better suited for this.
<https://docs.python.org/3/library/exceptions.html#ValueError>
>
> exception `ValueError`: Raised when an operation or function receives an
> argument that has the right type but an inappropriate value, and the
> situation is not described by a more precise exception such as
> IndexError.
>
>
>
I have a function that can receive two arguments, but should receive only either one of the two, but not both. If both are set, or none of them are set, it's a problem. I use the `ValueError` exception for this.
Example code:
```
def parse_article(self, url: str = None, file: str = None) -> Article:
if url == None and file == None:
raise ValueError("Function was called without any arguments. Please set either url or file, but not both.")
else:
if url != None and file != None: raise ValueError(
"Both url and file were given. Please give either url or file, but not both.")
# Rest of the function.
# Parse the article at the url or in the file, then return it.
```
Upvotes: 0
|
2018/03/15
| 768 | 2,613 |
<issue_start>username_0: This may sound like an odd question, and I have a feeling that the short answer is 'No'.
However, is there any way a variable could take more than one value, based on Boolean operators? For example:
```
//Current implementation
string Variable1 = "A";
string Variable2 = "B";
string Variable3 = "C";
//Sought after implementation
string Variable = "":
Variable = "A" || Variable = "B" || Variable = "C";
```
This doesn't look like it could be feasible, especially since Boolean operators can't be applied to string types, because, well... They're not Boolean.<issue_comment>username_1: Define an enum with the [Flags] attribute.
Reference: [What does the [Flags] Enum Attribute mean in C#?](https://stackoverflow.com/questions/8447/what-does-the-flags-enum-attribute-mean-in-c)
```
[Flags]
public enum PossibleValues { A = 1, B = 2, C = 4 }
var foo = PossibleValues.A | PossibleValues.B | PossibleValues.C;
```
Upvotes: 2 <issue_comment>username_2: >
> However, is there any way a variable could take more than one value, based on Boolean operators?
>
>
>
Sure! Let's implement it. We'll use the `ImmutableHashSet` type from `System.Collections.Immutable`.
```
struct MySet<T>
{
 public readonly static MySet<T> Empty = default(MySet<T>);
 private ImmutableHashSet<T> items;
 private MySet(ImmutableHashSet<T> items) => this.items = items;
 public ImmutableHashSet<T> Items => this.items ?? ImmutableHashSet<T>.Empty;
 public MySet<T> Add(T item) => new MySet<T>(this.Items.Add(item));
 public static MySet<T> operator |(T item, MySet<T> items) => items.Add(item);
 public static MySet<T> operator |(MySet<T> items, T item) => items.Add(item);
 public static MySet<T> operator |(MySet<T> x, MySet<T> y) => new MySet<T>(x.Items.Union(y.Items));
 public static MySet<T> operator &(MySet<T> items, T item) => new MySet<T>(items.Items.Contains(item) ? ImmutableHashSet<T>.Empty.Add(item) : ImmutableHashSet<T>.Empty);
 public static MySet<T> operator &(T item, MySet<T> items) => new MySet<T>(items.Items.Contains(item) ? ImmutableHashSet<T>.Empty.Add(item) : ImmutableHashSet<T>.Empty);
 public static MySet<T> operator &(MySet<T> x, MySet<T> y) => new MySet<T>(x.Items.Intersect(y.Items));
}
```
Now we can create a variable that contains multiple values of any type, and obeys the laws of `|` and `&`:
```
var items1 = MySet<string>.Empty | "Hello" | "Goodbye" | "Whatever";
var items2 = MySet<string>.Empty | "Goodbye" | "Hello" | "Blah";
var items3 = items1 & items2;
var items4 = items1 | items2;
Console.WriteLine(String.Join(" ", items3.Items)); // "Hello Goodbye"
Console.WriteLine(String.Join(" ", items4.Items)); // "Hello Goodbye Whatever Blah"
```
Upvotes: 3
|
2018/03/15
| 834 | 2,903 |
<issue_start>username_0: My Perl script accepts and processes input from a text field in a form on a web page. It was written for the English version of the web page and works just fine.
There is also a Chinese version of the page (a separate page, not both languages on the same page), and now I need my script to work with that. The user input on this page is expected to be in Chinese.
Expecting to need to work in UTF-8, I added
```
use utf8;
```
This continues to function just fine on the English page.
But in order to, for example, define a string variable for comparison that uses Chinese characters, I have to save the Perl script itself with utf-8 encoding. As soon as I do that, I get the dreaded 500 server error.
Clearly I'm going about this wrong, and any helpful direction will be greatly appreciated.
Thanks.
EDIT - please see my clarification post below.
|
2018/03/15
| 303 | 1,159 |
<issue_start>username_0: I am trying to use the Microsoft Graph API in my web app. Everything seems to work well on my PC, but when my teammate clones the project onto her computer, it shows "An error occurred while processing your request." and tells us to switch to the Development environment to see more information.
[error message fig](https://i.stack.imgur.com/OaXKT.png)
We did what it said and added the environment variable, but we still cannot figure out how to see the error or what the error is.<issue_comment>username_1: In your `startup.cs` you should see a function named `Configure`:
```
public void Configure(IApplicationBuilder app, IHostingEnvironment env)
```
In this function, you need to wire up the Developer Exception Page:
```
if (env.IsDevelopment())
{
app.UseDeveloperExceptionPage();
}
else
{
app.UseExceptionHandler("/Home/Error");
}
```
Upvotes: 2 [selected_answer]<issue_comment>username_2: In .NET 6, in Program.cs, after initializing the builder, you can use this code:
```
var app = builder.Build();
// Configure the HTTP request pipeline.
if (app.Environment.IsDevelopment())
{
//Develop Mode
}
else
{
//Live Mode
}
```
Upvotes: 0
|
2018/03/15
| 815 | 3,145 |
<issue_start>username_0: I am working on a CloudFormation template for an IAM role that grants cross-account read-only access. It uses a managed policy for read-only access as well. So far, I've resolved several errors, but now I'm getting a "'null' values are not allowed in templates" error when I try to validate the template. I think it's a spacing or syntax thing, but I cannot be sure, as it's my first time creating a CloudFormation template from scratch and using YAML.
```
AWSTemplateFormatVersion: '2010-09-09'
Description: AWS CloudFormation template IAM Role for New Relic to have read access to AWS account
Resources:
NewRelicInfrastructure-IntegrationsRole:
Type: AWS::IAM::Role
Properties:
AssumeRolePolicyDocument:
Version: '2012-10-17'
Statement:
Effect: Allow
Principal:
AWS: 11111111
Action: sts:AssumeRole
Condition:
StringEquals:
sts:ExternalId: '11111'
Path: '/'
ManagedPolicyArns: arn:aws:iam::aws:policy/ReadOnlyAccess
RoleName: NewRelicInfrastructure-Integrations2
```<issue_comment>username_1: The problem is with `AssumeRolePolicyDocument:`. It's required but you left it empty. You also have an indentation issue where `Path`, `ManagedPolicyArns` and `RoleName` are under `Resources` instead of `Properties`.
Try:
```
AWSTemplateFormatVersion: '2010-09-09'
Description: AWS CloudFormation template IAM Role for New Relic to have read access to AWS account
Resources:
NewRelicInfrastructure-IntegrationsRole:
Type: AWS::IAM::Role
Properties:
AssumeRolePolicyDocument:
Version: '2012-10-17'
Statement:
Effect: Allow
Principal:
AWS: 11111111
Action: sts:AssumeRole
Condition:
StringEquals:
sts:ExternalId: '11111'
Path: '/'
ManagedPolicyArns: arn:aws:iam::aws:policy/ReadOnlyAccess
RoleName: NewRelicInfrastructure-Integrations2
```
Upvotes: 4 <issue_comment>username_2: Indentation was the issue. The template was specifying something in AssumeRolePolicyDocument, but the YAML syntax wasn't correct. This worked:
```
AWSTemplateFormatVersion: '2010-09-09'
Description: AWS CloudFormation template IAM Role for New Relic to have read access to AWS account
Resources:
NewRelicInfrastructureIntegrationsRole:
Type: AWS::IAM::Role
Properties:
Path: '/managed/'
ManagedPolicyArns:
- 'arn:aws:iam::aws:policy/ReadOnlyAccess'
AssumeRolePolicyDocument:
Version: '2012-10-17'
Statement:
-
Action: sts:AssumeRole
Effect: Allow
Principal:
AWS: 1111111111111
Condition:
StringEquals:
sts:ExternalId: '11111'
RoleName: NewRelicInfrastructureIntegrationsRole
```
Upvotes: 3 [selected_answer]<issue_comment>username_3: Use an online YAML interpreter to show you where you might be getting a null value in your YAML file. Nulls are hard to spot, since a wrong indentation can produce one; the YAML interpreter will show you, in JSON, where you're getting that value.
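An offline alternative (a sketch using the third-party PyYAML package; the file name is hypothetical) is to load the template and dump it as JSON, which makes any unintended nulls stand out:

```
import json
import yaml  # PyYAML

with open("template.yaml") as f:
    doc = yaml.safe_load(f)

# Keys that were accidentally left empty or mis-indented show up as "null" here.
print(json.dumps(doc, indent=2, default=str))
```

Note that templates using short-form intrinsic functions such as `!Ref` or `!Sub` need a custom loader, since `safe_load` does not know those tags.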
Upvotes: 2
|
2018/03/15
| 451 | 1,907 |
<issue_start>username_0: I'm trying to understand if/how I can use Cognito as an OIDC IdP but with my own "skin" on it. Using the Cognito CSS customization is not nearly sufficient.
To be more precise, I am trying to build an SSO front-end that leverages Cognito's UserPools, Access/Refresh tokens, Device trusting/etc. I have some specific requirements for i18n and additional authentication pages for MFA (would use the Cognito Custom Auth flows), but I cannot seem to understand how to tie these pieces together.
From what I can tell, if I use the Cognito SDK to build my SSO front end, then I am basically logging myself into my own SSO app, but then cannot use the generated credentials in an OIDC flow. And if I try to point my external applications to Cognito, then I am forced to use the hosted UI, which does not meet my requirements at all.
How do I go about creating an SSO app that leverages Cognito? Do I need to use Federated Identities somehow for this? Or AWS STS (not even sure how that would come into play)? I looked at the idea of wrapping some of the Cognito calls via the API Gateway, but that leads me down a rabbit hole where I am basically rewriting a server to implement the OAuth2/OIDC spec, which makes absolutely no sense.
Is there any way to use Cognito to provide a single SSO experience across all my apps? I do not need to support cross-domain apps, but I do need the solution to support SPAs, classic session-based server web apps, and native mobile apps.<issue_comment>username_1: Have you tried the AWS Amplify library for React? <https://aws.github.io/aws-amplify/>
Upvotes: 0 <issue_comment>username_2: From what I have been able to gather, there is no way at present to create custom Login/MFA/Reset Password pages when using the OIDC flow. The only way is to use the pages supplied by Cognito, which means no significant customization is possible.
Upvotes: 2 [selected_answer]
|
2018/03/15
| 847 | 2,614 |
<issue_start>username_0: I am able to touch a file date created no problem.
But i want set access date or modified date
<http://php.net/manual/en/function.touch.php>
```
$access = $date; //now
```
>
> **atime**
>
> If present, the access time of the given filename is set to the value of
> atime. Otherwise, it is set to the value passed to the time parameter. If
> neither are present, the current system time is used.
>
>
>
OK, where do you put that? `atime`?
```
touch($filename, $access, atime)
```
I've tried for hours.
I read
```
bool touch ( string $filename [, int $time = time() [, int $atime ]] )
```
but it seems overcomplicated.
I read the examples on php.net but could not get `atime` working.
You can call touch with just the filename:
```
touch($filename);
```
This will change the access and modification time to the current system time.
If you want to set it to a specific time, you would call:
```
touch($filename, $time);
```
If you want to set the modified time to a different value than access time, you would call:
```
touch($filename, $modifiedTime, $accessTime);
```
I hope this helps!
Upvotes: 1 <issue_comment>username_2: `atime` is used to change the "last access time of file":
With no parameter:
```
$filename = "test.test";
touch($filename);
echo filemtime($filename), PHP_EOL; // 1521149547 // now
echo fileatime($filename), PHP_EOL; // 1521149547 // now
```
With `time`:
```
$filename = "test.test";
touch($filename, time() - 3600);
echo filemtime($filename), PHP_EOL; // 1521145965 // 1 hour ago
echo fileatime($filename), PHP_EOL; // 1521145965 // same as mtime
```
With `atime`:
```
$filename = "test.test";
touch($filename, time() - 3600, time() - 7200);
echo filemtime($filename), PHP_EOL; // 1521145974 // 1 hour ago
echo fileatime($filename), PHP_EOL; // 1521142374 // 2 hours ago
```
If you don't want to change the "modification date", you can use (this will not change the creation time):
```
touch($filename, filemtime($filename), $accesstime);
```
Upvotes: 2 <issue_comment>username_3: Now I see: the `atime` keyword in the list of *Parameters* is the definition of the `$atime` variable in the *Description*.
So in your example of `touch($filename, $access, atime)` you are missing the dollar sign.
Correct usage is:
```
touch($filename, $access, $atime);
```
Update:
To edit only the access time, set the modification time to the same value the file already has:
```
touch($filename, filemtime($filename), $atime);
```
Upvotes: 0
|
2018/03/15
| 695 | 2,212 |
<issue_start>username_0: I would like to know the most effective way to compare long strings against a SQL Server database, for example:
long string -> compare to -> 5,000 records in the database
I happened to convert the records and the string to compare using crc32, but I'm not sure whether that is the most efficient method.
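One common pattern along the lines the question describes (a sketch only; the table, column, and parameter names here are hypothetical) is to persist a hash of the long column, index it, and use it as a cheap pre-filter before the exact comparison:

```
-- Persisted, indexable checksum of the long column (SQL Server syntax).
ALTER TABLE my_table ADD long_text_hash AS CHECKSUM(long_text) PERSISTED;
CREATE INDEX ix_my_table_long_text_hash ON my_table (long_text_hash);

-- Compare hashes first (uses the index), then confirm with the exact string
-- comparison to rule out hash collisions.
SELECT *
FROM my_table
WHERE long_text_hash = CHECKSUM(@candidate)
  AND long_text = @candidate;
```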
|
2018/03/15
| 1,006 | 3,046 |
<issue_start>username_0: I have two tables: Classes and Courses.
Classes and Courses both have DEPT\_CODE and COURSE#.
When these are the same, and in the Classes table SEMESTER = Spring and YEAR = 2018, I concatenate DEPT\_CODE + COURSE# as COURSEID. I then pair it with TITLE from the Courses table.
Courses: DEPT\_CODE | COURSE# | TITLE
Classes: CLASSID | DEPT\_CODE | COURSE# | YEAR | SEMESTER
Select results: COURSEID (DEPT\_CODE + COURSE#) | TITLE
Here was my attempt, though I'm running into some errors.
```
SELECT dept_code + 'course#' AS courseid, title
FROM classes c1, courses c2
WHERE 'c1.course#' = 'c2.course#' AND year = 2018 AND semester LIKE 'Spring';
```
I was having some trouble using semester = Spring. With the code above, I get "No Results" even though there should be valid results in the tables.
An example I hope helps clear things up.
**Classes**

| CLASSID | DEPT\_CODE | COURSE# | YEAR | SEMESTER |
| --- | --- | --- | --- | --- |
| c001 | CS | 400 | 2018 | Spring |
| c002 | Math | 400 | 2018 | Spring |

**Courses**

| DEPT\_CODE | COURSE# | Title |
| --- | --- | --- |
| CS | 400 | Databases |
| Math | 400 | Linear Algebra |

**Expected Output**

| COURSEID | Title |
| --- | --- |
| CS400 | Databases |
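For reference, a corrected form of the attempt above (a sketch; it assumes COURSE# is numeric and that the table and column names match the samples shown) joins on both columns and casts before concatenating:

```
SELECT c2.dept_code + CAST(c2.[course#] AS VARCHAR(10)) AS courseid,
       c2.title
FROM   classes c1
       JOIN courses c2
         ON c1.dept_code = c2.dept_code
        AND c1.[course#] = c2.[course#]
WHERE  c1.[year] = 2018
  AND  c1.semester = 'Spring';
```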
|
2018/03/15
| 357 | 1,244 |
<issue_start>username_0: Simple question: I have a Dask dataframe containing about 300 million records. I need to know the exact number of rows the dataframe contains. Is there an easy way to do this?
When I try to run `dataframe.x.count().compute()`, it looks like it tries to load all the data into RAM, for which there is no space, and it crashes.
# ensure small enough block size for the graph to fit in your memory
ddf = dask.dataframe.read_csv('*.csv', blocksize="10MB")
ddf.shape[0].compute()
```
From the [documentation](https://docs.dask.org/en/latest/generated/dask.dataframe.read_csv.html#dask.dataframe.read_csv):
>
> blocksize
> Optional Number of bytes by which to cut up
> larger files. Default value is computed based on available physical
> memory and the number of cores, up to a maximum of 64MB. Can be a
> number like 64000000` or a string like ``"64MB". If None, a single
> block is used for each file.
>
>
>
Upvotes: 4 <issue_comment>username_2: If you only need the number of rows,
you can load a subset of the columns, choosing columns with lower memory usage (such as category/integer columns rather than string/object ones); thereafter you can run `len(df.index)`.
|