date | nb_tokens | text_size | content
---|---|---|---
2018/03/16
| 599 | 1,445 |
<issue_start>username_0: I have an array of arrays `[[1,2,3], [4,5,6], [7,8,9], [10,11,12]]`
I want to merge elements like this `[[1,2,3,4,5,6], [7,8,9,10,11,12]]`, but I have no idea how...<issue_comment>username_1: You can try the following (it works only with an even number of elements):
```js
var arr = [[1,2,3], [4,5,6], [7,8,9], [10,11,12]];
var result = [];
for (var i = 0; i < arr.length; i += 2) {
    result.push(arr[i].concat(arr[i + 1]));
}
console.log(result);
```
Upvotes: 2 <issue_comment>username_2: You could reduce the array, using the index shifted right by one bit as the new index in the result array.
```js
var array = [[1, 2, 3], [4, 5, 6], [7, 8, 9], [10, 11, 12]],
combined = array.reduce((r, a, i) =>
Object.assign(r, { [i >> 1]: [...(r[i >> 1] || []), ...a] }), []);
console.log(combined);
```
```css
.as-console-wrapper { max-height: 100% !important; top: 0; }
```
Upvotes: 1 <issue_comment>username_3: A generic solution that merges any number of consecutive arrays:
```js
let range = n => [...Array(n).keys()];
let chunks = (a, n) => range(Math.ceil(a.length / n))
.map(i => a.slice(i * n, (i + 1) * n));
let mergeEvery = (a, n) => chunks(a, n).map(g => [].concat(...g));
a = [[1,2,3], [4], [5,6], [7,8,9], [10,11,12], [13,14], [15]]
console.log(JSON.stringify(mergeEvery(a, 2)))
console.log(JSON.stringify(mergeEvery(a, 3)))
console.log(JSON.stringify(mergeEvery(a, 4)))
```
Upvotes: 1
|
2018/03/16
| 785 | 2,677 |
<issue_start>username_0: I'm facing a weird issue with `pow()`. Please check the following code:
```
let r = 8.5/1200
let n = ((5*12.0)+1)
let pp = (pow(Float(1 + r), Float(n)))
debugPrint(pp) // 1.53811562
let pq = (pow((1 + r), n))
debugPrint(pq) // 1.5381189771003985
```
Here the `pow(Float, Float)` function returns at most 8 fractional digits for `Float`, while `pow(Double, Double)` returns a value with 16 fractional digits for `Double`.
Why is that? Could anyone explain it?
And is it possible to get up to 16 fractional digits for a `Float` value?
>
> Note: You can copy and paste it in playground to see result.
>
>
>
Thanks in advance.<issue_comment>username_1: Please look up the difference between [Double and Float](https://en.wikipedia.org/wiki/Double-precision_floating-point_format).
Double is called double because it has double precision compared to float.
To get the same result in your calculation just cast Double to Float or vice versa, depending on what you want to achieve.
Upvotes: 1 <issue_comment>username_2: In Swift:
Float − used to represent a **32-bit** floating-point number, for values with fewer decimal digits. For example, 3.12345678
Double − used to represent a **64-bit** floating-point number, for when floating-point values must be very precise. For example, 3.123456789123456
That is the reason for the difference.
According to Apple Docs
>
> Double has a precision of at least 15 decimal digits, whereas the precision of Float can be as little as 6 decimal digits. The appropriate floating-point type to use depends on the nature and range of values you need to work with in your code. In situations where either type would be appropriate, Double is preferred
>
>
>
And
No, **you can't have 16 fractional digits in a Float value.**
Upvotes: 1 <issue_comment>username_3: `Float` uses an [IEEE 754 four byte single precision number](https://en.wikipedia.org/wiki/IEEE_754#Basic_and_interchange_formats). 24 bits are assigned to the significand and 8 bits to the exponent. With 24 bits of precision, it can represent a little over 7 decimal digits of precision
`Double` - which is the default floating point format for Swift uses an IEEE double precision eight byte number. The significand uses 53 bits and the exponent 11 bits. With 53 bits of precision, it can represent just under 16 decimal digits of precision.
The reason why `pq` is printed with 16 digits is that by default the compiler has inferred the argument type to be `Double`.
>
> And is it possible to get up to 16 fractional digits for a Float value?
>
>
>
No. There aren't enough bits of precision.
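The precision gap can be illustrated outside Swift as well. Here is a rough sketch in Python (which only has 64-bit floats natively), round-tripping the question's `Double` result through an IEEE 754 single-precision encoding with `struct`; the variable names are illustrative:

```python
import struct

x = 1.5381189771003985  # the Double result from the question

# Round-trip through 4-byte IEEE 754 single precision (what Swift's Float uses):
x32 = struct.unpack('f', struct.pack('f', x))[0]

print(x)    # keeps ~16 significant decimal digits (64-bit double)
print(x32)  # only ~7 significant digits survived the 32-bit round trip

assert x32 != x             # the extra digits were lost
assert abs(x32 - x) < 1e-6  # but the value is still close
```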
Upvotes: 3 [selected_answer]
|
2018/03/16
| 238 | 760 |
<issue_start>username_0: ```
```
This gives:
>
> undefined
>
>
>
I'm using `MVC CustomHtmlHelper` to create the textbox (because we need to create a dynamic form based on the database). The control gets rendered perfectly, but "Emp.Name" comes back undefined,
since the object is declared in TS.
```
```
This works fine.
I want to know how we can dynamically define an object and its properties.
I'm using `angular 2` and `MVC5`.<issue_comment>username_1: Like this :
```
```
If you have declared the correct variables, such as
```
dt = {Rows: [{ name: 'John' }]};
columnname = 'name';
```
Then you will have
```
[(ngModel)]="EmpJohn"
```
Upvotes: 0 <issue_comment>username_2: **Emp.Name** can be written as **Emp['Name']**. Try:
```
```
Upvotes: 1
|
2018/03/16
| 291 | 1,100 |
<issue_start>username_0: When executing ./assembleRelease from my react-native project's android folder, I get the following error:
```
12:47:38.876 [ERROR] [org.gradle.BuildExceptionReporter] Failed to capture snapshot of input files for task 'bundleReleaseJsAndAssets' during up-to-date check.
12:47:38.876 [ERROR] [org.gradle.BuildExceptionReporter] > java.io.FileNotFoundException: /Users/michiel/Sites/Erbij/erbij/node_modules/react-native/undefined (Operation not supported on socket)
```
iOS releases are fine and local development works for both Android and iOS. Does anybody have an idea where to start searching? I tried running with `--verbose`, `--info` and `--debug`, with no luck.
I've also tried stripping my entire javascript code and having App.js be a simple component that renders an empty view.<issue_comment>username_1: Never mind - I ran react-native-git-upgrade, which must've removed some cache files, because there are no changes showing up in git
Upvotes: 0 <issue_comment>username_2: I solved this error by deleting the file "node_modules/react-native/undefined"
Upvotes: 3
|
2018/03/16
| 724 | 2,652 |
<issue_start>username_0: I am working on a web table that doesn't have an ID or a class name. This is how the table looks in the HTML view:
```
| |
| --- |
| |
```
and so on.
How do I locate this table in Selenium WebDriver?<issue_comment>username_1: Finding the element
===================
When you have neither a class name nor an id to uniquely identify an element, there is XPath. It describes where an element is within the DOM and is able to select it. An XPath string might look like this:
```
html/body/div[1]/section/div[1]/div/div/div/div[1]/div/div/div/div/div[3]/div[1]/div/h4[1]/b
```
For more info, see e.g. [XPath in Selenium WebDriver: Complete Tutorial](https://www.guru99.com/xpath-selenium.html)
How to get the XPath to your table?
===================================
In Firefox, you can install the [FirePath](https://addons.mozilla.org/en-US/firefox/addon/firepath/?src=search) addon. This adds a tab in the developer tools. Simply select the element and copy the XPath.
For Chrome there is [XPath Helper](https://chrome.google.com/webstore/detail/xpath-helper/hgimnogjllphhhkhlmebbmlgjoejdpjl?hl=en) which basically does the same.
Accessing the element in your code?
===================================
Now that you have the element, you need to access it in your code. The following **Java** code will get you the element (and click on it; [Source](https://stackoverflow.com/a/10567017/1392490)):
```java
d.findElement(By.xpath("")).click();
```
If you are using **Python**, the code looks a little different ([Source](http://selenium-python.readthedocs.io/locating-elements.html)):
```py
from selenium.webdriver.common.by import By
driver.find_element(By.XPATH, '//button[text()="Some text"]')
```
Upvotes: 1 <issue_comment>username_2: Adding to @username_1's detailed answer:
1. You can also use CssSelector
2. Even though you can have FirePath or Chrome generate the XPath (or CssSelector) for you, often it's not the best option, as this XPath is not necessarily the most maintainable one. Note that there are many XPath expressions that can refer to the same element. This means that it may become invalid due to application changes more often than others. The rule of thumb is to use the expression that relies on the least amount of details that are likely to change.
3. Assuming you're using Selenium for test automation (and not some other site-scraping goal), then the best option is to ask the developers to add an id attribute to the table element, or a unique class to be used specifically by your test automation. This way you can rely on a single detail which is very unlikely to change.
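To illustrate point 2 with a hypothetical page structure (sketched in Python's `xml.etree.ElementTree`, which supports a small XPath subset): an absolute, generated-style path breaks as soon as the markup gains one wrapper element, while a shorter relative expression keeps matching.

```python
import xml.etree.ElementTree as ET

# Hypothetical markup standing in for the question's table.
page_v1 = ET.fromstring(
    "<html><body><div><table><tr><td>data</td></tr></table></div></body></html>")
# The same page after a redesign wraps the table in one more <div>.
page_v2 = ET.fromstring(
    "<html><body><div><div><table><tr><td>data</td></tr></table></div></div></body></html>")

absolute = "./body/div/table/tr/td"  # mirrors a tool-generated absolute path
relative = ".//td"                   # relies on far fewer details

print(page_v1.find(absolute).text)  # data
print(page_v1.find(relative).text)  # data
print(page_v2.find(absolute))       # None: the absolute path broke
print(page_v2.find(relative).text)  # data: the relative one still matches
```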
Upvotes: 0
|
2018/03/16
| 1,840 | 5,626 |
<issue_start>username_0: I'm trying to connect celery with a rabbitMQ broker using SSL certificates.
This is the code:
```
from celery import Celery
import ssl
broker_uri = 'amqp://user:pwd@server:5672/vhost'
certs_conf = {
    "ca_certs": "/certs/serverca/cacert.pem",
    "certfile": "/certs/client/rabbit-cert.pem",
    "keyfile": "/certs/client/rabbit-key.pem",
    "cert_reqs": ssl.CERT_REQUIRED
}
app = Celery('tasks', broker=broker_uri)
app.conf.update(BROKER_USE_SSL=certs_conf)
app.send_task('task.name', [{'a': 1}])
```
When I try to execute this code I get the following exception:
```
Traceback (most recent call last):
File "C:\Python36\lib\site-packages\kombu\utils\functional.py", line 36, in __call__
return self.__value__
AttributeError: 'ChannelPromise' object has no attribute '__value__'
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "test_send_task.py", line 44, in
app.send_task('task.name', [message])
File "C:\Python36\lib\site-packages\celery\app\base.py", line 737, in send_task
amqp.send_task_message(P, name, message, **options)
File "C:\Python36\lib\site-packages\celery\app\amqp.py", line 558, in send_task_message
**properties
File "C:\Python36\lib\site-packages\kombu\messaging.py", line 181, in publish
exchange_name, declare,
File "C:\Python36\lib\site-packages\kombu\connection.py", line 494, in _ensured
return fun(*args, **kwargs)
File "C:\Python36\lib\site-packages\kombu\messaging.py", line 187, in _publish
channel = self.channel
File "C:\Python36\lib\site-packages\kombu\messaging.py", line 209, in _get_channel
channel = self._channel = channel()
File "C:\Python36\lib\site-packages\kombu\utils\functional.py", line 38, in __call__
value = self.__value__ = self.__contract__()
File "C:\Python36\lib\site-packages\kombu\messaging.py", line 224, in
channel = ChannelPromise(lambda: connection.default_channel)
File "C:\Python36\lib\site-packages\kombu\connection.py", line 819, in default_channel
self.ensure_connection()
File "C:\Python36\lib\site-packages\kombu\connection.py", line 405, in ensure_connection
callback)
File "C:\Python36\lib\site-packages\kombu\utils\functional.py", line 333, in retry_over_time
return fun(*args, **kwargs)
File "C:\Python36\lib\site-packages\kombu\connection.py", line 261, in connect
return self.connection
File "C:\Python36\lib\site-packages\kombu\connection.py", line 802, in connection
self._connection = self._establish_connection()
File "C:\Python36\lib\site-packages\kombu\connection.py", line 757, in _establish_connection
conn = self.transport.establish_connection()
File "C:\Python36\lib\site-packages\kombu\transport\pyamqp.py", line 130, in establish_connection
conn.connect()
File "C:\Python36\lib\site-packages\amqp\connection.py", line 288, in connect
self.drain_events(timeout=self.connect_timeout)
File "C:\Python36\lib\site-packages\amqp\connection.py", line 471, in drain_events
while not self.blocking_read(timeout):
File "C:\Python36\lib\site-packages\amqp\connection.py", line 477, in blocking_read
return self.on_inbound_frame(frame)
File "C:\Python36\lib\site-packages\amqp\method_framing.py", line 55, in on_frame
callback(channel, method_sig, buf, None)
File "C:\Python36\lib\site-packages\amqp\connection.py", line 481, in on_inbound_method
method_sig, payload, content,
File "C:\Python36\lib\site-packages\amqp\abstract_channel.py", line 128, in dispatch_method
listener(*args)
File "C:\Python36\lib\site-packages\amqp\connection.py", line 368, in _on_start
b", ".join(self.mechanisms).decode()))
amqp.exceptions.ConnectionError: Couldn't find appropriate auth mechanism (can offer: AMQPLAIN, PLAIN; available: EXTERNAL)
```
Executing the same code without the SSL configuration works just fine. What am I missing?
I can send messages to the broker using pika configured with SSL, but I can't manage to properly configure Celery to send messages to the same broker with SSL.
Thanks in advance.<issue_comment>username_1: Try using the setting:
```
broker_use_ssl=True
```
You can also use the broker url starting with **amqps://...**
Upvotes: 0 <issue_comment>username_2: You need to enable this plugin: <https://github.com/rabbitmq/rabbitmq-auth-mechanism-ssl>
and add the following config to /etc/rabbitmq/rabbitmq.conf:
```
auth_mechanisms.1 = PLAIN
auth_mechanisms.2 = AMQPLAIN
auth_mechanisms.3 = EXTERNAL
```
If you only want the SSL connection, then add EXTERNAL only.
Upvotes: 0 <issue_comment>username_3: Your server may already support SSL client authentication if it's offering the EXTERNAL authentication mechanism. For the Celery client however you need an additional configuration option to use EXTERNAL (i.e. SSL) authentication:
```
app.conf.broker_login_method = 'EXTERNAL'
```
For completeness, a valid celery configuration snippet would look like:
```
...
import ssl
app = Celery("some_name")
app.conf.broker_url = 'amqps://rabbitmq.example.com:5671/vhostname'
app.conf.broker_use_ssl = {
    'keyfile': r'C:\path\to\private\box1-nopass.key.pem',
    'certfile': r'C:\path\to\certs\box1.cert.pem',
    'ca_certs': r'C:\path\to\ca-chain.cert.pem',
    'cert_reqs': ssl.CERT_REQUIRED
}
app.conf.broker_login_method = 'EXTERNAL'
...
```
Note, there's no username:password in the `broker_url` as the username is determined by attributes of the client certificate (and the user must be pre-existing on the RabbitMQ server, configured with "no password") and the default SSL port is 5671 (not 5672).
Upvotes: 1
|
2018/03/16
| 548 | 1,741 |
<issue_start>username_0: Have a look at this jsbin <https://jsbin.com/dipater/edit?html,css,output>
I want the span to appear on line 2 in both cases (container 1 and 2). How can this be achieved?
```
x
```
|
2018/03/16
| 892 | 2,601 |
<issue_start>username_0: I have what seems like a simple question but for which I can find no straightforward answer. I would like to write a function that takes two strings as input and gives an integer as output.
In R, the function would be as simple as:
```
utc_seconds = function(date_string, tz) as.integer(as.POSIXct(date_string, tz = tz))
```
I am in control of `date_string` and know the format will always be proper, e.g. `2018-02-11 00:00:00`, and I also know that `tz` will always be in [Olson format](https://en.wikipedia.org/wiki/Tz_database).
Example input/output:
```
utc_seconds('2018-02-11 00:00:00', tz = 'Asia/Singapore')
# 1518278400
```
I've looked at various combinations/permutations of `datetime`, `pytz`, `time`, etc, to no avail. [This](http://taaviburns.ca/presentations/what_you_need_to_know_about_datetimes/) table looked promising, but ultimately I couldn't figure out how to use it.
I've managed a "hack" as follows, but this feels inane (adding extraneous information to my input string):
```
from dateutil.parser import parse
from dateutil.tz import gettz
parse("2018-02-01 00:00:00 X", tzinfos={'X': gettz('Asia/Singapore')})
# datetime.datetime(2018, 2, 11, 0, 0, tzinfo=tzfile('/usr/share/zoneinfo/Asia/Singapore'))
```
But I can't get that to UTC time either.<issue_comment>username_1: With the nudge from @<NAME>, I've cobbled together the following:
```
from dateutil.parser import parse
from pytz import timezone, utc
from datetime import datetime
def utc_seconds(input, tz):
    tz = timezone(tz)
    dt = tz.localize(parse(input), is_dst = None)
    return int((dt - datetime(1970, 1, 1, tzinfo = utc)).total_seconds())
utc_seconds('2018-02-11 00:00:00', 'Asia/Singapore')
# 1518278400
```
I also came up with the following alternative owing to the happy circumstance that my set-up is already tied into a Spark context:
```
def utc_seconds(input, tz):
    query = "select unix_timestamp(to_utc_timestamp('{dt}', '{tz}'))" \
        .format(dt = input, tz = tz)
    return spark.sql(query).collect()[0][0]
```
(i.e., kick the can to a friendlier language and collect the result)
Upvotes: 0 <issue_comment>username_2: You can use datetime's `timestamp` method to get the epoch time:
```
from datetime import datetime
import pytz
def utc_seconds(str_dt, timezone):
    timezone = pytz.timezone(timezone)
    dt = datetime.strptime(str_dt, '%Y-%m-%d %H:%M:%S')
    dt_timezone = timezone.localize(dt)
    return int(dt_timezone.timestamp())
utc_seconds('2018-02-11 00:00:00', 'Asia/Singapore')
# 1518278400
```
Upvotes: 2 [selected_answer]
|
2018/03/16
| 759 | 2,682 |
<issue_start>username_0: I have a few Crystal Reports on my website. On my local machine it works, but in the test environment, on another server, when trying to use a report I get the following problem:
[](https://i.stack.imgur.com/Ep3B8.png)
I can't imagine what the problem can be, because I'm using the same DLLs and configs in the test environment, too.
I even tried with Process Monitor, and I don't see any errors:
[enter image description here](https://i.stack.imgur.com/MFvmn.png)<issue_comment>username_1: This is a Crystal Reports version problem. Remove the old version and reinstall the new version.
For VS 2013 use CRforVS_13_0_23.
Upvotes: 3 <issue_comment>username_2: After making the changes, restart the whole IIS server; that worked for me.
Upvotes: 1 <issue_comment>username_3: If you are using *IIS*:
* Go to the application pool
* Select your web site
* Click on Advanced Settings on the right side
* Set 'Enable 32-bit application' true
Crystal report runtime engine for .net 32-bit had to be installed.
Upvotes: 1 <issue_comment>username_4: My problem was sorted with the following Crystal version:
CRforVS_redist_install_64bit_13_0_20
First, uninstall any other Crystal runtime version and install this one.
Upvotes: 1 <issue_comment>username_5: After installing CRRuntime_64bit_13_0_21.msi I started seeing this error using the Crystal Viewer within an ASP.NET web application. These reports previously worked using a prior version of the Crystal runtime, **so the quick fix is to uninstall the 13.0.21 runtime and install CRRuntime_64bit_13_0_20.msi (or lower) in its place, and reports will most likely work.**
This is a known** Crystal Reports issue when web applications are published to an environment that is running Crystal runtime 13.0.21 or higher while your application's binaries were built using the 13.0.20 or earlier runtime in your build environment. With the release of 13.0.21 and later you now need to have them both at the same level, either both at .20 or lower, or both at .21 or higher.
** "As most of CR/RAS .NET Assemblies are now re-versioned from 13.0.2000.0 to 13.0.3500.0 and for SP 26 are now 13.0.4000.0, user MUST remove all old CR assemblies from Reference list and add the new version of CR assemblies, then rebuild the application." ref. <https://wiki.scn.sap.com/wiki/display/BOBJ/Crystal+Reports,+Developer+for+Visual+Studio+Downloads>
There may be one way around this, by using a special runtime section in the web.config but I haven't tested whether that will help this specific issue so I won't include it here.
Upvotes: 0
|
2018/03/16
| 1,114 | 3,542 |
<issue_start>username_0: I am trying to get content from Message in SNS event in node js lambda project
here is a code for processing message
```
exports.handler = (event, context, callback) => {
    var message = event.Records[0].Sns.Message;
    console.log('Message received from SNS:', message);
    message.Events.forEach(element => {
        console.log(element);
    });
};
```
sample event:
```
{
"Records":
[
{
"EventSource": "aws:sns",
"EventVersion": "1.0",
"EventSubscriptionArn": "",
"Sns":
{
"Type": "Notification",
"MessageId": "bc86f105-c320",
"TopicArn": "arn:aws:sns:ewerwrewrw",
"Subject": "dwqweq23234",
"Message":
{
"Events":
[
{"EventTimestamp":"2018-03-16T10:51:22Z"},
{"EventTimestamp":"2018-03-16T10:51:22Z"}
],
"EventDocVersion":"2014-08-15"
},
"Timestamp": "2018-03-16T10:51:22.691Z",
"SignatureVersion": "1",
"Signature": "",
"SigningCertUrl": "",
"UnsubscribeUrl": "",
"MessageAttributes": {}
}
}
]
}
```
This is what I get in CloudWatch logs:
>
> Message received from SNS:
> {
> "Events":
> [
> {"EventTimestamp":"2018-03-16T10:51:22Z"},
> {"EventTimestamp":"2018-03-16T10:51:22Z"}
> ],
> "EventDocVersion":"2014-08-15"
> }
>
>
> TypeError: Cannot read property 'forEach' of undefined
> at exports.handler
>
>
>
Why I am not being able to parse 'Events' inside message object in event?<issue_comment>username_1: try **message["Events"].forEach** instead of message.Events may work and check weather property exists using.Actually **message.Events** this should work if you get the same object in console as you have mentioned but you can avoid error at least by checking the property.
----Edit----**if(message && message.hasOwnProperty('Events'))**
```
if (message && message.hasOwnProperty('Events')) {
    message.Events.forEach(element => {
        console.log(element);
    });
}
```
I think the message that you are getting is blank; first try printing it, because I tried the snippet below in the browser and it worked properly:
```
var obj={
"Records":
[
{
"EventSource": "aws:sns",
"EventVersion": "1.0",
"EventSubscriptionArn": "",
"Sns":
{
"Type": "Notification",
"MessageId": "bc86f105-c320",
"TopicArn": "arn:aws:sns:ewerwrewrw",
"Subject": "dwqweq23234",
"Message":
{
"Events":
[
{"EventTimestamp":"2018-03-16T10:51:22Z"},
{"EventTimestamp":"2018-03-16T10:51:22Z"}
],
"EventDocVersion":"2014-08-15"
},
"Timestamp": "2018-03-16T10:51:22.691Z",
"SignatureVersion": "1",
"Signature": "",
"SigningCertUrl": "",
"UnsubscribeUrl": "",
"MessageAttributes": {}
}
}
]
};
var fun1 = function(event, context, callback){
    var message = event.Records[0].Sns.Message;
    console.log('Message received from SNS:', message);
    console.log("starting")
    message["Events"].forEach(element => {
        console.log(element);
    });
};
fun1(obj,'',function(){console.log("uu")})
```
Upvotes: 0 <issue_comment>username_2: It worked after I changed it to this:
```
var message = event.Records[0].Sns.Message;
var msgJson = JSON.parse(message);
msgJson["Events"].forEach(element => { .....
```
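A minimal, self-contained reproduction of the fix (the sample event is abbreviated): SNS delivers `Sns.Message` as a JSON *string*, so its fields are only reachable after `JSON.parse`:

```javascript
// Minimal reproduction of the fix: SNS delivers `Sns.Message` as a JSON
// string, so its fields are only reachable after JSON.parse.
const event = {
  Records: [{
    Sns: {
      Message: JSON.stringify({
        Events: [
          { EventTimestamp: "2018-03-16T10:51:22Z" },
          { EventTimestamp: "2018-03-16T10:51:22Z" }
        ],
        EventDocVersion: "2014-08-15"
      })
    }
  }]
};

const raw = event.Records[0].Sns.Message;
console.log(typeof raw); // "string": raw.Events would be undefined here

const msgJson = JSON.parse(raw);
msgJson.Events.forEach(element => console.log(element.EventTimestamp));
```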
Upvotes: 4 [selected_answer]
|
2018/03/16
| 352 | 1,448 |
<issue_start>username_0: How can we hide the toolbar entries in Eclipse from a plugin, as shown in the image?
[](https://i.stack.imgur.com/xgov2.png)
In the toolbar, how can we hide only the perspective switcher, as shown in the figure?
[](https://i.stack.imgur.com/objNt.png)<issue_comment>username_1: That menu item (which toggles the toolbar on/off) uses a command with the id `org.eclipse.ui.ToggleCoolbarAction`. So you need to execute that command.
You can execute a command using the `IHandlerService`:
```java
IHandlerService handlerService = PlatformUI.getWorkbench().getService(IHandlerService.class);
handlerService.executeCommand("org.eclipse.ui.ToggleCoolbarAction", null);
```
Upvotes: 1 <issue_comment>username_2: This helped me hide the perspective toolbar entries:
```
IWorkbenchWindow window = PlatformUI.getWorkbench()
        .getActiveWorkbenchWindow();
if (window instanceof WorkbenchWindow) {
    MTrimBar topTrim = ((WorkbenchWindow) window).getTopTrim();
    for (MTrimElement element : topTrim.getChildren()) {
        if ("PerspectiveSwitcher".equals(element.getElementId())) {
            element.setVisible(false);
            break;
        }
    }
}
```
Upvotes: 1 [selected_answer]
|
2018/03/16
| 514 | 2,039 |
<issue_start>username_0: I've had a problem since updating my Chrome browser to Version 65.0.3325.162 (latest).
When my tests run, after each method finishes, extra zombie Chrome processes appear in Task Manager and consume a lot of CPU.
[](https://i.stack.imgur.com/mJoc0.png)
Is there any change to the driver.quit() method in Chrome 65? I will add that on the previous version of the Chrome browser all was OK.
I use a data provider, so calling the quit() method is necessary for my test suite to work correctly.
I use the terminate() method below to close the browser after each test class.
My stuff:
Windows 10
Selenium WebDriver
ChromeDriver 2.36
Selenium WebDriver 2.20
```
@AfterClass(alwaysRun = true)
protected void terminate() {
    if (browser != null) {
        try {
            browser.quit();
            browser = null;
        } catch (UnreachableBrowserException ex) {
            TestReporter.log(ex.getMessage());
        } catch (NoSuchSessionException noSuchSessionException) {
            TestReporter.log("Tried to quit browser with NULL session: " + noSuchSessionException.getMessage());
        }
    }
    if (application != null) {
        application = null;
    }
}
```<issue_comment>username_1: Updating to a new version of the Chrome browser resolved my problem. It seems that Chrome 65.0.3325.162 had an issue which caused the creation of many zombie Chrome processes.
Upvotes: 1 <issue_comment>username_2: 1) Get the driver as singleton
```
@Singleton
class BrowserInstance {
    ChromeDriver getDriver(){
        ChromeOptions options = new ChromeOptions()
        options.addArguments("--headless --disable-gpu")
        return new ChromeDriver(options)
    }
}
```
2) Use Close and quit in finally block
```
finally {
    chromeDriver.close()
    chromeDriver.quit()
}
```
Result: you will be using only one instance at a time, and if you look at Task Manager you will not find chromedriver or Chrome processes hanging.
Upvotes: -1
|
2018/03/16
| 1,401 | 3,910 |
<issue_start>username_0: There is a website that provides referral points to the signed-in user. In order to get the points, people are using genuine email IDs (say, in Gmail) with dots added in various places in the email address. Pasted below are some samples of this case. If this kind of data is already present in the MySQL database from various other users, how can I identify it? My SQL query or PHP code snippet will be helpful.
```
si.t.i.a.m.i.n192.168.127.12 <EMAIL>
<EMAIL>.m.<EMAIL>
<EMAIL>.m.i.n1.45.2.2<EMAIL>
<EMAIL>
<EMAIL>
```
There are more different email ids like that.
EDIT: I'd request the downvoters to also provide a reason.<issue_comment>username_1: If it's "si.t.i.a.m.i.n1" you're looking for, there's a very easy solution within MySQL:
select * from {table name} where email like '%si.t.i.a.m.i.n1%'
I would recommend basing this on IP addresses rather than email addresses.
Upvotes: -1 <issue_comment>username_2: You could use [`similar_text()`](https://php.net/similar_text) to check if a string is similar to another.
The function below will return `true` if any entry in `$values` is at least 80% similar to `$to_test`.
```
$values = [
'si.t.i.a.m.i.n192.168.127.12 <EMAIL>',
'<EMAIL> <EMAIL>',
's<EMAIL>1.45.2.2<EMAIL>',
'si.t.i.<EMAIL>',
'<EMAIL>',
];
function has_similar($to_test, $values, $similar = 80) {
    $perc = 0;
    foreach ($values as $key => $value) {
        similar_text($value, $to_test, $perc);
        if ($perc > $similar) return true;
    }
    return false;
}
var_dump(has_similar('<EMAIL>', $values)); // true
var_dump(has_similar('<EMAIL>', $values)); // false
```
This will output:
```
bool(true)
bool(false)
```
Upvotes: 1 <issue_comment>username_3: If you have a variable in PHP, like
```
$email = '<EMAIL>';
$email = explode('@', $email);
```
then the query is like this:
```
$sql = 'SELECT email
FROM user
WHERE
REPLACE(SUBSTRING_INDEX(email, "@", 1), ".", "") = "'.$email[0].'"';
```
**Updated as the user requested: SQL only, to find the duplicated emails**
```
SELECT CONCAT(REPLACE(SUBSTRING_INDEX(email, '@', 1), '.', ''), '@',
SUBSTRING_INDEX(email, '@', -1)) AS email_replaced,
COUNT(email) as total_duplicated
FROM user
GROUP BY email_replaced
```
Upvotes: 1 <issue_comment>username_4: Based on [Rendi](https://stackoverflow.com/users/9483563/rendi-wahyudi-muliawan)'s answer, I have arrived at the query below, which gives the count of duplicates.
```
SELECT email,count(REPLACE(SUBSTRING_INDEX(email, "@", 1), ".", "")) as counted
FROM test
group by REPLACE(SUBSTRING_INDEX(email, "@", 1), ".", "")
having counted > 5
order by counted desc
```
Upvotes: 1 <issue_comment>username_5: You could do this using a regular expression pattern to identify cheating characters.
For example, this code snippet does the job as you described it in your question, and it could be extended if you have advanced needs for cheating characters (`$addresses` is the array which holds all the e-mail addresses to check for duplication, and `$uniqueAddresses` is the array with only unique addresses):
```
$addresses = array(
'si.t.i.a.m.i.n192.168.127.12 <EMAIL>',
'<EMAIL>.m.<EMAIL>',
'si.t.i.a.m.i.n172.16.31.10 <EMAIL>',
'<EMAIL>',
'<EMAIL>.45.2<EMAIL>',
'<EMAIL>',
'<EMAIL>',
'<EMAIL>'
);
$cheatPattern = '/\./'; // the "cheating" characters to strip; extend as needed
foreach ($addresses as &$address) {
    $lastPos = strrpos($address, '@');
    $namePart = substr($address, 0, $lastPos);
    $domainPart = substr($address, $lastPos + 1);
    $address = preg_replace($cheatPattern, '', $namePart) . '@' . $domainPart;
}
unset($address);
$uniqueAddresses = array_unique($addresses);
```
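For comparison, and outside the PHP/SQL the question asks for, the core normalization is small enough to sketch in Python; the function name and sample addresses are illustrative only:

```python
def normalize_gmail(address):
    """Collapse the dots Gmail ignores in the local part of an address."""
    local, sep, domain = address.rpartition('@')
    return local.replace('.', '') + sep + domain

# Illustrative addresses standing in for the anonymized samples above:
emails = ['s.i.t.i.amin@gmail.com', 'sitiamin@gmail.com', 'siti.amin@gmail.com']

unique = {normalize_gmail(e) for e in emails}
print(unique)  # all three collapse to one canonical address
```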
Upvotes: 0
|
2018/03/16
| 422 | 1,505 |
<issue_start>username_0: I have a project that requires running multiple services, each in a different folder, plus a DB server.
How can I automate running all of them, each in its own terminal, instead of
entering each folder and running "npm start" in a separate terminal window?
thanks.<issue_comment>username_1: You can use this:
<https://www.npmjs.com/package/concurrently>
>
> concurrently "command1 arg" "command2 arg"
>
>
>
Upvotes: 0 <issue_comment>username_2: **In development:**
Take a look at `tmux` or `screen`. They provide an easy way to run multiple shells. You can even automate the startup ([How to write a shell script that starts tmux session, and then runs a ruby script](https://stackoverflow.com/questions/31902929/how-to-write-a-shell-script-that-starts-tmux-session-and-then-runs-a-ruby-scrip)), and there are tools to configure it easily, eg. <https://github.com/remiprev/teamocil>
**In production:**
You probably don't want production apps/services running in a detached terminal. There are more suitable tools for that, with proper logging et cetera. I have had a good experience with:
* [forever](https://www.npmjs.com/package/forever), for running Node apps
* [supervisord](http://supervisord.org/), for running… anything, really
Upvotes: 0 <issue_comment>username_3: You can run them like this:
```
npm start & mongod & node public/app.js &
```
`&` makes the process run in the background, but once the session is closed the server stops.
You can use `nohup` to keep it running.
Upvotes: 1
|
2018/03/16
| 875 | 2,580 |
<issue_start>username_0: I started learning C++ a few days ago.
I want to compile this example program in order to embed Python in C ++.
The C++ program is:
```
#include <Python.h>
#include <iostream>
#define pi 3.141592653589793
using namespace std;
int main () {
//Start the Python interpreter and print relevant information
Py_Initialize();
PyObject *FileScript;
FileScript = PyFile_FromString("script.py","r");
PyRun_SimpleFile(PyFile_AsFile(FileScript),"r");
PyObject *retorno, *modulo, *clase, *metodo, *argumentos, *objeto;
int *resultado;
modulo = PyImport_ImportModule("script");
clase = PyObject_GetAttrString(modulo, "Numeros");
argumentos = Py_BuildValue("ii",5,11);
objeto = PyEval_CallObject(clase, argumentos);
metodo = PyObject_GetAttrString(objeto, "suma");
argumentos = Py_BuildValue("()");
retorno = PyEval_CallObject(metodo,argumentos);
PyArg_Parse(retorno, "i", &resultado);
cout<<"Result is: "<>terminar;
return 1;
}
```
and python script "script.py" is:
```
class Numeros:
def __init__(self, num1, num2):
self.num1=num1
self.num2=num2
def suma(self):
print self.num1, self.num2
return self.num1+self.num2
```
I'm using Ubuntu with G++ installed. I type this to compile:
```
g++ -I/usr/include/python2.7 -lpython2.7 main.cpp -o main
```
But I get this error:
```
main.cpp:27:42: error: cast from ‘int*’ to ‘int’ loses precision [-fpermissive]
cout<<"Result is: "<
```
How can I solve it? Thank you very much!<issue_comment>username_1: The size of a pointer may be greater than the size of an int, depending on the memory model.
```
int iSomeValue = 0;
std::cout << "size of int = " << sizeof(iSomeValue)<< " | size of pointer = " << sizeof(&iSomeValue);
```
For Visual Studio 2013 Win32 output is:
size of int = 4 | size of pointer = 4
For Visual Studio 2013 x64 output is:
size of int = 4 | size of pointer = 8
Upvotes: 1 <issue_comment>username_2: For a 64bit machine, it's tricky to cast an int pointer(hex format) to int value or char pointer to int value because each block of memory for an int variable is 32 bit and for char it's 8 bit. For example:
```
1. char SomeString[] = "Hello!";
2. char *pString = SomeString;
3. cout << "pString = " << pString << endl;
4. char *pLocation3 = &SomeString[3];
5. cout << "pLocation3 = " << (int)pLocation3 << endl;
```
You will get an error: cast from ‘char\*’ to ‘int’ loses precision [-fpermissive]
To solve this, in line 5, instead of **(int)pLocation3** we have to use **(int64_t)pLocation3**, which is compatible with a 64-bit machine.
Upvotes: 0
|
2018/03/16
| 1,135 | 3,868 |
<issue_start>username_0: I'm trying to allow users to manipulate a list in Python.
```
number_of_commands = int(input())
x = 0
my_list = []
while x <= number_of_commands:
command, i, e = input().split(' ')
command = str(command)
i = int(i)
e = int(e)
x = x + 1
if command == 'insert':
my_list.insert(i, e)
elif command == 'print':
print(my_list)
elif command == 'remove':
my_list.remove(e)
elif command == 'append':
my_list.append(e)
elif command == 'sort':
my_list.sort()
elif command == 'pop':
my_list.pop()
elif command == 'reverse':
my_list.reverse()
else:
print("goodbye")
```
When users enter a command which requires two integers (such as `insert`), the program works, but when users enter something like `print` I get the error "not enough values to unpack". It only works if you input it as print 0 0. How could I allow users to enter commands with integers and without integers?<issue_comment>username_1: Here is where the unpacking is happening:
```
command, i, e = input().split(' ')
```
entering "print" only won't allow this line to execute properly, as no values for i and e were provided.
So just read the input, then split it and check how many arguments did the user supply:
```
input_str = input()
input_str_split = input_str.split(' ')
if len(input_str_split) == 3:
command, i, e = input_str_split
i = int(i)
e = int(e)
else:
    command = input_str_split[0]
```
Upvotes: 0 <issue_comment>username_2: You can use
```
command, *values = input().split(' ')
```
values is a list. For example the 'insert' part becomes:
```
if command == 'insert':
my_list.insert(int(values[0]), int(values[1]))
```
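A quick check of how the star-unpacking behaves for both kinds of input:

```python
# Three tokens: the first goes to command, the rest land in values.
command, *values = "insert 0 5".split(' ')
assert command == 'insert' and values == ['0', '5']

# A bare command still unpacks fine: values is just an empty list.
command, *values = "print".split(' ')
assert command == 'print' and values == []
```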
Upvotes: 3 [selected_answer]<issue_comment>username_3: ```
def my_function(command='print', i=0, e=0):
# your function code here
user_input = input().split(' ')
if len(user_input) > 3:
print('usage : function i e')
else:
my_function(*user_input)
```
Using `*` before the variable unpacks the remaining list items into it, so they can be passed as arguments to your function.
Using a function is a nice way to have default values in case they aren't defined.
Upvotes: 1 <issue_comment>username_4: This error occurs since you're always expecting a command to have 3 inputs:
```
command, i, e = input().split(' ')
```
This is what happens when you use just print:
```
>>> "print".split(' ')
['print']
```
So the output of `input.split()` is a list with only one element. However:
```
command, i, e = input().split(' ')
```
is expecting 3 elements: `command`, `i` and `e`.
Other answers already showed you how to solve it by modifying your code, but that can get pretty clunky as you add more commands. Instead, you can use Python's built-in `cmd` module to create your own prompt. ([Original post where I read about the cmd module](https://coderwall.com/p/w78iva/give-your-python-program-a-shell-with-the-cmd-module))
```
from cmd import Cmd
class MyPrompt(Cmd):
my_list = []
def do_insert(self, *args):
"""Inserts element in List"""
self.my_list.append(*args)
def do_print(self, *args):
print(self.my_list)
def do_quit(self, args):
"""Quits the program."""
raise SystemExit
if __name__ == '__main__':
prompt = MyPrompt()
prompt.prompt = '> '
prompt.cmdloop('Starting prompt...')
```
Example:
```
$ python main.py
Starting prompt...
> insert 2
> insert 3
> insert 4
> print
['2', '3', '4']
>
```
cmd also lets you document the code, since I didn't make a docstring for `print` this is what gets shown once I type `help` in the terminal:
```
> help
Documented commands (type help ):
========================================
help insert quit
Undocumented commands:
======================
print
```
I leave adding the other commands as a fun exercise to you. :)
Upvotes: 0
|
2018/03/16
| 1,155 | 3,751 |
<issue_start>username_0: I'm translating from Excel to R in order to achieve better results.
So actually i got a data.frame like this:
```
A B C D E F G
0 0 0 0 0 0 0
2 0 0 0 0 0 0
2 0 0 2 0 0 1
1 0 0 2 0 1 0
```
So [A:G] are the name of the columns that can just contain 0, 1 or 2 as number.
What i would like to do is plot a histogram or whatever in order to have one bar that rapresent one column, that should be divided as percentage (betweeen 0, 1 and 2), with all the column in the same graph.
[](https://i.stack.imgur.com/FMhKI.png)
From the image we can also see that on y-axis i prefer to see 0 to 100 and not the number of the rows, but again from a percentage perspective. The previous image is exactly what i need (also with the possibility to customize colors etc..) but for 7 columns.
Thanks a lot, Andrea.
|
2018/03/16
| 745 | 2,317 |
<issue_start>username_0: I am trying to implement a median filter in python using the following code
```
from PIL import Image
path = "gaussian.png" # Your image path
img = Image.open(path)
width, height = Image.size
members = [(0,0)] * 9
newimg = Image.new("RGB",(width,height),"white")
for i in range(1,width-1):
for j in range(1,height-1):
members[0] = img.getpixel((i-1,j-1))
members[1] = img.getpixel((i-1,j))
members[2] = img.getpixel((i-1,j+1))
members[3] = img.getpixel((i,j-1))
members[4] = img.getpixel((i,j))
members[5] = img.getpixel((i,j+1))
members[6] = img.getpixel((i+1,j-1))
members[7] = img.getpixel((i+1,j))
members[8] = img.getpixel((i+1,j+1))
members.sort()
newimg.putpixel((i,j),(members[4]))
```
however I keep getting an error saying NameError: name 'width' is not defined<issue_comment>username_1: Neither height nor width is defined.
You could maybe get the height and width of your image by doing the following:
```
width, height = img.size
```
I hope this is useful
Upvotes: 0 <issue_comment>username_2: You need to take the size from the image object you opened, not from the `Image` module.
replace
```
width, height = Image.size
```
**with**
```
width, height = img.size
```
Upvotes: 2 <issue_comment>username_3: Please find the working code below; it might help:
```
from PIL import Image

def main():
path = "gaussian.png" # Your image path
img = Image.open(path)
width, height = img.size
print(width, height)
members = [(0,0)] * 9
newimg = Image.new("RGB",(width,height),"white")
for i in range(1,width-1):
for j in range(1,height-1):
members[0] = img.getpixel((i-1,j-1))
members[1] = img.getpixel((i-1,j))
members[2] = img.getpixel((i-1,j+1))
members[3] = img.getpixel((i,j-1))
members[4] = img.getpixel((i,j))
members[5] = img.getpixel((i,j+1))
members[6] = img.getpixel((i+1,j-1))
members[7] = img.getpixel((i+1,j))
members[8] = img.getpixel((i+1,j+1))
members.sort()
newimg.putpixel((i,j),(members[4]))
if __name__ == '__main__':
main()
```
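The heart of the filter — taking the median of the nine neighbourhood samples — can be checked in isolation (the window values below are made-up grayscale samples; note that for RGB images `getpixel` returns tuples, so the sort above compares them lexicographically rather than per channel):

```python
# Median of a 3x3 neighbourhood: sort the nine samples, take the middle one.
window = [12, 200, 13, 11, 14, 250, 10, 13, 12]  # two outlier pixels
members = sorted(window)
median = members[len(members) // 2]  # index 4 of 9, as in the code above
assert median == 13  # the outliers 200 and 250 are rejected
```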
Upvotes: 0
|
2018/03/16
| 561 | 2,191 |
<issue_start>username_0: I have two objects
```
"Conditions1": [
{
"fieldToken": "value1",
"uniqueName": "value2",
"conditionOperator": ">",
"conditionValue": "value3"
},
{
"fieldToken": "value1",
"uniqueName": "value2",
"conditionOperator": "==",
"conditionValue": "value3"
}
]
"Conditions2": [
{
"fieldToken": "value1",
"uniqueName": "value2",
"conditionOperator": ">",
"conditionValue": "value3"
},
{
"fieldToken": "value1",
"uniqueName": "value2",
"conditionOperator": "==",
"conditionValue": "value3"
}
]
```
I want to compare these two objects to see whether they are equal or not. Both objects are the same, but when I use the equals method it returns false. How can I compare arrays of JSON objects?<issue_comment>username_1: As you have two objects, put them in two variables and convert both to strings, e.g.:
```
var a = JSON.stringify(Conditions1);
var b = JSON.stringify(Conditions2);
if (a === b) {
    console.log("matched")
}
```
***Note: the two only match when both arrays contain the same objects, with keys and elements in the same order.***
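Note that string-based comparison is order-sensitive: the same data with keys in a different order serializes differently. Here is a sketch of a key-order-insensitive check (the `canonical` helper is made up for illustration):

```javascript
// Serialize with the keys of each object sorted, so key order doesn't matter.
function canonical(obj) {
  return JSON.stringify(obj, Object.keys(obj).sort());
}

const a = { name: 'John', age: 36 };
const b = { age: 36, name: 'John' };   // same data, different key order

console.assert(JSON.stringify(a) !== JSON.stringify(b)); // naive compare fails
console.assert(canonical(a) === canonical(b));           // canonical compare matches
```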
Upvotes: -1 [selected_answer]<issue_comment>username_2: There is such a thing as a JSONObject if you use package org.json. You create a JSONObject:
```
import org.json.JSONObject;
JSONObject firstJson = new JSONObject();
JSONObject secondJson = new JSONObject();
firstJson.put("test1", "value1");
secondJson.put("test1", "value1"); // change value1 to something else to test
boolean result = firstJson.similar(secondJson);
System.out.println("result: " + result);
```
Change value1 or any of the values to test. It will print true or false depending on your test values. The source code says: "They must contain the same set of names which must be associated with similar values." The source can be found here: <https://github.com/stleary/JSON-java/blob/master/JSONObject.java>
To test an array of JSONObjects, use JSONArray from the same package, and break the array up into individual JSONObjects to compare them.
Upvotes: 0
|
2018/03/16
| 1,349 | 4,987 |
<issue_start>username_0: If Thread1 tries to lock a resource locked by Thread2,
does it go to sleep for a finite time?
Now if Thread2 unlocks the mutex, how would Thread1 come to know that the resource is available? Does the operating system wake it up, or does it check for the resource periodically?<issue_comment>username_1: Your second assumption is correct. When a mutex is already locked by a thread, all the remaining threads that try to lock it are placed on hold, in the sleep state. Once the mutex is unlocked, the OS wakes them up, and whichever thread locks it first gets access. This is not on a FIFO basis; in fact, there is no rule about which thread gets first preference to lock the mutex once the waiters wake up. You can consider my example below, where I have used condition variables to control the threads:-
```
pthread_cond_t cond1 = PTHREAD_COND_INITIALIZER;
pthread_cond_t cond2 = PTHREAD_COND_INITIALIZER;
pthread_cond_t cond3 = PTHREAD_COND_INITIALIZER;
pthread_mutex_t lock1 = PTHREAD_MUTEX_INITIALIZER;
pthread_mutex_t lock2 = PTHREAD_MUTEX_INITIALIZER;
pthread_mutex_t lock3 = PTHREAD_MUTEX_INITIALIZER;
int TRUE = 1;
void print(char *p)
{
printf("%s",p);
}
void * threadMethod1(void *arg)
{
printf("In thread1\n");
do{
pthread_mutex_lock(&lock1);
pthread_cond_wait(&cond1, &lock1);
print("I am thread 1st\n");
pthread_cond_signal(&cond3);/* Now allow 3rd thread to process */
pthread_mutex_unlock(&lock1);
}while(TRUE);
pthread_exit(NULL);
}
void * threadMethod2(void *arg)
{
printf("In thread2\n");
do
{
pthread_mutex_lock(&lock2);
pthread_cond_wait(&cond2, &lock2);
print("I am thread 2nd\n");
pthread_cond_signal(&cond1);
pthread_mutex_unlock(&lock2);
}while(TRUE);
pthread_exit(NULL);
}
void * threadMethod3(void *arg)
{
printf("In thread3\n");
do
{
pthread_mutex_lock(&lock3);
pthread_cond_wait(&cond3, &lock3);
print("I am thread 3rd\n");
pthread_cond_signal(&cond2);
pthread_mutex_unlock(&lock3);
}while(TRUE);
pthread_exit(NULL);
}
int main(void)
{
pthread_t tid1, tid2, tid3;
int i = 0;
printf("Before creating the threads\n");
if( pthread_create(&tid1, NULL, threadMethod1, NULL) != 0 )
printf("Failed to create thread1\n");
if( pthread_create(&tid2, NULL, threadMethod2, NULL) != 0 )
printf("Failed to create thread2\n");
if( pthread_create(&tid3, NULL, threadMethod3, NULL) != 0 )
printf("Failed to create thread3\n");
pthread_cond_signal(&cond1);/* Now allow first thread to process first */
sleep(1);
TRUE = 0;/* Stop all the thread */
sleep(3);
/* this is how we join thread before exit from a system */
/*
pthread_join(tid1,NULL);
pthread_join(tid2,NULL);
pthread_join(tid3,NULL);*/
exit(0);
}
```
Here I am using three mutexes and three condition variables. With the above example you can schedule, control, or prioritize any number of threads in C. If you look at the first thread, it locks mutex lock1 and waits on cond1; likewise the second thread locks mutex lock2 and waits on condition cond2, and the third thread locks mutex lock3 and waits on condition cond3. This is the situation of all the threads after they are created: each one is waiting for a signal on its condition variable before executing further. In the main thread (i.e. the main function — every program has one main thread; in C/C++ this main thread is created automatically by the operating system once the kernel passes control to the main method) we call pthread_cond_signal(&cond1);. Once this call is done, thread1, which was waiting on cond1, is released and starts executing. Once it finishes its task it calls pthread_cond_signal(&cond3);, so the thread waiting on condition cond3, i.e. thread3, is released and starts executing, and it in turn calls pthread_cond_signal(&cond2);, which releases the thread waiting on condition cond2 — in this case thread2.
Upvotes: 2 <issue_comment>username_2: **Fundamental information about the mutex (MUtual Exclusion locks)**
A mutex is a special lock that only one thread may lock at a time. If a thread locks a mutex and then a second thread also tries to lock the same mutex, the second thread is blocked, or put on hold. Only when the first thread unlocks the mutex is the second thread unblocked—allowed to resume execution.
* Linux guarantees that race conditions do not occur among threads attempting to lock a mutex; only one thread will ever get the lock, and all other threads will be blocked.
* A thread may attempt to lock a mutex by calling pthread\_mutex\_lock on it. If the mutex was unlocked, it becomes locked and the function returns immediately.
---
**What happens when trying to lock a mutex that is locked by another thread?**
If the mutex was locked by another thread, `pthread_mutex_lock` blocks execution and returns only eventually when the mutex is unlocked by the other thread.
Upvotes: 1
|
2018/03/16
| 1,985 | 7,422 |
<issue_start>username_0: So i am running the same query on phpmyadmin and in my php code.
The query is :
```
SELECT comments.roomID,comments.message, comments.dateTimeSent, sender.fname ,sender.lname,receiver.fname, receiver.lname
FROM comments
INNER JOIN users as sender ON comments.senderID = sender.id
INNER JOIN users as receiver ON comments.receiverID = receiver.id
INNER JOIN chatRooms ON comments.roomID = chatRooms.id WHERE comments.roomID = 8;
```
when i run this directly from phpmyadmin panel the result i get is this
[](https://i.stack.imgur.com/cZsw3.png)
But when i run this in my php code i get this as the result:
```
Array
(
[roomID] => 8
[message] => Hello from mysql database
[dateTimeSent] => 2018-03-16 11:04:03
[id] => 23
[fname] => pavlos
[lname] => elpidorou
)
Array
(
[roomID] => 8
[message] => asdasd;asda
[dateTimeSent] => 2018-03-16 11:21:30
[id] => 25
[fname] => Antreas
[lname] => antoniou
)
```
the `sender.fname`, `sender.lname`, `receiver.fname`,`receiver.lname` are missing from the array
the code i use to execute the query and get the results is as follows
```
foreach ($chatRoomArray as &$room) {
$roomID = $room['id'];
$query = "SELECT comments.roomID,comments.message, comments.dateTimeSent,sender.id, sender.fname ,sender.lname,receiver.fname, receiver.lname
FROM comments
INNER JOIN users as sender ON comments.senderID = sender.id
INNER JOIN users as receiver ON comments.receiverID = receiver.id
INNER JOIN chatRooms ON comments.roomID = chatRooms.id WHERE comments.roomID = ".$roomID;
$stmt = $this->conn->prepare($query);
$result = $stmt->execute();
$commentArray = array();
if ($result) {
$num = $stmt->rowCount();
if ($num > 0) {
while ($row = $stmt->fetch(PDO::FETCH_ASSOC)) {
extract($row);
print_r($row);
$comment = array(
"commentID" => $row['id'],
"message"=>$row['message'],
"dateTimeSent"=>$row['dateTimeSent'],
"senderFname"=>$row['sender.fname'],
"senderLname" => $row['sender.lname'],
"receiverFname" => $row['receiver.fname'],
"receiverLname" => $row['receiver.lname']
);
}
}
}
}
```
|
2018/03/16
| 602 | 1,950 |
<issue_start>username_0: I'm using the following example to check if an item is in a list:
```
var = 'a'
var_list = ('a','b','c')
if var in var_list:
do_something()
```
But in my case, what I have is a dictionary and a list of dictionaries:
```
var = {'name': 'John', 'age': 35, 'city': 'Orlando'}
var_list = ( {'name': 'John', 'age': 36, 'city': 'Orlando'} , \
{'name': 'Alex', 'age': 22, 'city': 'New York'} , \
{'name': 'Celes', 'age': 24, 'city': 'Vector'} )
if var['name'] in var_list:
do_something()
```
I need to check only the key 'name' in the comparison; otherwise, if I do `var in var_list`, the key `age` will be different and the condition will never be true. Is it possible to compare only the `name` key?
Of course, I can iterate and check item by item, but if there is a function or something that will reduce the execution time, that would be great.
```
if var['name'] in [d['name'] for d in var_list]:
doSomething()
```
Upvotes: 3 [selected_answer]<issue_comment>username_2: One way is to test versus a set of names from your list of dictionaries.
```
from operator import itemgetter as iget
name_set = set(map(iget('name'), var_list))
if var['name'] in name_set:
do_something()
```
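Applied to the data from the question, the set is built once and each membership test is then a constant-time lookup:

```python
from operator import itemgetter

var = {'name': 'John', 'age': 35, 'city': 'Orlando'}
var_list = ({'name': 'John', 'age': 36, 'city': 'Orlando'},
            {'name': 'Alex', 'age': 22, 'city': 'New York'},
            {'name': 'Celes', 'age': 24, 'city': 'Vector'})

name_set = set(map(itemgetter('name'), var_list))
assert name_set == {'John', 'Alex', 'Celes'}
assert var['name'] in name_set  # the ages differ, but only names are compared
```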
Upvotes: 2 <issue_comment>username_3: You can use builtin function [`any`](https://docs.python.org/3/library/functions.html#any) to check if there is a match. It will return boolean `True` or `False` based on whether the value is matched or not.
```
>>> from operator import itemgetter
>>> any(var['name'] == k for k in map(itemgetter('name'),var_list))
>>> True
```
However if you are interested in actual matched element you can do it with following generator expression:
```
>>> next((item for item in var_list if item["name"] == var["name"]),None)
>>> {'name': 'John', 'age': 36, 'city': 'Orlando'}
```
Upvotes: 1
|
2018/03/16
| 994 | 3,494 |
<issue_start>username_0: I am doing a research work on HTML5 Drag and Drop concept by referring this [link](https://www.html5rocks.com/en/tutorials/dnd/basics/). Currently, I am facing a problem on `dragenter` event which is firing for the second time (for child element) before `dragleave`. Therefore, the dashed border style which I have given when `dragenter` is not removing after `dragleave` in some cases. When I searched in Google, I referred some links and this [link](https://stackoverflow.com/a/10310815/196921) but still couldn't fix the issue. I added CSS property `pointer-events: none` but that is not compatible in IE9 and IE10.
Please note I need compatibility on IE9 and above, Mozilla too.
PFB the code snippet.
```js
var dragSrcEl = null;
var dragEnteredSrcEl = null;
var collection = $();
function handleDragStart(ev) {
dragSrcEl = ev.target;
ev.target.style.opacity = 0.4;
ev.target.classList.add('moving');
ev.dataTransfer.setData('text/html', ev.target.innerHTML);
}
function handleDragOver(ev) {
ev.preventDefault();
}
function handleDragEnter(ev, el) {
ev.stopPropagation();
console.log('drag enter!');
var dragEnteringElement = ev.target;
collection = collection.add(dragEnteringElement);
// dragEnteredSrcEl = dragEnteringElement;
el.classList.add('over');
}
function handleDrop(ev, el) {
// console.log(ev.target);
var droppingElement = el;
// Don't do anything if dropping the same column we're dragging.
if (dragSrcEl != ev.target) {
// Set the source column's HTML to the HTML of the column we dropped on.
dragSrcEl.style.opacity = 1;
dragSrcEl.innerHTML = droppingElement.innerHTML;
droppingElement.classList.remove('over');
droppingElement.innerHTML = ev.dataTransfer.getData('text/html');
}
}
function handleDragLeave(ev, el) {
// console.log(ev.target);
setTimeout(function() {
var dragLeavingElement = ev.target;
console.log(collection.length);
collection = collection.not(dragLeavingElement);
if (collection.length === 0) {
console.log('drag leave!');
el.classList.remove('over');
}
}, 1);
}
function handleDragEnd(ev) {
ev.target.style.opacity = 1;
ev.target.classList.remove('moving');
ev.target.classList.remove('over');
}
function handleContentClick(content) {
alert(content);
}
```
```css
.clearfix {
clear: both;
}
[draggable] {
cursor: move;
}
.col-md-4 {
width: 150px;
height: 150px;
border: 2px solid orange;
float: left;
margin-top: 5px;
margin-right: 5px;
text-align: center;
}
.col-md-4.over {
border-style: dashed;
}
/* .col-md-4 h2 {
pointer-events: none;
} */
```
```html
<!-- The draggable .col-md-4 container markup was lost in extraction; only the box headings survive. -->
<h2>Hello!</h2>
<h2>Welcome!</h2>
<h2>World!</h2>
<h2>Big Hello!</h2>
<h2>Big Welcome!</h2>
<h2>Big World!</h2>
```
Thanks in advance.<issue_comment>username_1: You should use `event.stopPropagation();`, which prevents the event from being triggered by the children elements.
There is the doc : <https://api.jquery.com/event.stoppropagation/>
Upvotes: 2 <issue_comment>username_2: It's a Firefox bug. In Firefox the "dragenter" event gets fired twice, no matter whether there is any child element or not.
You could try to work with event.target and event.currentTarget in combination with pointer-events: none.
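A common workaround for the double firing — sketched here under the assumption that a depth counter is acceptable — only removes the highlight when the counter drops back to zero; the stub element lets the logic run outside a browser:

```javascript
// Depth counter: dragenter/dragleave fire once per child crossed, but the
// count only returns to zero when the pointer truly leaves the container.
let dragDepth = 0;

function onDragEnter(el) {
  if (dragDepth === 0) el.classList.add('over');
  dragDepth++;
}

function onDragLeave(el) {
  dragDepth--;
  if (dragDepth === 0) el.classList.remove('over');
}

// Stub element so the logic can run outside a browser.
const classes = new Set();
const el = { classList: { add: c => classes.add(c), remove: c => classes.delete(c) } };

onDragEnter(el);                       // enter container -> highlighted
onDragEnter(el);                       // enter child (second dragenter)
onDragLeave(el);                       // leave child -> still highlighted
console.assert(classes.has('over'));
onDragLeave(el);                       // leave container -> highlight removed
console.assert(!classes.has('over'));
```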
Upvotes: 0
|
2018/03/16
| 187 | 733 |
<issue_start>username_0: Is there a way in Laravel to do some check on associating?
For example I've `Home` and `Owner`, I would like on `associate` check if `Home` as already an `Owner`, and in that case I must execute some code...
Some suggestions?
|
2018/03/16
| 832 | 3,019 |
<issue_start>username_0: I have a controller with:
```
if($_POST) {
$this->load->helper(array('form', 'url'));
$this->load->library('form_validation');
$val = $this->form_validation;
$val->set_rules('content[title]', 'Title', 'trim|required');
$val->set_rules('content[subtitle]', 'Subtitle', 'trim');
$val->set_rules('content[description]', 'description', 'trim');
if ($val->run() AND $this->db->insert('content', $content)) {
// query done
}
}
```
When I post form I am getting this error:
>
> Fatal error: Call to undefined method stdClass::load() in **...\libraries\Form\_validation.php** on line 450
>
>
>
Line 450 of `Form_validation.php`:
```
// Load the language file containing error messages
$this->CI->lang->load('form_validation');
```
Please help me fix this....<issue_comment>username_1: Replace your input names with `title`, `subtitle`, and `description`, and try this code:
```
$this->load->helper(array('form', 'url'));
$this->load->library('form_validation');
if ($this->input->post()) {
$this->form_validation->set_rules('title', 'Title', 'trim|required');
$this->form_validation->set_rules('subtitle', 'Subtitle', 'trim');
$this->form_validation->set_rules('description', 'Description', 'required');
$data['content'] = array(
'db_field_title' => $this->input->post('title'),
'db_field_subtitle' => $this->input->post('subtitle'),
'db_field_description' => $this->input->post('description'),
);
if ($this->form_validation->run()){
$this->db->insert('content', $data['content']);
}
}
```
Upvotes: 1 <issue_comment>username_2: If you are using form validation in multiple places, you can autoload the helper and library in `config/autoload.php`. Also refer to the input field name in the validation rule.
```
$this->load->helper(array('form', 'url'));
$this->load->library('form_validation');
if ($this->input->post()) {
$this->form_validation->set_rules('title', 'Title', 'trim|required');
$this->form_validation->set_rules('subtitle', 'Subtitle', 'trim');
$this->form_validation->set_rules('description', 'Description', 'required');
if ($this->form_validation->run() == TRUE) {
//Your DB operation
$this->data['formdata'] = array(
'db_field_title' => $this->input->post('title'),
'db_field_subtitle' => $this->input->post('subtitle'),
'db_field_description' => $this->input->post('description')
);
$this->db->insert('content', $this->data['formdata']);
} else {
//Show error message
echo validation_errors();
}
}
```
Upvotes: 0 <issue_comment>username_3: In your case, I think a language file is missing.
You have to check whether **form\_validation\_lang.php** is available in the **system/language/english** folder; if it is missing, you have to put that file there.
Upvotes: 0
|
2018/03/16
| 460 | 1,203 |
<issue_start>username_0: I want to call a REST api and get some json data in response in python.
```
curl https://analysis.lastline.com/analysis/get_completed -X POST -F "key=<KEY>" -F "api_token=<KEY>" -F "after=2016-03-11 20:00:00"
```
I know of python [request](http://docs.python-requests.org/en/latest/), but how can I pass `key`, `api_token` and `after`? What is `-F` flag and how to use it in python requests?<issue_comment>username_1: Just include the parameter `data` to the .post function.
```
requests.post('https://analysis.lastline.com/analysis/get_completed', data = {'key':'<KEY>', 'api_token':'<KEY>', 'after':'2016-03-11 20:00:00'})
```
Upvotes: 2 <issue_comment>username_2: -F means make a POST as form data.
So in requests it would be:
```
>>> r = requests.post('http://httpbin.org/post', data = {'key':'value'})
```
Upvotes: 0 <issue_comment>username_3: -F stands for form content (curl's `--form` option)
```
import requests
data = {
'key': '<KEY>',
'api_token': '<KEY>',
'after': '2016-03-11',
}
response = requests.post('https://analysis.lastline.com/analysis/get_completed', data=data)
```
Upvotes: 2
|
2018/03/16
| 753 | 1,931 |
<issue_start>username_0: I have code that works properly. It prints "ok".
```
data = "on482654225954"
if data:find("on.") then
start, stop = data:find("on.")
local a = 0
for i=stop,stop+11 do
if data:sub(stop+a,stop+a):match("[0-9]") then
t = { [a] = data:sub(stop+a,stop+a) }
a = a + 1
if t[0] == "4" then
print("ok")
end
end
end
end
```
The code below does not work properly. It does not print "ok".
```
data = "on482654225954"
if data:find("on.") then
start, stop = data:find("on.")
local a = 0
for i=stop,stop+11 do
if data:sub(stop+a,stop+a):match("[0-9]") then
t = { [a] = data:sub(stop+a,stop+a) }
a = a + 1
if t[0] == "4" and t[5] == "4" and t[11] == "4" then
print("ok")
end
end
end
end
```
How do you get the above code to work properly?
**EDIT:**
Program output.
```
Program 'lua.exe' started in 'C:\Users\pic.pic-Komputer\Downloads\ZeroBraneStudio\myprograms' (pid: 2628).
0 4
1 8
2 2
3 6
4 5
5 4
6 2
7 2
8 5
9 9
10 5
11 4
Program completed in 0.06 seconds (pid: 2628).
```
|
2018/03/16
| 412 | 1,146 |
<issue_start>username_0: I wish to execute a function every time that a div tag with specific class is added in html page.
For example :
I add dynamically :
```
```
and function go\_form() is executed.
I try a code :
```
$(document).on("load", ".myform", function(){
var div_id = $(this).attr("id")
go_form(div_id)
});
```
but it doesn't work.
Thanks for your help !!!
|
2018/03/16
| 1,468 | 4,927 |
<issue_start>username_0: I'm starting my journey with Vue.js and stumbled upon a problem.
I wanted to create a simple sidebar with dropdown menu, and already got this:
```js
new Vue({
el: '#app',
data() {
return {
openedItems: {},
selectedSub: '',
userMenu: false,
items: [{
icon: './src/assets/img/icons/dashboard.svg',
text: 'Element 1',
path: '#1'
},
{
icon: './src/assets/img/icons/orders.svg',
text: 'Element 2',
path: '#2'
},
{
icon: './src/assets/img/icons/products.svg',
text: 'NestedElement',
path: '',
children: [{
icon: 'now-ui-icons files_paper',
text: 'Nested 1 ',
path: '/products'
},
{
icon: 'now-ui-icons files_paper',
text: 'Nested 2',
path: '/categories'
},
{
icon: 'now-ui-icons location_bookmark',
text: 'Nested 3',
path: '/attribute-sets'
},
{
icon: 'now-ui-icons files_box',
text: 'Nested 4',
path: '/variant-groups'
},
{
icon: 'now-ui-icons shopping_box',
text: 'Nested 5',
path: '/vendors'
},
{
icon: 'now-ui-icons business_chart-bar-32',
text: 'Nested 6',
path: '/vat-rates'
},
],
},
{
icon: './src/assets/img/icons/clients.svg',
text: 'Element 4',
path: '#3'
},
{
icon: './src/assets/img/icons/marketing.svg',
text: 'Element 5',
path: '#4'
},
{
icon: './src/assets/img/icons/reports.svg',
text: 'Element 6',
path: '#5'
},
{
icon: './src/assets/img/icons/settings.svg',
text: 'Element 7',
path: '#6'
},
{
icon: './src/assets/img/icons/integrations.svg',
text: 'Element 8',
path: '#7'
},
],
}
},
methods: {
collapseItem(index, item) {
if (item.children != null) {
this.openedItems[index] = !this.openedItems[index]
this.$forceUpdate()
}
}
}
})
```
```css
body {
background-color: #F6F7FB;
font-family: "Helvetica Neue", Helvetica, Arial, sans-serif;
}
a {
color: white;
text-decoration: none;
}
ul li {
list-style-type: none;
}
#sidebar {
background: linear-gradient(#2595ec, #64DD17);
top: 0;
left: 0;
width: 275px;
height: 1000px;
position: absolute;
z-index: -1;
color: #FFFFFF;
font-size: 14px;
display: grid;
grid-template-columns: 45px 155px 30px 45px;
grid-template-areas: ". items . ";
font-weight: bold;
}
.sidebar-items {
padding-top: 100px;
grid-area: items;
margin-left: -40px;
display: flex;
flex-direction: column;
}
.item {
display: flex;
flex-direction: row;
flex-wrap: nowrap;
padding-bottom: 10px;
}
#item-icon {
width: 25px;
height: auto;
filter: invert(100%);
padding-top: 9px;
padding-right: 15px;
grid-area: item-icon;
float: left;
}
.item-name {
grid-area: item-name;
position: static;
float: left;
}
.child-items {
padding-top: 50px;
font-size: 12px;
}
.child-item {
padding-bottom: 15px;
white-space: nowrap
}
.slide-fade-enter-active {
transition: all .8s ease;
}
.slide-fade-leave-active {
transition: all .3s cubic-bezier(0.5, 1.0, 0.8, 1.0);
}
.slide-fade-enter,
.slide-fade-leave-to {
transform: translateY(-10px);
opacity: 0;
}
```
```html
* {{item.text}}
+ {{child.text}}
```
My problem here is that when I click on a nested element's child, the element closes even though it's supposed to stay open. Any ideas how I could achieve that? I'm pretty sure I would have to change my approach on this, but at this point nothing comes to my mind.
Here's jsfiddle also: <https://jsfiddle.net/myeh0mvo/14/><issue_comment>username_1: You need to add a `@click.stop` on your nested elements to stop the event propagation:
```
{{child.text}}
```
Here is the updated JSFiddle: <https://jsfiddle.net/myeh0mvo/23/>
You can learn more about event modifiers on this page of the documentation: <https://v2.vuejs.org/v2/guide/events.html#Event-Modifiers>
Upvotes: 4 [selected_answer]<issue_comment>username_2: I was looking for exactly this, so thank you.
However, I found that for me, I needed to add `@click.stop.prevent` to the nested elements.
Not sure why, but it was a random try and it keeps the menu open when the child element is clicked.
Upvotes: 0
|
2018/03/16
| 472 | 1,783 |
<issue_start>username_0: this is so annoying.
Yesterday, everything worked fine. I commited my work on my app and went to bed.
Today- nothing works.
I try to debug my program, the app installs on my phone and works but the debugging stops only with: Unable to start program "The system cannot find the file specified".
No matter how often I rebuild, set my project as the startup project, change the target API, or what not. I even updated everything on my computer but it is hopeless.
Please, help me :(<issue_comment>username_1: [This](https://forums.xamarin.com/discussion/124494/suddenly-cant-debug-and-up-against-a-deadline) looks like a promising solution for this issue but somehow it didnt work for me, so I tried to repair my Visual Studio 17 and it worked.
For repairing visual studio you have to download visual studio installer from [official microsoft website](https://www.visualstudio.com/downloads/)
If VS is already installed on your machine you'll get a screen with options: Update, Launch and more.
Select More and click Repair.
You have to wait until VS finishes repairing. Hope you'll get rid of this annoying issue.
Upvotes: 2 [selected_answer]<issue_comment>username_2: I had a similar issue, and solved it like this:
*As Luminous\_Dev figured out here:
forums.xamarin.com/discussion/123640/xamlctask-error-on-new-cross-plat-solution
make sure Mono Debugging for Visual Studio is enabled.*
Tools > Extensions and Updates > Mono Debugging for Visual Studio
Source: <https://forums.xamarin.com/discussion/123611/unable-to-start-debugging-the-system-cannot-find-the-file-specified>
Upvotes: 0 <issue_comment>username_3: make sure **Mono Debugging** for Visual Studio is enabled.
**Tools > Extensions and Updates > Mono Debugging for Visual Studio**
Upvotes: 0
|
2018/03/16
| 447 | 1,603 |
<issue_start>username_0: Hi friends, I need your help. I have a scenario of navigating from one HTML page to another.
Clear explanation
-----------------
Navigating from HTML page 1 to HTML page 2, which contains three buttons with different ids, I have to make the first button be selected on page load and get all its argument values as in the onclick event.
html page 1
-----------
```
[click here](htmlpage2.html)
```
html page 2
-----------
```
click
click
click
function save(name,shape,made){
alert(name);
alert(shape);
alert(made);
}
```
## scenario ##
On navigating to HTML page 2 I have to make the button with id="button1" be clicked by default and get the alerts as given in the save function. Hope I will get a better response. Thank you.<issue_comment>username_1: ```
function save(name,shape,made){
alert(name);
alert(shape);
alert(made);
}
$("#button1").trigger("click")
```
Upvotes: 1 <issue_comment>username_2: You can try this :
```
document.getElementById('button1').click();
```
Upvotes: 0 <issue_comment>username_3: You can use this code.
```
click
click
click
function save(name,shape,made){
alert(name);
alert(shape);
alert(made);
}
document.getElementById("button1").click();
```
Upvotes: 1 <issue_comment>username_4: ```
click
click
click
function save(name, shape, made) {
alert(name);
alert(shape);
alert(made);
}
document.getElementById("button1").click();
```
Upvotes: 0 <issue_comment>username_5: If not using jquery,
```
document.getElementById('button1').click();
```
Upvotes: 0
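Outside a browser, what the answers above do on page load boils down to invoking the first button's handler with its own arguments. A DOM-free sketch — the button list and argument values here are made-up placeholders, not the asker's real data:

```javascript
// Each "button" records the arguments its onclick would pass to save().
const buttons = [
  { id: 'button1', args: ['name1', 'round', 'steel'] },
  { id: 'button2', args: ['name2', 'square', 'wood'] },
  { id: 'button3', args: ['name3', 'oval', 'glass'] },
];

const alerts = [];
function save(name, shape, made) {
  alerts.push(name, shape, made); // stands in for the three alert() calls
}

// "Click" the first button by default, as the on-load handler would.
save(...buttons[0].args);
console.log(alerts); // → [ 'name1', 'round', 'steel' ]
```

In the real page, `document.getElementById("button1").click()` is what triggers this same call chain.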
|
2018/03/16
| 220 | 802 |
<issue_start>username_0: I am trying to migrate mongodb data to Cosmos db. The migration tool I am using is asking for the "AccountKey" required to connect to Cosmos db.
I am not able to get the account key in azure portal.
<https://portal.azure.com><issue_comment>username_1: You can find the **AccountKey** from **`Settings -> ConnectionString -> Primary Password`** when you click on your cosmosDB resource
[](https://i.stack.imgur.com/EIgHW.jpg)
Upvotes: 4 [selected_answer]<issue_comment>username_2: Navigate to `Settings -> Keys` to view Primary Key as well as Connection String for CosmosDB in Azure Portal
[](https://i.stack.imgur.com/gWbVc.png)
Upvotes: 2
|
2018/03/16
| 360 | 1,351 |
<issue_start>username_0: ```
public typeArray= [
{
id: 'MOTHER',
name: '{{ myApp.type.MOTHER | translate }}'
}];
```
How can we write to translate while defining an array in a TypeScript file?<issue_comment>username_1: You can use your pipes in your components or anywhere you want by importing them. Import your translate pipe to your component, then add it to constructor
```
constructor(private yourPipe: YourPipe) {}
```
or you can create a new instance from your pipe class:
```
public yourPipe: YourPipe = new YourPipe();
```
Then you can use it like this:
```
this.yourPipe.transform(value);
```
The transform function will return the value transformed by the pipe.
So in your case:
```
public typeArray = [
{
id: 'MOTHER',
name: this.yourPipe.transform(myApp.type.MOTHER)
}
];
```
Upvotes: 2 <issue_comment>username_2: You have to import `TranslateService` from `'@ngx-translate/core'` and use its `get` method.
```
import { TranslateService } from '@ngx-translate/core';
constructor(private translateService: TranslateService) {}
method() {
this.translateService.get(myApp.type.MOTHER).subscribe((mother) =>
{
let typeArray= [{
id: 'MOTHER',
name: mother
}];
...
});
}
```
Upvotes: 0
|
2018/03/16
| 343 | 1,206 |
<issue_start>username_0: While trying to run Metasploit on Arch Linux I'm getting
```
[root@archserver ~]# msfconsole
[-] Failed to connect to the database:
PG::InvalidParameterValue: ERROR: invalid value for parameter
"TimeZone": "UTC" : SET time zone 'UTC'
```
Postgres configuration is done and db is also created
My database.yml is
```
production:
adapter: postgresql
database: msf
username: root
password: <PASSWORD>
host: localhost
port: 5432
pool: 5
timeout: 5
```<issue_comment>username_1: Fixed it by editing SET time zone 'UTC' in `/opt/metasploit/vendor/bundle/ruby/2.4.0/gems/activerecord-4.2.10/lib/active_record/connection_adapters/postgresql_adapter.rb`
Upvotes: 1 [selected_answer]<issue_comment>username_2: This can also happen if you have multiple postgres instances running. This happened when I installed `postgres@9.4` via homebrew then later uninstalled and installed an updated version of postgres. What I failed to realize was, `postgres@9.4` was still running in the background
To verify, open up a terminal and type:
```
ps axw | grep postgres
```
If multiple instances of postgres are found, issue a `kill` command on the respective pid, e.g. `kill 234`.
Upvotes: 1
|
2018/03/16
| 971 | 3,090 |
<issue_start>username_0: I keep running into trouble when working with ObjectIds and lodash. Say I have two arrays of objects I want to use lodash `_.unionBy()` with:
```
var arr1 = [
{
_id: ObjectId('abc123'),
old: 'Some property from arr1',
},
{
_id: ObjectId('def456'),
old: 'Some property from arr1',
},
];
var arr2 = [
{
_id: ObjectId('abc123'),
new: 'I want to merge this with object in arr1',
},
{
_id: ObjectId('def456'),
new: 'I want to merge this with object in arr1',
},
];
var res = _.unionBy(arr1, arr2, '_id');
```
**Result**
```
console.log(res);
/*
[
{
_id: ObjectId('abc123'),
old: 'Some property from arr1',
},
{
_id: ObjectId('def456'),
old: 'Some property from arr1',
},
{
_id: ObjectId('abc123'),
new: 'I want to merge this with object in arr1',
},
{
_id: ObjectId('def456'),
new: 'I want to merge this with object in arr1',
},
]
*/
```
**Desired result**
```
[
{
_id: ObjectId('abc123'),
old: 'Some property from arr1',
new: 'I want to merge this with object in arr1',
},
{
_id: ObjectId('def456'),
old: 'Some property from arr1',
new: 'I want to merge this with object in arr1',
},
]
```
Since ObjectIds are objects and they are not pointing to the same reference in many cases (e.g. when fetching documents from MongoDB and comparing with local seed for testing), I cannot use '\_id' as iteratee.
How do I use lodash with ObjectIDs to achieve desired results?<issue_comment>username_1: Try this, I removed ObjectId because it does not work in javascript. You can use .toString for string conversion.
```js
var arr1 = [{
_id: 'abc123',
old: 'Some property from arr1',
},
{
_id: 'def456',
old: 'Some property from arr1',
},
];
var arr2 = [{
_id: 'abc123',
new: 'I want to merge this with object in arr1',
},
{
_id: 'def456',
new: 'I want to merge this with object in arr1',
},
];
const data = arr2.reduce((obj, ele) => {
if (!obj[ele._id]) obj[ele._id] = ele.new;
return obj;
}, {})
arr1 = arr1.map((d) => {
if (data[d._id]) {
d.new = data[d._id];
}
return d;
})
console.log(arr1);
```
Upvotes: 1 <issue_comment>username_2: You'll have to use [\_.unionWith](https://lodash.com/docs/4.17.5#unionWith) that allows you to use a custom comparator. Use the custom comparator to check equality between the two ObjectIds:
```
_.unionWith(arr1, arr2, (arrVal, othVal) => arrVal._id.equals(othVal._id));
```
Hope it helps.
Upvotes: 1 <issue_comment>username_3: This is what ended up solving my problem.
```
var res = arr1.map(a => {
return _.assign(a, _.find(arr2, { _id: a._id }));
});
```
Thanks to [Tushar's answer](https://stackoverflow.com/questions/35091975/how-to-use-lodash-to-merge-two-collections-based-on-a-key)
Upvotes: 1 [selected_answer]
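For comparison, the same merge can be written without lodash. In this sketch plain string ids stand in for ObjectIds; with real ObjectIds you could key the map on `String(o._id)` exactly as below, or compare via `a._id.equals(b._id)`:

```javascript
const arr1 = [
  { _id: 'abc123', old: 'Some property from arr1' },
  { _id: 'def456', old: 'Some property from arr1' },
];
const arr2 = [
  { _id: 'abc123', new: 'I want to merge this with object in arr1' },
  { _id: 'def456', new: 'I want to merge this with object in arr1' },
];

// Index arr2 by stringified id, then merge each arr1 object with its match.
const byId = new Map(arr2.map((o) => [String(o._id), o]));
const res = arr1.map((o) => ({ ...o, ...byId.get(String(o._id)) }));

console.log(res[0]); // res[0] now carries _id, old and new
```

Unlike `_.assign(a, ...)` in the accepted answer, the spread here builds fresh objects instead of mutating `arr1` in place.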
|
2018/03/16
| 427 | 1,503 |
<issue_start>username_0: What is the difference between **strictSSL=false** and **rejectUnauthorized=false** options in NodeJS?
The names are confusing and I did not find documentation explaining the difference.<issue_comment>username_1: I think these two flag options are used in different contexts and are not exactly comparable. On one hand, you can look at the **rejectUnauthorized=false** flag in a node **runtime context**, which does as quoted in [this](https://stackoverflow.com/a/31862256/5617140) answer:
>
> By setting rejectUnauthorized: false, you're saying "I don't care if I
> can't verify the server's identity." Obviously, this is not a good
> solution as it leaves you vulnerable to MITM attacks.
>
>
>
Whereas you can look at **strictSSL=false** as more **build and setup context** as this is the flag you pass to npm when installing dependencies from an HTTP source rather than https as mentioned in [this](https://stackoverflow.com/a/48065862/5617140) post.
HTH.
Upvotes: 3 [selected_answer]<issue_comment>username_2: The difference between the two is that **strictSSL** is part of the [request](https://www.npmjs.com/package/request) package and **rejectUnauthorized** is a native property of NodeJS. Both do the exact same thing in the end though. In the request package, rejectUnauthorized is set to false when strictSSL is set to false, which you can see [here](https://github.com/request/request/blob/3c0cddc7c8eb60b470e9519da85896ed7ee0081e/request.js#L254).
Upvotes: 1
|
2018/03/16
| 1,228 | 4,290 |
<issue_start>username_0: Basically I'm trying to add an object with my own functions inside the `Object.prototype`.
```
Object.prototype.personalTest = {
sum: function () {
return this.something + this.other;
}
};
var numbers = {
something: 1,
other: 2
};
console.log(numbers.personalTest.sum());
```
Problem is I can't get the value from the original object. The 'this' keyword uses my object as the 'this'.
How can I change the value of the 'this' or pass it to the object?
**Edit**
I did this and it kind of worked but not as I wanted
```
var personalAccess = function () {
var self = this;
this.PersonalTools = {
sum: function () {
return self.something + self.other;
},
minus: function () {
return self.something - self.other;
}
};
};
Object.prototype.personalTest = personalAccess;
var numbers = {
something: 1,
other: 2
};
console.log(numbers.personalTest());
```
The objects aren't part of the prototype anymore but that's not a problem.
The problem is that for each variable i have to build the objects using the function.
```
console.log(numbers.personalTest());
```
**..........Solution...........**
I ended up learning a bit more tricks on javascript and used **factory functions** to solve my issue.
```
(function () {
var myTools = function () {
var self = this;
var tools = {
sum: self.first + self.second
};
return tools;
};
Object.prototype.MyTools = myTools;
}());
```<issue_comment>username_1: The main problem is that you're assuming that the function `sum` declared within `personalTest` has access to attributes outside of it. Inside `sum`, `this` refers to the object `personalTest`.
An alternative is either binding the object `numbers` to the function `sum`, or executing the function with `call`, passing the object `numbers`.
```
numbers.personalTest.sum.bind(numbers)();
numbers.personalTest.sum.call(numbers);
```
```js
Object.prototype.personalTest = {
sum: function () {
return this.something + this.other;
}
};
var numbers = {
something: 1,
other: 2
};
console.log(numbers.personalTest.sum.bind(numbers)());
console.log(numbers.personalTest.sum.call(numbers));
```
Or, you can assign those values to `personalTest` to make them accessible from the function `sum`.
```js
Object.prototype.personalTest = {
sum: function () {
return this.something + this.other;
}
};
var numbers = {
something: 1,
other: 2
};
Object.assign(numbers.personalTest, numbers);
console.log(numbers.personalTest.sum());
```
Another alternative is creating setters and getters to automatically set the necessary attributes on `personalTest`:
```js
Object.prototype.personalTest = {
sum: function () {
return this.something + this.other;
}
};
var numbers = {
set something(s) {
this.thing = this.personalTest.something = s;
},
get something() {
return this.thing;
},
set other(s) {
this.o = this.personalTest.other = s;
},
get other() {
return this.o;
}
};
numbers.something = 1
numbers.other = 2
console.log(numbers.personalTest.sum());
```
```css
.as-console-wrapper { max-height: 100% !important; top: 0; }
```
Upvotes: 1 <issue_comment>username_2: ```
numbers.personalTest.sum.call(numbers)
```
Upvotes: 0 <issue_comment>username_3: The code:
```
Object.prototype.personalTest = {
sum: function () {
return this.something + this.other;
}
};
```
adds a new property named `personalTest` to each object you create.
This property is an object itself, having one property, `sum`, that is a function. Inside the `sum()` function, `this` refers to the object that `sum()` is a property of (i.e. the object you create and store in `Object.prototype.personalTest`).
You can let `sum` access the properties of `numbers` by invoking it this way:
```
var numbers = {
something: 1,
other: 2
};
console.log(numbers.personalTest.sum.call(numbers));
```
This way, the function `sum()` (accessible only through the property `personalTest.sum` of any object) is invoked using `numbers` as `this`.
Read more about [`Function.call()`](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Function/call).
Upvotes: 1 [selected_answer]
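The `this`-switching behavior of `call` is easy to verify in isolation (a standalone snippet, not the asker's exact code):

```javascript
const helper = {
  sum() {
    return this.something + this.other;
  },
};

const numbers = { something: 1, other: 2 };

// Called normally, `this` is `helper`, which lacks the properties:
console.log(helper.sum()); // → NaN (undefined + undefined)

// With call(), `this` becomes `numbers` for that one invocation:
console.log(helper.sum.call(numbers)); // → 3
```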
|
2018/03/16
| 391 | 1,348 |
<issue_start>username_0: Experienced node js devs often recommend to use [npm pump](https://www.npmjs.com/package/pump) module instead of node [Stream.pipe](https://nodejs.org/dist/latest-v9.x/docs/api/stream.html#stream_readable_pipe_destination_options) method.
Why would I use one instead of the other?
There is a [similar looking question on SO](https://stackoverflow.com/questions/9726507/what-is-the-difference-between-util-pumpstreama-streamb-and-streama-pipestre) but it's 6 years old. It's Node 9.8.0 already and I guess things have changed since that time.<issue_comment>username_1: From the `pump` README:
>
> When using standard `source.pipe(dest)` `source` will not be destroyed if `dest` emits close or an error. You are also not able to provide a callback to tell when then pipe has finished.
>
>
>
Upvotes: 0 <issue_comment>username_2: TL;DR: use [pipeline](https://nodejs.org/api/stream.html#stream_stream_pipeline_source_transforms_destination_callback)
As of Node.js 10.x, pipeline was introduced as a replacement for pump. It is a module method to pipe between streams, **forwarding errors and properly cleaning up**, and it provides a callback when the pipeline is complete.
But what's the difference between `pipe` and `pipeline`?
You can find my answer [here](https://stackoverflow.com/a/60459320/5732327)
Upvotes: 2
|
2018/03/16
| 487 | 1,500 |
<issue_start>username_0: I wrote a short Codepen where I tried to alter a temporary array while keeping the original one, but both of my arrays get altered.
Could someone explain to me what the problem is?
```
var x = ["x"];
abc(x);
function abc(x){
for(var i = 0; i < 3; i++) {
var y = x;
y.unshift("y");
console.log(y);
console.log(x);
}
}
```
Output:
```
["y", "x"]
["y", "x"]
["y", "y", "x"]
["y", "y", "x"]
["y", "y", "y", "x"]
["y", "y", "y", "x"]
```
Thanks in advance.<issue_comment>username_1: There is no separate temporary array. You have only one single array object in memory. `x` and `y` are just different variables which hold the same reference to that single array instance: when you assign the value of `x` to `y`, that value is a reference, and only the reference is copied.
If you want to work with a copy of the array you can use the `slice` function.
```js
var x = ["x"];
abc(x);
function abc(x) {
var y = x.slice();
for(var i = 0; i < 3; i++) {
y.unshift("y");
console.log('Array y: ' + y);
console.log('Array x: ' + x);
}
}
```
Upvotes: 3 [selected_answer]<issue_comment>username_2: Javascript handles objects by reference. So if you write `var y = x` it does not make a copy of your object but rather a copy of the reference. So when you update `y`, it updates `x` at the same time.
If you want to make a copy of your object, you can do the following:
```
var y = JSON.parse(JSON.stringify(x))
```
Upvotes: 0
|
2018/03/16
| 750 | 2,913 |
<issue_start>username_0: I'm using serverless to deploy an AWS CloudFormation template and some functions, here is a part of my serverless.yml file:
```
resources:
Resources:
MyUserPool: #An inline comment
Type: "AWS::Cognito::UserPool"
Properties:
UserPoolName: "MyUserPool"
Policies:
PasswordPolicy:
MinimumLength: 7
RequireLowercase: false
RequireNumbers: true
RequireSymbols: false
RequireUppercase: false
functions:
preSignUp:
handler: presignup.validate
events:
- cognitoUserPool:
pool: "MyUserPool"
trigger: PreSignUp
```
As you can see, both user pool names are the same, but when I run serverless deploy, 2 user pools with the same name are created.
[](https://i.stack.imgur.com/wRsHY.png)
Is this a bug or am I missing something?<issue_comment>username_1: I also found this to be counter-intuitive and confusing at first. However, this *is* actually the expected (and documented) behavior.
When you attach a Cognito event to a function as a trigger, *Serverless* will create a user pool for you, without even being asked. [Source:](https://serverless.com/framework/docs/providers/aws/events/cognito-user-pool#simple-event-definition)
>
> This will create a Cognito User Pool with the specified name.
>
>
>
So in your case, one user pool is being created by the `cognitoUserPool` event, and the other is being created by your `Resources` section. The one created by `Resources` is correct (has the custom password policy), and the one created by the lambda trigger has default configuration. The fix is described under the ["Overriding a generated User Pool"](https://serverless.com/framework/docs/providers/aws/events/cognito-user-pool#overriding-a-generated-user-pool) heading.
You prefix the User Pool key in the Resources section with `CognitoUserPool`, which will cause both your trigger and your resource to refer to the same User Pool in the generated CloudFormation template.
In your case, this means simply changing this:
```
resources:
Resources:
MyUserPool:
Type: "AWS::Cognito::UserPool"
```
to this:
```
resources:
Resources:
CognitoUserPoolMyUserPool:
Type: "AWS::Cognito::UserPool"
```
Tested with Serverless 1.26.0
Upvotes: 4 [selected_answer]<issue_comment>username_2: The correct way to do this is by setting `existing: true` on your functions cognitoUserPool properties like so...
```
createAuthChallenge:
handler: services/auth/createAuthChallenge.handler
events:
- cognitoUserPool:
pool: ${self:custom.stage}-user-pool
trigger: CreateAuthChallenge
existing: true
```
[Serverless added support](https://github.com/serverless/serverless/pull/6362) for this in July 2019.
Upvotes: 3
|
2018/03/16
| 876 | 2,876 |
<issue_start>username_0: I am using `react` **v16.3.0** and `flow-bin` **v0.69.0**
Using React [Fragments](https://reactjs.org/docs/fragments.html) with either `React.Fragment` or the shorthand `<>` syntax like so
```
import React from 'react'
const ComponentA = () => (
Component
A
)
const ComponentB = () => (
<>
Component
B
)
```
Flow complains with the following error (it's an identical error for both, just showing output for `ComponentA` here)
```
Cannot get React.Fragment because property Fragment is missing in object type [1].
24│ const ComponentA = () => (
25│
26│ Component
27│ A
28│
/private/tmp/flow/flowlib_2349df3a/react.js
251│ declare export default {|
252│ +DOM: typeof DOM,
253│ +PropTypes: typeof PropTypes,
254│ +version: typeof version,
255│ +initializeTouchEvents: typeof initializeTouchEvents,
256│ +checkPropTypes: typeof checkPropTypes,
257│ +createClass: typeof createClass,
258│ +createElement: typeof createElement,
259│ +cloneElement: typeof cloneElement,
260│ +createFactory: typeof createFactory,
261│ +isValidElement: typeof isValidElement,
262│ +Component: typeof Component,
263│ +PureComponent: typeof PureComponent,
264│ +Children: typeof Children,
265│ |};
```
With an explicit import of `Fragment`, however, flow does not complain.
```
import { Fragment, default as React } from 'react'
const ComponentC = () => (
Component
C
)
```
What is going on here? I would like to use the `<>` Fragment shorthand syntax, and this issue is stopping me from doing that for now.
When I dig into the `react.js` lib def referenced in the error it does appear that the error is factually correct - the export of `Fragment` *is* defined and Fragment *is not* defined as a property on the default export.
But the flow docs [state that flow has support for react Fragments](https://reactjs.org/blog/2017/11/28/react-v16.2.0-fragment-support.html#flow) from **v0.59** onwards.
So is this actually a gap in support that still exists? Or am I doing something wrong? Perhaps I somehow have an outdated lib def or have things configured wrong? I can't find anything googling for the error message, which leads me to suspect it's an issue with my setup. Also I can't quite believe that this wouldn't work out of the box.<issue_comment>username_1: You have to use `import * as React from 'react'` to fix this :)
Upvotes: 3 <issue_comment>username_2: The fix to include `React.Fragment` in the definition is included in this commit:
<https://github.com/facebook/flow/commit/b76ad725177284681d483247e89739c292ed982b>
It should be available in flow `0.71`
Upvotes: 4 [selected_answer]<issue_comment>username_3: As simple as:
```
import * as React from 'react'
const Component = (): React.Element<any> => (
  <>
    /
    \
  </>
)
export default Component
```
Upvotes: 2
|
2018/03/16
| 453 | 1,576 |
<issue_start>username_0: Hi, I am trying to develop a navigation bar using CSS.
I am displaying menus in the navigation bar, but these menus are not displaying as expected. I am trying to display them as below.
[](https://i.stack.imgur.com/ub5rJ.png)
```css
ul {
list-style-type: none;
margin: 0;
padding: 0;
overflow: hidden;
background-color: #333;
}
li {
float: left;
}
li a {
display: block;
color: white;
text-align: center;
padding: 14px 16px;
text-decoration: none;
}
li a:hover {
background-color: #111;
border-top: 4px solid #2e92fa;
}
```
```html
<ul>
  <li><a href="#">Product Name</a></li>
  <li><a href="#">Dashboard</a></li>
  <li><a href="#">Reports</a></li>
  <li><a href="#">Map</a></li>
</ul>
```
Can someone help me to change the css classes in order to look like the image above? Any help would be appreciated. Thank you
|
2018/03/16
| 2,856 | 10,930 |
<issue_start>username_0: From my understanding about git, every time I perform a `git checkout` one of two things happens:
1. The branch already exists locally and so the HEAD is simply positioned on the top of it.
2. The branch does not exist locally and so git "clones" it from the remote repository (let's just assume that git refs are updated)
However, several times I have performed a `git checkout` to a remote branch (one that never existed locally) and gotten outdated content. Then I perform a `git pull` and new commits are received.
Has anyone had this problem too? Do you know why this happens?<issue_comment>username_1: `git checkout branch` updates the files in the working directory to the version stored in that branch.
To pull the remote changes, you have to run `git pull origin <branch>`.
Upvotes: 1 <issue_comment>username_2: git checkout doesn't clone anything from the remote repository. At most it points the local branch to the head of the remote branch as of the last time you fetched it. If there's anything on top of this last fetched head, then you'll have to fetch/pull.
Upvotes: 1 <issue_comment>username_3: You can avoid using `git pull` (entirely, or just sometimes, this is up to you). You do need to run `git fetch` sometimes, and some other commands sometimes.
The way to keep this all straight in your head is itself a little complicated, but start with these:
* There are *two* repositories involved: yours, and the one at `origin`. (There can be even more than two, but start with two, it just gets hairier if you add more!)
* *Your* Git repository has what Git calls a *remote*, which is essentially just a name: `origin`. This name stores the URL for the other Git repository.
* Each repository is self-contained. Each repository has *its own* set of branches, tags, and so on.
* Any one Git repository can call up any other Git repository via some URL, using the Internet as a sort of telephone or messaging connection. Using a remote name, like `origin`, is almost always the way to go here. Among other things, it means you only have to type in a long URL once.
If you run `git config -l` (list all of your configuration) or `git config --get-regexp "remote\..*"` you should see at least two entries:
```
remote.origin.url
remote.origin.fetch +refs/heads/*:refs/remotes/origin/*
```
The first one is the saved URL. The second one is some directives for the `git fetch` command.
### Connecting two Git repositories
Since there are two Git repositories involved here, you have to connect them to each other now and then. There are two primary Git commands for doing this, `git fetch` and `git push`. Both direct *your* Git to call up the other Git, so the difference is the *direction of transfer:*
* `git fetch` has your Git call up their Git and *get* things;
* `git push` has your Git call up their Git and *give* things.
What you give or take here are *commits*. While commits *hold* files (by holding a complete snapshot), commits aren't files in and of themselves, so it's wrong to think of this as pushing or fetching *files*. It's always *whole commits*.
But there is a huge wrinkle: commits, in Git, have to be *findable*.
### Finding commits
Let's draw a tiny repository with just three commits in it. Commits have big ugly hash IDs, which appear random (though they're not); rather than inventing some, let's use single uppercase letters. This limits our pseudo-repository to just 26 ASCII commits (though maybe we could name a commit Ø in Norwegian, for instance, to gain a few more), but it's a lot more convenient.
A commit stores its *parent* commit's hash ID inside it, so that the commit *points back* to its parent:
```
A <-B <-C
```
`C` is our most recent commit, and it records the fact that `B` is its parent. `B` records that `A` is `B`'s parent. Since `A` was our first commit, it has no parent (it's a *root commit* in Git terms) and we just stop there. But how do we find `C`? The answer is that we use a *branch name* like `master`:
```
A <-B <-C <--master
```
To add a *new* commit to our repository, we compute its hash ID—in our simplified drawing, this is just `D`—and write it out, setting its parent to the *current* commit `C`. Then we *change* master so that it points to `D` instead of `C`:
```
A--B--C--D <-- master
```
We never change any *existing* commit, and we don't really need to record the direction of their arrows: they always point backwards. But we do change branch names, all the time, so we should write down their arrows, since they move.
Git therefore works *backwards*. It always has the information about the *newest* commits. It uses those to find older commits. Git attaches the name `HEAD` to one of the branches, so that it knows which branch you're on. When you run `git checkout`, one of the things it does is to attach `HEAD` to whichever branch you checked out. I'll start including that below.
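That backwards walk is easy to model outside Git. A toy sketch of the pointer structure described above (this is an illustration, not Git's actual storage format):

```js
// Toy model of the commit graph A--B--C--D: each commit stores its parent's id
var commits = {
  A: { parent: null }, // root commit: no parent
  B: { parent: 'A' },
  C: { parent: 'B' },
  D: { parent: 'C' }
};
var branches = { master: 'D' }; // a branch name just points at one commit

// Walking history means following parent pointers back from the branch tip
function history(branch) {
  var out = [];
  for (var id = branches[branch]; id !== null; id = commits[id].parent) {
    out.push(id);
  }
  return out;
}

console.log(history('master')); // ['D', 'C', 'B', 'A']
```

Adding a new commit is just inserting one more node whose parent is the current tip, then moving the branch pointer, exactly as in the diagrams.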
### Remote-tracking names
Let's go back to the fact that there are two Git repositories involved here. One of them is *yours*. You have your own branch names like `master` and `develop` and `feature/short` and `feature/tall` and so on. But there's another Git repository over at `origin`, and it has *its* branch names.
When your Git calls up their Git and obtains their commits, their Git has been finding their commits by *their* branch names. What if their `master` and your `master` don't agree about which commit they should point to? You've added `D` and they don't have `D` yet, so *their* `master` still points to `C`, for instance.
Your Git records *their* branch pointers by *renaming them*. Your `origin/master` remembers their `master`:
```
D <-- master (HEAD)
/
A--B--C <-- origin/master
```
If they've added a *new* commit to their `master` since you were last in sync, that commit has a different (and unique) hash ID. Let's call it `E`:
```
D <-- master (HEAD)
/
A--B--C
\
E <-- origin/master
```
### `git checkout` will *create* a branch if appropriate
Suppose you have, in your repository, some series of commits, plus some names:
```
D <-- master (HEAD)
/
A--B--C <-- origin/master, origin/dev
```
If you now say `git checkout dev`, well, you don't *have* a `dev`. But you do have `origin/dev`, pointing to commit `C`. Your Git notices this and automatically *creates* your `dev` now:
```
D <-- master
/
A--B--C <-- dev (HEAD), origin/master, origin/dev
```
Note how the branch *name* is new, even though the commit is not. The name `HEAD` is now attached to the new branch name.
If you `git checkout master` again, your `dev` continues to exist, pointing to `C`:
```
D <-- master (HEAD)
/
A--B--C <-- dev, origin/master, origin/dev
```
The only thing that happens is that your `HEAD` attaches to your existing `master` (and of course Git checks out commit `D` as well).
If you now `git fetch` from `origin` again, and they've added commit `E` to their `master` and `F` to their `dev`, with `E` pointing back to `C` and `F` pointing back to `E`, you get:
```
D <-- master (HEAD)
/
A--B--C <-- dev
\
E <-- origin/master
\
F <-- origin/dev
```
### Putting this together
When you run `git fetch`, you have your Git call up their Git, list all their branch names and their commit hashes, and then your Git obtains, from their Git, any commits *they* have that you don't. Your Git adds those to your repository and updates your *remote-tracking names*.
When you first `git clone` their repository, `git clone` makes a new, empty repository (like `git init`) with nothing at all, not even a `master` branch yet. Your `git clone` sets up the remote `origin` with the URL and default `fetch` line. Then your Git calls up their Git (`git fetch`), asks them for their branch names, asks them for the commits they have that you don't—which is every commit, of course—and puts all those commits into your empty repository, using only the remote-tracking names:
```
A--B--C <-- origin/master
```
As a last step, `git clone` in effect runs `git checkout master`. This *creates* your `master`, also pointing to commit `C`.
Each later `git fetch` updates all your remote-tracking names—your `origin/*` names—while obtaining the (shared) commits. Your remote-tracking names therefore remember their branch names, while your own existing branch names are left alone.
Thus, if you `git fetch` before running a `git checkout` that will create a *new* branch name, your *new* branch name will be created from the updated remote-tracking name. If you `git checkout` the name too early, you'll create it from the *old* value—the old hash ID of the commit you already have.
### Using `git pull`, or `git merge`, or `git rebase`
The `git pull` command just runs two commands for you:
* `git fetch`, which does all the above: it obtains any new commits and updates your remote-tracking names, but never affects *your* branches.
* A second Git command, so as to affect your *current* branch.
Usually you ran `git fetch` because you expected to get new stuff from the other Git repository. If you *did* get new stuff, you probably want to do something with it. That means doing something with *your* branch(es).
There are primarily two ways to incorporate any work you have done and committed, with work other people have done and committed. These are `git merge` and `git rebase`. So it's pretty typical, after `git fetch`, that you want to use one of these two commands.
Which one *should* you use? Well, that's a matter of opinion and there are different schools of thought about this. I like to choose which one to use based on how much work I did and how much work they did and how those bits of work relate. To do so, I have to *look at* the work they did.
Using `git pull`, you must decide in advance whether to merge or rebase, before you have a chance to look. So I *avoid* `git pull`. I run `git fetch`, then look, *then* decide what to do. You can't do this if you use `git pull`: you have to figure out which to do, merge or rebase, before you can see which one you want. Sometimes you might just know anyway, in which case, `git pull` is fine!
In any case, if you are using `git pull`, you tell Git which to do: merge (the default), or `--rebase` to rebase. It then runs `git fetch` for you, and runs the second command—`git merge` or `git rebase`—for you. And that's all it really does!1 It's a good idea to know how `git merge` and `git rebase` work, and I think you'll learn them much faster if you run them manually, instead of having `git pull` run them for you, but you now have all the pieces you need to make your own decisions here.
---
1Well, if there are *submodules*, you can have it recursively pull in submodules. But that's another can of worms entirely.
Upvotes: 5 [selected_answer]
|
2018/03/16
| 308 | 1,149 |
<issue_start>username_0: I created an .appinstaller file for managing installation via App Installer.
I followed these docs: <https://learn.microsoft.com/en-us/windows/uwp/packaging/install-related-set>
And I have 3 dependencies for it:
[](https://i.stack.imgur.com/1ribc.png)
Now I want to add metadata about it into `Dependencies` section like this:
```
```
The question is from where can I get metadata info about this packages? (Name, Version, Publisher)?<issue_comment>username_1: ".appx" files can be opened as archive files.
To get the dependencies metadata, open the ".appx" file in 7-zip.
Then open the "AppxManifest.xml" file at the root of the archive.
You will find the name, publisher, and version in the `<Identity>` tag.
Upvotes: 3 [selected_answer]<issue_comment>username_2: Actually, I tried to find the dependencies because of the error "Failed to Install a Dependency". As I understand it, you just need to create the `.appxbundle` in the `Release` configuration. That way, all required dependencies will be installed along with your package and you'll be rid of this error.
Upvotes: 0
|
2018/03/16
| 381 | 1,369 |
<issue_start>username_0: I have a map function that is iterating through an array and populating a table like this:
```
var programData = this.state.data.map(program => (
  <tr>
    <td>{program.scheduledStartDateTime}</td>
    <td>{program.searchableTitles[1].value.en}</td>
  </tr>
));
```
However, sometimes the second field: `program.searchableTitles[1].value.en` is empty thus resulting in the error: `TypeError: Cannot read property 'value' of undefined`
How can I handle this exception? I've tried adding Try and Catch but I can't seem to find a way to make it work within JSX code.
I don't mind what goes into the field where there is no data present, it can be left empty.<issue_comment>username_1: Try this way
```
var programData = this.state.data.map(program => (
  <tr>
    <td>{program.scheduledStartDateTime}</td>
    <td>{program.searchableTitles && program.searchableTitles[1] && program.searchableTitles[1].value.en}</td>
  </tr>
));
```
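If your toolchain supports optional chaining (ES2020), the same guard can be written more compactly. A runnable sketch using plain objects in place of the React state (the sample data below is invented for illustration):

```js
// Made-up sample data shaped like this.state.data in the question
var data = [
  { scheduledStartDateTime: '2018-03-16', searchableTitles: [null, { value: { en: 'Title' } }] },
  { scheduledStartDateTime: '2018-03-17' } // searchableTitles missing entirely
];

// ?. short-circuits to undefined instead of throwing TypeError,
// and ?? supplies a fallback (here: an empty string, as the question allows)
var titles = data.map(function (p) {
  return p.searchableTitles?.[1]?.value?.en ?? '';
});

console.log(titles); // ['Title', '']
```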
Upvotes: 3 [selected_answer]<issue_comment>username_2: If you are getting the data through an API call, then the data might not be immediately available, or the state might not be set immediately.
You can display a different view instead:
```
var programData = this.state.data.map(program => (
  <tr>
    <td>{program.scheduledStartDateTime}</td>
    <td>{program.searchableTitles[1] ?
      program.searchableTitles[1].value.en : 'Not available'}</td>
  </tr>
));
```
Upvotes: 0
|
2018/03/16
| 1,189 | 4,359 |
<issue_start>username_0: I am trying to add an array to an existing array. It's getting added, but an array inside an array is what comes out.
Current scenario
```
Array
(
[t373980] => stdClass Object
(
[tid] => 373980
[name] => Ability
[depth] => 0
[startMonday] => 0
[hidden_name] => selected_agency[373980]
[parent_tid] => 0
[full_label] => Ability
[full_tid] => 373980
[expanded] => 0
)
[t414605] => stdClass Object
(
[tid] => 414605
[name] => Ad Council
[depth] => 0
[startMonday] => 0
[hidden_name] => selected_agency[414605]
[parent_tid] => 0
[full_label] => Ad Council
[full_tid] => 414605
[expanded] => 0
)
[t0] => stdClass Object
(
[t] => Array
(
[tid] => 0
[name] => (Blank)
[depth] => 0
[startMonday] => 0
[hidden_name] => selected_agency[0]
[parent_tid] => 0
[full_label] => (Blank)
[full_tid] => 0
[expanded] => 0
)
)
)
```
**What I want is:**
```
Array
(
[t373980] => stdClass Object
(
[tid] => 373980
[name] => Ability
[depth] => 0
[startMonday] => 0
[hidden_name] => selected_agency[373980]
[parent_tid] => 0
[full_label] => Ability
[full_tid] => 373980
[expanded] => 0
)
[t414605] => stdClass Object
(
[tid] => 414605
[name] => <NAME>
[depth] => 0
[startMonday] => 0
[hidden_name] => selected_agency[414605]
[parent_tid] => 0
[full_label] => Ad Council
[full_tid] => 414605
[expanded] => 0
)
[t] => stdClass Object
(
[tid] => 0
[name] => (Blank)
[depth] => 0
[startMonday] => 0
[hidden_name] => selected_agency[0]
[parent_tid] => 0
[full_label] => (Blank)
[full_tid] => 0
[expanded] => 0
)
    )
)
```

My code:

```
$no_agency_arr =array("tid"=>"0", "name"=>"(Blank)", "depth"=>0, "startMonday"=>0, "hidden_name"=>"selected_agency[0]",
"parent_tid"=>"0", "full_label"=>"(Blank)", "full_tid"=>"0", "expanded"=>0);
$no_agency_obj = (object)$no_agency_arr;
$final_no_agency_arr["t"] = $no_agency_obj;
array_push($out,$final_no_agency_arr);
```<issue_comment>username_1: In your code replace this
```
$no_agency_obj = (object)$no_agency_arr;
$final_no_agency_arr["t"] = $no_agency_obj;
```
with this
```
$final_no_agency_arr["t"] = $no_agency_arr;
```
No need to create an object if you just need an Array!!
Upvotes: -1 <issue_comment>username_2: ```
$no_agency_arr =array("tid"=>"0", "name"=>"(Blank)", "depth"=>0, "startMonday"=>0, "hidden_name"=>"selected_agency[0]",
"parent_tid"=>"0", "full_label"=>"(Blank)", "full_tid"=>"0", "expanded"=>0);
//you need array so no need to convert it to object. so comment/remove it out.
//$no_agency_obj = (object)$no_agency_arr;
$final_no_agency_arr["t"] = $no_agency_arr;
//why are you using it when you already assigned value in that array. so comment it,
//if `$out` if the final array which you are printing, so you need to assign value in it instead of `$final_no_agency_arr` like:
$out["t"] = $no_agency_arr; //don't forget to comment out the above line
//array_push($out,$final_no_agency_arr);
```
[Check here for reference](https://3v4l.org/WigK6)
Upvotes: 2 [selected_answer]
|
2018/03/16
| 1,039 | 3,721 |
<issue_start>username_0: Size of data to get: 20,000 approx
Issue: searching Elasticsearch-indexed data using the command below in Python,
but not getting any results back.
```
from pyelasticsearch import ElasticSearch
es_repo = ElasticSearch(settings.ES_INDEX_URL)
search_results = es_repo.search(
query, index=advertiser_name, es_from=_from, size=_size)
```
**If I give size less than or equal to 10,000 it works fine but not with 20,000**
Please help me find an optimal solution to this.
PS: On digging deeper into ES, I found this error message:
Result window is too large, from + size must be less than or equal to: [10000] but was [19999]. See the scrolling API for a more efficient way to request large data sets.<issue_comment>username_1: Probably it's an [Elasticsearch](https://www.elastic.co/guide/en/elasticsearch/reference/current/search-request-from-size.html) constraint:
```
index.max_result_window index setting which defaults to 10,000
```
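If you genuinely need deep `from + size` paging on this one index, the window can also be widened per index through the settings API, at a memory/CPU cost for every deep page. A sketch of the request (the index name and value below are assumptions taken from the question):

```
PUT /advertiser_name/_settings
{
  "index": {
    "max_result_window": 20000
  }
}
```

For large exports, the scroll or search_after approaches shown in the other answer scale better than raising the limit.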
Upvotes: 0 <issue_comment>username_2: For real-time use, the best solution is to use the [search after query](https://www.elastic.co/guide/en/elasticsearch/reference/master/search-request-search-after.html). You only need a date field and another field that uniquely identifies a doc; an `_id` field or an `_uid` field is enough.
Try something like this. In my example I extract all the documents that belong to a single user; the user field has a `keyword` datatype:
```
from elasticsearch import Elasticsearch
es = Elasticsearch()
es_index = "your_index_name"
documento = "your_doc_type"
user = "<NAME>"
body2 = {
"query": {
"term" : { "user" : user }
}
}
res = es.count(index=es_index, doc_type=documento, body= body2)
size = res['count']
body = { "size": 10,
"query": {
"term" : {
"user" : user
}
},
"sort": [
{"date": "asc"},
{"_uid": "desc"}
]
}
result = es.search(index=es_index, doc_type=documento, body= body)
bookmark = [result['hits']['hits'][-1]['sort'][0], str(result['hits']['hits'][-1]['sort'][1]) ]
body1 = {"size": 10,
"query": {
"term" : {
"user" : user
}
},
"search_after": bookmark,
"sort": [
{"date": "asc"},
{"_uid": "desc"}
]
}
while len(result['hits']['hits']) < size:
res =es.search(index=es_index, doc_type=documento, body= body1)
for el in res['hits']['hits']:
result['hits']['hits'].append( el )
bookmark = [res['hits']['hits'][-1]['sort'][0], str(result['hits']['hits'][-1]['sort'][1]) ]
body1 = {"size": 10,
"query": {
"term" : {
"user" : user
}
},
"search_after": bookmark,
"sort": [
{"date": "asc"},
{"_uid": "desc"}
]
}
```
Then you will find all the docs appended to the `result` var.
If you would like to use a `scroll` query (doc [here](http://elasticsearch-py.readthedocs.io/en/master/helpers.html#scan)):
```
from elasticsearch import Elasticsearch, helpers
es = Elasticsearch()
es_index = "your_index_name"
documento = "your_doc_type"
user = "<NAME>"
body = {
"query": {
"term" : { "user" : user }
}
}
res = helpers.scan(
client = es,
scroll = '2m',
query = body,
index = es_index)
for i in res:
print(i)
```
Upvotes: 5 [selected_answer]
|
2018/03/16
| 810 | 3,569 |
<issue_start>username_0: Before you assume, I did read ALL other posts on this problem and I was unable to find a solution to my problem.
So the thing is, however and wherever i upload my files and folders on my web host i get the same result giving me the "currently unable to handle this request. HTTP ERROR 500". I'm using the 000webhostapp.
So I uploaded the project in my root directory "/", content of the public to the public\_html project, and it gave me the text above. Then I tried moving my whole project into the public\_html(public was its own directory inside public\_html) and it gave me the same result. I've tried some solutions with .htaccess file but whatever I tried won't make it work. In my localhost project is installed somewhat like this "htdocs/kola/..", but on the web hosting it is just in the root, no other dir(that's something I think might help but I'm unable to use). So after 30 hours of trying and reuploading the project 5 times, still can't make it work and I'd be rather grateful if someone could even try to help me with this.
Thanks in advance<issue_comment>username_1: The right way is to get to the root of your folder ... ie `/home/` and create a folder for your project. Then move the all the contents of your project into this folder except the public folder. Now go to the `public_html` folder and add all the contents of the public folder there.
Update your `index.php` as below:
```
<?php
/**
* Laravel - A PHP Framework For Web Artisans
*
* @package Laravel
* @author <NAME> <<EMAIL>
*/
/*
|--------------------------------------------------------------------------
| Register The Auto Loader
|--------------------------------------------------------------------------
|
| Composer provides a convenient, automatically generated class loader for
| our application. We just need to utilize it! We'll simply require it
| into the script here so that we don't have to worry about manual
| loading any of our classes later on. It feels great to relax.
|
*/
require __DIR__.'/../(name of your root folder)/bootstrap/autoload.php';
/*
|--------------------------------------------------------------------------
| Turn On The Lights
|--------------------------------------------------------------------------
|
| We need to illuminate PHP development, so let us turn on the lights.
| This bootstraps the framework and gets it ready for use, then it
| will load up this application so that we can run it and send
| the responses back to the browser and delight our users.
|
*/
$app = require_once __DIR__.'/../(name of your root folder)/bootstrap/app.php';
/*
|--------------------------------------------------------------------------
| Run The Application
|--------------------------------------------------------------------------
|
| Once we have the application, we can handle the incoming request
| through the kernel, and send the associated response back to
| the client's browser allowing them to enjoy the creative
| and wonderful application we have prepared for them.
|
*/
$kernel = $app->make(Illuminate\Contracts\Http\Kernel::class);
$response = $kernel->handle(
$request = Illuminate\Http\Request::capture()
);
$response->send();
$kernel->terminate($request, $response);
```
Configure the `.env` file and have the right database configuration.
Upvotes: 2 [selected_answer]<issue_comment>username_2: There's no need to create a new dir. Just delete the content of the index.php file, paste the above code, and replace (name of your root folder) with public\_html.
Upvotes: 0
|
2018/03/16
| 555 | 1,922 |
<issue_start>username_0: I can't figure out what's the proper way of importing a Typescript npm module.
Here's how I'm trying to do it:
**module package.json**
```
{
"name": "my-module",
"main": "src/myModule.ts"
}
```
**module src/myModule.ts**
```
export module MyModule {
// Code inside
}
```
**code using the npm module**
```
import { MyModule } from 'my-module'; // Doesn't work
import { MyModule } = require('my-module'); // Doesn't work.
```
The module is installed as a dependency in the package.json, and for example I can do
```
import { MyModule } from '../node_modules/my-module/src/myModule.ts';
```
But obviously this isn't great. What I want is a way to just import any exports that are in the main module file, but it doesn't seem possible.<issue_comment>username_1: The 'main' in package.json is useful only to packaging tools like webpack or the build tool of angular-cli. It is used to select different bundles according to the user's needs: ES6, ES5, UMD...
TypeScript ignores that. You need to specify the file you want, exactly as if you were referring to your own project:
```
import { MyModule } from 'my-module/src/myModule';
```
What other libraries like Angular do is to create a barrel, a file usually called 'index.ts' or 'index.d.ts', that imports and exports all types in the library.
The advantage of this is that, if you create an index.d.ts file in the root of my-module:
```
export { MyModule } from './src/myModule';
// other exports
```
You can simply do this:
```
import {MyModule} from 'my-module'
```
As TypeScript, when importing from a folder, automatically uses an `index.ts` or `index.d.ts` file.
Upvotes: 4 [selected_answer]<issue_comment>username_2: You should use the "types" property instead of the "main" property with TypeScript modules. [How TypeScript resolves modules](https://www.typescriptlang.org/docs/handbook/module-resolution.html)
Upvotes: 2
|
2018/03/16
| 581 | 1,753 |
<issue_start>username_0: I want a column vector as my output, but instead I am getting a single-dimension array. Please check the code in the fiddle. What have I done wrong?
My Current and Expected output are:
**Current Output:** Array `[ 0, 1 ]` Array `[ 5, 10 ]`
**Expected Output:** Array `[ 0 ]` Array `[ 1 ]` Array `[ 5 ]` Array `[ 10 ]`
Please check **Browser Console.**
How can I get my Expected Output?
Js Fiddle: <https://jsfiddle.net/4o1dj1uw/2/>
Code:
```
this.b = [[[0,1]],[[5,10]]];
function convertarray(arrToConvert) {
//it will convert a matrix in array
var newArr = [];
for (var i = 0; i < arrToConvert.length; i++) {
newArr = newArr.concat(arrToConvert[i]);
}
return newArr;
}
console.log(...convertarray(this.b))
```<issue_comment>username_1: Almost there, you need to enter to the deepest array.
*An alternative is using recursion.*
```js
this.b = [[[0,1]],[[5,10]]];
function convertarray(arrToConvert) {
//it will convert a matrix in array
var newArr = [];
for (var i = 0; i < arrToConvert.length; i++) {
if (Array.isArray(arrToConvert[i])) {
newArr = newArr.concat(convertarray(arrToConvert[i]));
} else {
newArr.push([arrToConvert[i]]);
}
}
return newArr;
}
console.log(JSON.stringify(convertarray(this.b)))
```
```css
.as-console-wrapper { max-height: 100% !important; top: 0; }
```
Upvotes: 2 <issue_comment>username_2: Push it twice through `convertarray()`:
```
console.log(convertarray(convertarray(this.b)))
```
If you want to get an array of each item of these you can do:
```
console.log(...convertarray(convertarray(this.b)).map(x => [x]));
```
Result
```
[0] [1] [5] [10]
```
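On ES2019+ runtimes, `Array.prototype.flat` does the same double flattening in one call. A sketch using the question's input shape:

```js
var b = [[[0, 1]], [[5, 10]]];

// flat(2) removes two levels of nesting; map re-wraps each value as a column
var columns = b.flat(2).map(function (x) { return [x]; });

console.log(JSON.stringify(columns)); // [[0],[1],[5],[10]]
```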
Upvotes: 2 [selected_answer]
|
2018/03/16
| 1,218 | 3,192 |
<issue_start>username_0: I have two tables, Claim and Resubmission. The **Claim** table has one **Resubmission** (i.e., the **Resubmission** table belongs to the **Claim** table).
Below is my table structure:
```
Claim:
ClaimPKID | Net | Gross | Date
1 | 2000 | 6000 | 2018-01-02
2 | 1000 | 1500 | 2018-02-13
3 | 1500 | 2100 | 2018-02-25
4 | 5000 | 6700 | 2018-02-22
-----------------------------
Resubmission:
ResubmissionPKID | ClaimID | Comment
1 | 2 | abc
2 | 3 | abc
3 | 2 | abc
4 | 3 | abc
```
What i want is I want to display Total of the **Gross** and **Net** amount
with the detail of **First submission** or **Resubmission**.
If Claim Table ClaimPKID is stored in Resubmission (Which means if the claim has resubmisson) then i want to group those values seperately.
For example
Result:
```
Net | Gross | Claim Type
-----------------------------------------------------------------
7000 (2000+5000) | 12700 (6000+6700) | First Submission
2500 (1000+1500) | 3600 (1500+2100) | Resubmission
```
So i want to group values based on **ClaimPKID** from **Claim** Table exist in **Resubmission** Table.
I have tried below code but its not working its showing all the columns total value in a single row:
```
SELECT ROUND(coalesce(SUM(c.Gross), 0), 2) as gross,
ROUND(coalesce(SUM(c.Net), 0), 2) as net,
MAX(c.ClaimPKID)
FROM `Claim` as c
LEFT JOIN Resubmission r on r.ClaimID = c.ClaimPKID
WHERE c.Date BETWEEN '2018-01-01' AND '2018-02-28'
group by r.ClaimID
```
Kindly help me..<issue_comment>username_1: You can try this
```
DECLARE @t TABLE(ClaimPKID int, Net int, Gross int, [Date] datetime)
INSERT INTO @t VALUES(1 ,2000 ,6000, '2018-01-02')
INSERT INTO @t VALUES(2 ,1000 ,1500, '2018-02-13')
INSERT INTO @t VALUES(3 ,1500 ,2100, '2018-02-25')
INSERT INTO @t VALUES(4 ,5000 ,6700, '2018-02-22')
DECLARE @t2 TABLE(ResubmissionPKID int, ClaimID int, Comment varchar(50))
INSERT INTO @t2 VALUES(1 ,2 ,'abc')
INSERT INTO @t2 VALUES(2 ,3 ,'abc')
INSERT INTO @t2 VALUES(3 ,2 ,'abc')
INSERT INTO @t2 VALUES(4 ,3 ,'abc')
select SUM(NET), SUM(GROSS),(CASE WHEN (ClaimID IS NULL )THEN 'A' ELSE 'B' END) from @t A
left join @t2 B ON A.ClaimPKID = B.ClaimID
GROUP BY (CASE WHEN (ClaimID IS NULL )THEN 'A' ELSE 'B' END)
```
Upvotes: 0 <issue_comment>username_2: You can try using `UNION ALL` to combine the two groups:
```
SELECT SUM(Net) AS Net,SUM(Gross) AS Gross,'Resubmission' AS 'Claim Type'
FROM Claim T
INNER JOIN
(
SELECT ClaimID
FROM Resubmission
GROUP BY ClaimID
) T2 ON T.ClaimPKID = T2.ClaimID
WHERE T.Date BETWEEN '2018-01-01' AND '2018-02-28'
UNION ALL
SELECT SUM(Net) AS Net,SUM(Gross) AS Gross,'First Submission' AS 'Claim Type'
FROM Claim T
WHERE ClaimPKID NOT IN
(
SELECT ClaimID
FROM Resubmission
GROUP BY ClaimID
) AND T.Date BETWEEN '2018-01-01' AND '2018-02-28'
```
[SQLFiddle](http://sqlfiddle.com/#!9/fa861b/3)
Upvotes: 2 [selected_answer]
|
2018/03/16
| 514 | 1,649 |
<issue_start>username_0: I have the following JavaScript variable I am using in conjunction with Google Maps API.
For some reason I am getting the error
>
> "Uncaught SyntaxError: Unexpected identifier"
>
>
>
I believe it has to do with the 4th line in the following code snippet. I did not expect this, but it seems like there is a common syntax issue with the code below?
```
var EditForm = ''+
''+
'Product:'+''+' '+
''+'✖'+
''+'* Adele
'+'* Agnes
'+'* Billy
'+
'* Bob
'+'* Calvin
'+'* Christina
'+'* Cindy
'+'* Doug
'+
'RastaurantBar'+
'House'+
'Place Name :'+
'Description :'+
''+
'
Add Product';
```<issue_comment>username_1: Your quotes `'` in `document.getElementById('userInput').value = ''` aren't escaped.
Change it to `document.getElementById(\'userInput\').value = \'\'` and it'll work
Upvotes: 3 [selected_answer]<issue_comment>username_2: Your string is written with single quotes. So if you want to use single quotes inside the string, you need to escape them using `\`. You can also delimit the string with double quotes `""` instead.
As for the 4th line, you must not just put the variable there; you need to concatenate it like the rest of your string lines.
```
'
```
If *userInput* is just the `id` of the element rather than a variable, you need to escape the quotes:
```
'<a onclick="document.getElementById(\'userInput\').value = \'\''
</code>
```
Upvotes: 1 <issue_comment>username_3: To avoid any issue, you could use template litterals :
```
var EditForm = `
Product:
✖
* Adele
* Agnes
* Billy
* Bob
* Calvin
* Christina
* Cindy
* Doug
RastaurantBar
House
Place Name :
Description :
Add Product`;
```
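Putting the two approaches together on one trimmed-down line: a small self-contained sketch (the `elementId` variable is made up for illustration) showing that the escaped-quote version and the template-literal version build the same string:

```javascript
// Both fixes at once: escape the inner single quotes and concatenate the
// variable (here a made-up `elementId`) instead of pasting its name bare
// into the string literal.
var elementId = 'userInput';
var escaped = '<a onclick="document.getElementById(\'' + elementId +
              '\').value = \'\'">clear</a>';

// The template-literal version needs no escaping at all:
var templated = `<a onclick="document.getElementById('${elementId}').value = ''">clear</a>`;

console.log(escaped === templated); // true
```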
Upvotes: 2
|
2018/03/16
| 1,064 | 4,627 |
<issue_start>username_0: I am new to Azure ML and have some doubts. Could anyone please clarify the questions listed below?
1. What is the difference between Azure ML service Azure ML experimentation service.
2. What is the difference between Azure ML workbench and Azure ML Studio.
3. I want to use azure ML Experimentation service for building few models and creating web API's. Is it possible to do the same with ML studio.
4. And also ML Experimentation service requires me to have a docker for windows installed for creating web services.
Can I create web services without using Docker?<issue_comment>username_1: I'll do my best to answer these questions and feel free to ask more questions. :)
>
> What is the difference between Azure ML service Azure ML experimentation service?
>
>
>
Essentially, Azure ML Service (I may reference this as Azure ML Studio) uses a drag and drop interface to build out your workflow and test models. Azure ML experimentation is a new offering from the Azure Portal to host them directly in Azure and offer a better way to manage your models. Experimentation will use Azure ML Workbench to build out your models.
>
> What is the difference between Azure ML workbench and Azure ML Studio?
>
>
>
The biggest difference is ML Studio has the drag and drop interface to build the workflow and models, whereas Workbench lets you use Python to programmatically build out your models. Workbench also includes a really nice and powerful way to clean your data from the app. In Studio you have some good modules to clean data, but I don't think it's as powerful as what you can do in Workbench.
EDIT: The Workbench application [is deprecated](https://learn.microsoft.com/en-us/azure/machine-learning/service/overview-what-happened-to-workbench) and has been replaced by/upgraded to [ML Services](https://learn.microsoft.com/en-us/azure/machine-learning/service/). The core functionality is unchanged, though.
>
> I want to use azure ML Experimentation service for building few models and creating web API's. Is it possible to do the same with ML studio?
>
>
>
I would actually say it's much easier to do this in ML Studio. The drag and drop interface is very intuitive and it is only a couple of clicks to create a web API to call your model. I feel, as it is currently at the time of this writing, is more complex to deploy your model and it involves using the Azure CLI.
>
> And also ML Experimentation service requires me to have a docker for windows installed for creating web services. Can I create web services without using docker?
>
>
>
Here I'm not too familiar with the Docker parts of Workbench, but I believe you can create and deploy without using Docker. It will require an Azure Model Management resource, though, I believe.
I hope this helps and, again, feel free to ask more questions.
Upvotes: 3 <issue_comment>username_2: 1. The AML Experimentation is one of our many new ML offerings, including data preparation, experimentation, model management, and operationalization. Workbench is a PREVIEW product that provides a GUI for some of these services. But it is just a installer/wrapper for the CLI that is needed to run. The services are Spark and Python based. Other Python frameworks will work, and you can get a little hacky to call Java/Scala from Python. Not really sure what you mean by an "Azure ML Service", perhaps you are referring to the operationalization service I mentioned above. This will quickly let you create new Python based APIs using Docker containers, and will connect with the model management account to keep track of the linage between your models and your services. All services here are still in preview and may breaking change before GA release.
2. Azure ML Studio is an older product that is perhaps simpler for some (myself, an engineer, not a data scientist). It offers a drag and drop experience, but is limited in its data size to about 10G. This product is GA.
3. It is, but you need smaller data sizes, and the job flow is not Spark based. I use this to do rapid PoC's. Also, you will have less control over the scalability of your scoring (batch or real time), because it is PaaS, compared to the newer service which is more IaaS. I would recommend looking at the new service instead of Studio for most use cases.
4. The web services are completely based on Docker. Needing docker for experimentation is more about running things locally, which I myself rarely do. But, for the real time service, everything you package is placed into a docker container so it can be deployed to an ACS cluster.
Upvotes: 3 [selected_answer]
|
2018/03/16
| 693 | 2,195 |
<issue_start>username_0: I have two mysql tables. One with article numbers and one with variants numbers.
No I want to join the tables so have a result table with every possible article/variant combination.
For example:
Article numbers table:
```
+-----------+-------------+
| ArticleNo | ArticleName |
+-----------+-------------+
| 0001 | Product 1 |
| 0002 | Product 2 |
| 0003 | Product 3 |
+-----------+-------------+
```
Variants numbers table:
```
+-----------+-------------+
| VariantNo | VariantName |
+-----------+-------------+
| 1001 | Variant 1 |
| 1002 | Variant 2 |
| 1003 | Variant 3 |
+-----------+-------------+
```
Result table:
```
+----------+---------------------+
| ResultNo | ResultName |
+----------+---------------------+
| 00011001 | Product 1 Variant 1 |
| 00011002 | Product 1 Variant 2 |
| 00011003 | Product 1 Variant 3 |
| 00021001 | Product 2 Variant 1 |
| 00021002 | Product 2 Variant 2 |
| 00021003 | Product 2 Variant 3 |
| 00031001 | Product 3 Variant 1 |
| 00031002 | Product 3 Variant 2 |
| 00031003 | Product 3 Variant 3 |
+----------+---------------------+
```<issue_comment>username_1: I think [this](https://www.w3resource.com/mysql/advance-query-in-mysql/mysql-cross-join.php) is what you are looking for. This kind of operation generates Cartesian Product. To solve your problem you should apply this query from link above and it should look like this:
```
select concat(ArticleNo,VariantNo ) as ResultNo ,
concat(ArticleName,VariantName) as ResultName
from Article
cross join Variants
```
Upvotes: -1 <issue_comment>username_2: You are looking for a Cartesian product. In SQL this is provided through `CROSS JOIN`:
```
select concat(a.ArticleNo, v.VariantNo) as ResultNo,
concat_ws(' ', a.ArticleName, v.VariantName) as ResultName
from articles a cross join
variants v
order by a.ArticleName, v.VariantName;
```
Upvotes: 2 <issue_comment>username_3: You can write it directly:
```
SELECT CONCAT(a.ArticleNo,v.VariantNo) as ResultNo,
CONCAT(a.ArticleName,v.VariantName) as ResultName
FROM Article a,Variants v
```
It will produce a cross join.
Upvotes: 3 [selected_answer]
|
2018/03/16
| 740 | 2,692 |
<issue_start>username_0: I am writing a bash script and want it to tell me if the names of the files in a directory appear in a text file and if not, remove them.
Something like this:
```
counter = 1
numFiles = ls -1 TestDir/ | wc -l
while [$counter -lt $numFiles]
do
if [file in TestDir/ not in fileNames.txt]
then
rm file
fi
((counter++))
done
```
So what I need help with is the `if` statement, which is still pseudo-code.<issue_comment>username_1: You can simplify your script logic a lot :
```
#!/bin/bash
# iterate over all files in the test dir
for file in TestDir/*
do
 # grep -q reports the match through its exit code; the original
 # [[ ! $(grep ... &> /dev/null) ]] captured an empty string, so the
 # test was always true and every file would have been deleted
 grep -qxF "$(basename "$file")" fileNames.txt || rm "$file"
done
```
Upvotes: 2 [selected_answer]<issue_comment>username_2: It looks like you've got a solution that works, but I thought I'd offer this one as well, as it might still be of help to you or someone else.
```
find /Path/To/TestDir -type f ! -name '.*' -exec basename {} + | grep -xvF -f /Path/To/filenames.txt
```
### Breakdown
**`find`:** This gets file paths in the specified directory (which would be `TestDir`) that match the given criteria. In this case, I've specified it return only regular files (**`-type f`**) whose names don't start with a period (**`-name '.*'`**). It then uses its own builtin utility to execute the next command:
**`basename`:** Given a file path (which is what `find` spits out), it will return the base filename only, or, more specifically, everything after the last `/`.
**`|`:** This is a command pipe, that takes the output of the previous command to use as input in the next command.
**`grep`:** This is a regular-expression matching utility that, in this case, is given two lists of files: one fed in through the pipe from `find`—the files of your `TestDir` directory; and the files listed in `filenames.txt`. Ordinarily, the filenames in the text file would be used to match against filenames returned by `find`, and those that match would be given as the output. However, the **`-v`** flag inverts the matching process, so that `grep` returns those filenames that **do not** match.
What results is a list of files that exist in the directory `TestDir`, but **do not** appear in the `filenames.txt` file. These are the files you wish to delete, so you can simply use this line of code inside a parameter expansion **`$(...)`** to supply **`rm`** with the files it's able to delete.
The full command chain—after you `cd` into `TestDir`—looks like this:
```
rm $(find . -type f ! -name '.*' -exec basename {} + | grep -xvF -f filenames.txt)
```
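Either approach can be sanity-checked in a throwaway directory before pointing it at real data. The sketch below (which only touches a temp dir, never `TestDir`) uses the exit-code form of `grep` (`-q`) rather than its captured output:

```shell
# Throwaway-directory demo of the keep-list idea: any file whose name is
# not listed in fileNames.txt gets deleted.
demo=$(mktemp -d)
touch "$demo/keep1.txt" "$demo/keep2.txt" "$demo/junk.txt"
printf 'keep1.txt\nkeep2.txt\n' > "$demo/fileNames.txt"

for f in "$demo"/*; do
    name=$(basename "$f")
    [ "$name" = "fileNames.txt" ] && continue   # never delete the list itself
    # -q: rely on grep's exit code; -x: whole-line match; -F: no regex
    grep -qxF "$name" "$demo/fileNames.txt" || rm "$f"
done

ls "$demo"   # fileNames.txt keep1.txt keep2.txt (junk.txt is gone)
```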
Upvotes: 0
|
2018/03/16
| 828 | 2,142 |
<issue_start>username_0: I have a data set like this:
```
df <- data.frame(v1 = rnorm(10), col = rbinom(10, size=1,prob= 0.5))
rownames(df) <- letters[1:10]
> head(df)
v1 col
a -0.1806868 1
b 0.6934783 0
c -0.4658297 1
d 1.6760829 0
e -0.8475840 0
f -1.3499387 1
```
I plot it like this:
```
ggplot(df, aes(x = v1, y=rownames(df), group = col, color= col)) + geom_point()
```
[](https://i.stack.imgur.com/9ZJja.png)
Now I want to show only the rownames on the y-axis where `col` == 1.
The other names should not be displayed (but the points should be)
To add some context, I have a plot with many overlapping variable names on the y-axis, but I only want to display the names of the ones outside the dashed line[](https://i.stack.imgur.com/RtKup.png)<issue_comment>username_1: You could use `scale_y_discrete`:
```
set.seed(2017);
df <- data.frame(v1 = rnorm(10), col = rbinom(10, size=1,prob= 0.5))
rownames(df) <- letters[1:10]
library(ggplot2);
ggplot(df, aes(x = v1, y = rownames(df), group = col, color = col)) +
geom_point() +
scale_y_discrete(
limits = rownames(df),
labels = ifelse(df$col == 1, rownames(df), ""))
```
[](https://i.stack.imgur.com/ID7kZ.png)
Upvotes: 3 [selected_answer]<issue_comment>username_2: There is not much to add to the answer given by @MauritsEvers, I just had the idea that for your plot it might be desirable to have fewer horizontal lines that guide your eye.
We can use the `breaks` argument in `scale_y_discrete` for that.
```
set.seed(1); df <- data.frame(v1 = rnorm(10), col = rbinom(10, size=1,prob= 0.5))
rownames(df) <- letters[1:10]
axis_labels <- which(df$col == 1)
ggplot(df, aes(x = v1, y=rownames(df), group = col, color= col)) +
geom_point() +
scale_y_discrete(breaks = rownames(df)[axis_labels])
```
[](https://i.stack.imgur.com/Ad31G.png)
Upvotes: 2
|
2018/03/16
| 906 | 3,241 |
<issue_start>username_0: I am trying to read the server-side logs for the buckets in my Google Cloud Storage project via the `gcloud` command-line program (to solve an error I get using the storage client).
It does not seem like the logs are available in the [Stackdriver Logging UI](https://console.cloud.google.com/logs/viewer).
So, first question: **are these logs available at all? If so, how do I access them?**
It *looks* like it should be possible via `gcloud`:
```
Mortens-MacBook-Pro:~ skyfer$ gcloud logging resource-descriptors list | grep -i storage
gcs_bucket A Google Cloud Storage (GCS) bucket.
project_id,bucket_name,location
```
But using `gcloud logging read` e.g. like the following doesn't work:
```
Mortens-MacBook-Pro:~ skyfer$ gcloud logging read "resource.type=gcs_bucket AND logName=projects/my-project/logs/ AND textPayload:StorageException" --limit 10 --format json
[]
ERROR: (gcloud.logging.read) INVALID_ARGUMENT: Name is missing the logs component. Expected the form projects/[PROJECT_ID]/logs/[ID]
```
So an important question is: **what is the syntax of the `ID`?** The documentation I have been able to find, e.g. [LogEntry](https://cloud.google.com/logging/docs/reference/v2/rest/v2/LogEntry) and [Available Logs](https://cloud.google.com/logging/docs/view/logs_index), doesn't seem to give the answer.
I have tried various formats and none have worked: they either return syntax errors or no results. I have also tried leaving out `ID` but that simply returns an empty list:
```
Mortens-MacBook-Pro:~ skyfer$ gcloud logging read "resource.type=gcs_bucket AND logName=projects/my-project/logs/my-bucket" --limit 10 --format json
[]
```
If I change the name of the bucket to a bucket that doesn't exist it doesn't seem to make a difference, indicating that the bucket name is not part of the id.
The bucket exists in Google Cloud Storage: verified both via the UI and `gsutil`:
```
Mortens-MacBook-Pro:~ skyfer$ gsutil ls gs://my-bucket/
gs://my-bucket/03ea8f19-8135-4101-a04a-aef3f19b0fdb/
gs://my-bucket/59e86a67-d035-4e4a-bc56-7c2da5e8c908/
```
Any help would be much appreciated.<issue_comment>username_1: Yes, the logs are not available in Stackdriver.
There are 3 types of logging for Google Cloud Storage.
**Access logs:** They provide information for all of the requests made on a specified bucket and are created hourly
**Daily storage logs:** They provide information about the storage consumption of that bucket for the last day.
**Cloud Audit Logging:** This gives you access logs of API operations performed in Cloud Storage, you can find more details about it [here](https://cloud.google.com/storage/docs/audit-logs).
Your log files will have a specific format and you can find more about it [here](https://cloud.google.com/storage/docs/access-logs#downloading).
Once you figure the log file you want to download, you can download it via:
```
gsutil cp gs:///
```
Upvotes: 3 [selected_answer]<issue_comment>username_2: You can list all the logs using the command below:
`gcloud logging logs list`
You can read a particular log using the command below:
`gcloud logging read`
Upvotes: 1
|
2018/03/16
| 446 | 1,641 |
<issue_start>username_0: I have a table containing:
```
table = [[1, 'FANTASTIC FOUR', 'EXOTIC SPACE'],[4, 'CRIMSON PEAK', 'MINIONS','SUPERMAN'],[20, 'FANTASTIC FOUR', 'EXOTIC SPACE']]
```
and I'm writing a python function to traverse through the table, look for similarities in the string elements and printing out in the format:
```
Movie: FANTASTIC FOUR, EXOTIC SPACE
UserID: 1,20 #since user 1 and user 20 both watch exactly the same movie
```
I have tried writing:
```
i = 0
while i
```
but it's not working very well. I'm not that good at using while loops for printing so i'll appreciate some help on this.
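For what it's worth, the grouping described in the question does not really need index-based `while` loops; a dictionary keyed on each user's movie tuple does it in one pass. An illustrative sketch (not code from the original thread):

```python
# Group user IDs by the exact set of movies they watched
# (order preserved as given in each row).
table = [[1, 'FANTASTIC FOUR', 'EXOTIC SPACE'],
         [4, 'CRIMSON PEAK', 'MINIONS', 'SUPERMAN'],
         [20, 'FANTASTIC FOUR', 'EXOTIC SPACE']]

groups = {}
for row in table:
    user_id, movies = row[0], tuple(row[1:])
    groups.setdefault(movies, []).append(user_id)

for movies, users in groups.items():
    if len(users) > 1:  # only movie sets shared by several users
        print('Movie:', ', '.join(movies))
        print('UserID:', ','.join(str(u) for u in users))
# prints:
# Movie: FANTASTIC FOUR, EXOTIC SPACE
# UserID: 1,20
```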
|
2018/03/16
| 652 | 2,372 |
<issue_start>username_0: I seem to have issues when trying to create a mongoose connection. I am following a book published in 2014 so my immediate response was to check the mongoose docs but they agree that the book gives the correct format. Below is my connection:
```
var dbURI = 'mongodb://localhost/Loc8r';
mongoose.connect(dbURI);
```
As soon as I add these lines, I get the following error:
```
Mongoose connection error: mongodb://127.0.0.1/Loc8r
(node:743) UnhandledPromiseRejectionWarning: Unhandled promise rejection (rejection id: 1): MongoNetworkError: failed to connect to server [127.0.0.1:27017] on first connect [MongoNetworkError: connect ECONNREFUSED 127.0.0.1:27017]
(node:743) [DEP0018] DeprecationWarning: Unhandled promise rejections are deprecated. In the future, promise rejections that are not handled will terminate the Node.js process with a non-zero exit code.
```
If I remove the connection, I have no errors...but that would defeat the purpose of trying to connect to a DB. I also tried substituting `localhost` in the URI for `127.0.0.1` which is known to solve some issues but I had no luck there either. I am running the `localhost` on a simple `$nodemon` command via terminal and nothing else.
Useful info:
* MongoDB v3.6.3
* Mongoose v5.0.10
* Express v4.15.5
* Node v8.9.4
Help would be appreciated.
Thanks.
|
2018/03/16
| 456 | 1,960 |
<issue_start>username_0: I recently updated an app for a client and it's now ready for submission to the App Store. But after a talk with my client they told me that the previous version (not developed or submitted by me) had a kind of "password protection" on the App Store. They explained it as anyone could find the app on App Store but when you click "download" the user would need to enter a password (not the Apple ID password, more of a predefined password specifically for this app) to continue the download process.
I am used to submitting apps to the App Store, both paid and free, but I have never done this and don't honestly know how. My closest guess is that we need to upgrade the plan to an enterprise account, but from my understanding (and please correct me if I'm wrong) this will remove the app from the App Store search and only allow download from a link or file?
What way would you guys recommend?
Thanks in advance!
|
2018/03/16
| 1,654 | 6,181 |
<issue_start>username_0: I'm trying to "simulate" namespacing in python. I'm using inner and outer class hirarchies to create my namespaces. For example you want to save paths of files (like resources) in one location. I tried something like this:
```
src = #path to source folder
class Resources:
root = src + "Resources\\"
class Fonts:
root = Resources.root + "fonts\\"
font1 = root + "font1.ttf"
font2 = root + "font2.ttf"
class Images:
root = Resources.root + "images\\"
logo = root + "logo"
image1= root + "image1"
class StyleSheets:
root = Resources.root + "stylesheets\\"
default = root + "default.qss"
class JsonData:
root = src + "Data\\"
class TableEntries:
root = JsonData.root
entries1 = root + "Entries1.json"
entries2 = root + "Entries2.json"
```
Accessing elements would look like this:
```
logoPath = Resources.Images.image1
```
Unfortunatly this isn't working due to the following error:
```
root = Resources.root + "fonts\\"
NameError: name 'Resources' is not defined
```
**My Question**
Is it possible to set class variables of inner class based on class variables of outer class? If not, is there another way to access the elements as shown above without using multiple files?<issue_comment>username_1: >
> Is it possible to set class variables of inner class based on class variables of outer class?
>
>
>
Not without ressorting to a custom metaclass to process the inner classes, which will certainly not help readability nor maintainability (and will be - rightly - seen by any experienced python programmer as a total WTF).
**EDIT :** *well actually for your example snippet the metaclass solution is not that complicated, cf the end of this answer*
The reason is that in Python almost everything happens at runtime. `class` is an executable statement, and the class object is only created and bound to it's name *after* the end of the whole class statement's body.
>
> If not, is there another way to access the elements as shown above without using multiple files?
>
>
>
Quite simply (dumbed down example):
```
import os
# use a single leading underscore to mark those classes
# as "private" (=> not part of the module's API)
class _Fonts(object):
def __init__(self, resource):
self.font1 = os.path.join(resource.root, "font1.ttf")
self.font2 = os.path.join(resource.root, "font2.ttf")
class _Resources(object):
def __init__(self, src):
self.root = os.path.join(rsc, "Ressources")
self.Fonts = _Fonts(self)
# then instanciate it like any other class
src = "/path/to/source/folder"
Resources = _Resources(src)
print(Resources.Fonts.font1)
```
EDIT : after a bit more thinking a metaclass-based solution for your use case would not be that complicated (but this will NOT be anything generic):
```
import os
class ResourcesMeta(type):
def __init__(cls, name, bases, attrs):
for name in attrs:
obj = getattr(cls, name)
if isinstance(obj, type) and issubclass(obj, SubResource):
instance = obj(cls)
setattr(cls, name, instance)
class SubResourceMeta(type):
def __new__(meta, name, bases, attrs):
if not bases:
# handle the case of the SubResource base class
return type.__new__(meta, name, bases, attrs)
root = attrs.pop("root")
cls = type.__new__(meta, name, bases, {})
cls._root = root
cls._attrs = attrs
return cls
class SubResource(metaclass=SubResourceMeta):
def __init__(self, parent):
self.root = os.path.join(parent.root, self._root)
for name, value in self._attrs.items():
setattr(self, name, os.path.join(self.root, value))
class Resources(metaclass=ResourcesMeta):
root = "/path/to/somewhere"
class Fonts(SubResource):
root = "fonts"
font1 = "font1.ttf"
font2 = "font2.ttf"
class Images(SubResource):
root = "images"
logo = "logo"
image1= "image1"
```
Upvotes: 2 [selected_answer]<issue_comment>username_2: I think that you do not have clear the concept of class and instaces in OOP. If you want to store this kind of information `Resources` shoult not be a class, it should be an instance of a `Dir`class.
```
class Dir:
def __init__(self, path="/", parent=None):
self.parent = parent
self.path = path
self.contents = {}
def __getitem__(self, key):
return self.contents[key]
def create_subdir(name):
self.contents[name] = Dir(os.path.join(self.path + name), self)
def add_file(file):
self.contents[file] = file # You should probably also have a File type
# ...
resources = Dir(os.path.join(src, "Resources"))
resources.create_subdir("fonts")
fonts = resources["fonts"]
fonts.add_file("font1.ttf")
...
```
I've used `os.path.join` function to delegate to Python choosing the correct delimiter for each SO instead of hardcoding Windows delimiters as you have. The `__getitem__`method allows to get items as if the variable was a dictionary directly.
### EDIT:
You could take advantage of [pathlib](https://docs.python.org/3.6/library/pathlib.html#module-pathlib) standard module and add the attribute access notation (using '.' to acces the subdirectories) if you don't like the [div operator usage](https://docs.python.org/3.6/library/pathlib.html#operators) of pathlib.
```
from pathlib import Path as Path_, WindowsPath as WPath_, PosixPath as PPath_
import os
class Path(Path_):
def __new__(cls, *args, **kwargs):
return super().__new__(WindowsPath if os.name == 'nt' else PosixPath,
*args, **kwargs)
def __getattr__(self, item):
if item == '_str':
raise AttributeError
for i in self.iterdir():
if i.name == item:
return i
raise AttributeError
class WindowsPath(WPath_, Path):
pass
class PosixPath(PPath_, Path):
pass
current = Path()
subdir = current.subdir_name # current / 'subdir_name'
```
Upvotes: 0
|
2018/03/16
| 2,091 | 6,787 |
<issue_start>username_0: I want to check the size, max value, and min value of data type int, long and their unsigned form. The output of my program shows that both int and long have the same size, max, and min value, same goes to their unsigned form. Here is the output of my program:
```
Size of int : 4 byte, Max value : 2147483647, Min value : -2147483648
Size of long : 4 byte, Max value : 2147483647, Min value : -2147483648
Size of unsigned int : 4 byte, Max value : 4294967295, Min value : 0
Size of unsigned long : 4 byte, Max value : 4294967295, Min value : 0
```
And here is my code:
```
#include <stdio.h>
#include <limits.h>

int main(int argc, char *argv[])
{
 printf("Size of int : %ld byte, Max value : %d, Min value : %d\n", sizeof(int), INT_MAX, INT_MIN);
 printf("Size of long : %ld byte, Max value : %d, Min value : %d\n", sizeof(long), LONG_MAX, LONG_MIN);
 printf("\n");
 printf("Size of unsigned int : %ld byte, Max value : %u, Min value : %d\n", sizeof(unsigned int), UINT_MAX, 0);
 printf("Size of unsigned long : %ld byte, Max value : %lu, Min value : %d\n", sizeof(unsigned long), ULONG_MAX, 0);
return 0;
}
```
My question is, is this normal that int and long have the same size, max value, and min value? I am running the program using `gcc version 5.1.0 (tdm64-1)` compiler on Windows 10 64-bit machine.<issue_comment>username_1: Yes this is perfectly normal. An architecture with a 32 bit `int`, and 32 bit `long`, an a 64 bit pointer is called an *LLP64 data model* and is favoured by Windows (despite an Intel CPU itself using a 48 bit pointer internally).
(A 64 bit Linux architecture uses a *LP64 data model* which has a 32 bit `int`, a 64 bit `long`, and a 64 bit pointer.)
The C standard states that the *minimum* range for an `int` is -32767 to +32767, and a `long` -2147483647 to +2147483647. So both schemes are compliant with this.
Upvotes: 2 <issue_comment>username_2: This relationship always hold:
```
short int <= int <= long int.
```
So in some cases `int` may have the same size as `long`, while in other cases `int` may have the same size as `short`. But `int` will never exceed `long` and will never fall below `short`. This is what the above statement (inequality) says.
In your case, `int` is equal to `long`.
Upvotes: 3 <issue_comment>username_3: The C standard does permit `int` and `long` to have the same size and range. It's easiest to explain the rules for unsigned types, which have *minimum maximums*: each unsigned integer type must be able to represent 0 through *at least* some number. This is the table:
```
type minimum maximum
unsigned char 255 (2**8 - 1)
unsigned short 65,535 (2**16 - 1)
unsigned int 65,535 (2**16 - 1)
unsigned long 4,294,967,295 (2**32 - 1)
unsigned long long 18,446,744,073,709,551,615 (2**64 - 1)
```
So you can see that a configuration in which `unsigned int` and `unsigned long` have the same range is perfectly allowed, as long as that range is at least as big as the minimum range for `unsigned long`. The *signed* types are required to have the same *overall* value range as their unsigned counterparts, but shifted so that almost exactly half of the values are negative — unfortunately it's not as simple as "−2n−1 … 2n−1 − 1", because the standard continues to permit non-twos-complement implementations even though nobody has manufactured a CPU that does that in many years.
It's possible that you thought `long` would be able to represent up to 263 − 1 because that's true on *most* "64-bit" operating systems. But "64-bit" Windows is an exception. Almost all operating systems that call themselves "64-bit" use what is known as an "LP64 [ABI](https://en.wikipedia.org/wiki/Application_binary_interface)", in which the integer types and `void *` have these sizes:
```
sizeof(short) == 2
sizeof(int) == 4
sizeof(long) == 8
sizeof(long long) == 8
sizeof(void *) == 8
```
Windows instead uses an "LLP64" ABI, with
```
sizeof(short) == 2
sizeof(int) == 4
sizeof(long) == 4
sizeof(long long) == 8
sizeof(void *) == 8
```
This is for backward compatibility with 32-bit Windows, in which `long` and `int` are also the same size; Microsoft thought too much existing code would break if they changed the size of `long`.
(Note that having `sizeof(void*) > sizeof(long)` is, for reasons too complicated to get into here, forbidden by the original 1989 C standard. Because they were determined to use LLP64 for 64-bit Windows, Microsoft rammed a change into C99 to permit it, over literally everyone else's explicit objections. And then, for over a decade after C99 came out, they didn't bother implementing the C99 features (e.g. `uintptr_t` and `%zu`) that were supposed to substitute for the relaxed requirement, leading to cumulative man-years of extra work for people trying to write programs that work on both Windows and not-Windows. Not That I'm Bitter™.)
Upvotes: 2 <issue_comment>username_4: From the C Standard (6.2.5 Types)
>
> 8 For any two integer types with the same signedness and different
> integer conversion rank (see 6.3.1.1), **the range of values of the type
> with smaller integer conversion rank is a subrange of the values of
> the other type**.
>
>
>
So all that is required is that the range of values of the type `int` would be no greater than the range of values of the type long.
And (6.3.1.1 Boolean, characters, and integers)
>
> 1 Every integer type has an integer conversion rank defined as
> follows:
>
>
> — The rank of long long int shall be greater than the rank of long
> int, which shall be greater than the rank of int, which shall be
> greater than the rank of short int, which shall be greater than the
> rank of signed char.
>
>
>
So even though objects of the types `int` and `long` can have the same representation, and correspondingly the same range of values, the rank of the type `long` is nevertheless higher than the rank of the type `int`.
For example if the type `int` and `long` have the same representation the type of the expression `x + y` in the code snippet below will be `long`.
```
int x = 0;
long y = 0;
printf( "%ld\n", x + y );
```
It is not only the `int` and `long` types that can have the same representation. On 64-bit systems, for example, the types `long int` and `long long int` can also coincide.
For example the output of this program running at www.ideone.com
```
#include <stdio.h>
int main(void)
{
printf( "sizeof( long int ) = %zu\n", sizeof( long int ) );
printf( "sizeof( long long int ) = %zu\n", sizeof( long long int ) );
return 0;
}
```
is
```
sizeof( long int ) = 8
sizeof( long long int ) = 8
```
Upvotes: 0
|
2018/03/16
| 306 | 1,151 |
<issue_start>username_0: Module can be used from both side, how can i detect this from Module bootstrap file ([`yii\base\BootstrapInterface`](http://www.yiiframework.com/wiki/652/how-to-use-bootstrapinterface/#hh2))
Using `$app->id` is not a good idea.<issue_comment>username_1: You can use this simple function:
```
function getContext() {
    return basename(Yii::getAlias('@app'));
}
```
If you are running the advanced template it will return `'frontend'`, `'backend'` or `'console'`.
Upvotes: 2 <issue_comment>username_2: The problem lies in assuming that the application divides only into front \ back. There can be more than two or three sides. For example, we had a project that consisted of: front / admin / trade / management / central\_bank / public\_screen. What do you do in that case?
Most likely you are thinking about the wrong module architecture.
A good solution seems like:
```
class MyModule extends Module{
public $name_space = "";
}
//and when you define you configs:
[
'myModule' => [
'class' => 'common\modules\MyModule',
'name_space'=>'application_group'
],
]
//also you can bootstrap it
```
Upvotes: 1
|
2018/03/16
| 343 | 1,357 |
<issue_start>username_0: I have got one table (Table1) with some columns filled by data and one empty column. There is another table (Table2) with one column with data. There is no foreign key or any link to that tables just row numbers are equal. I want update empty column of Table1 by data from column of Table2 row by row (row1 from table2 to row1 from table1. Is any way to do it but not using export to file? Is possible to do that using while loop?
|
2018/03/16
| 299 | 1,118 |
<issue_start>username_0: I need to know a cron expression to run every Monday between 1:00 and 1:30 am.
I have tried the expression below, but it did not work.
```
1 * 1-2 ? * MON *
```
Can anyone help me to write cron expression?
|
2018/03/16
| 460 | 1,473 |
<issue_start>username_0: Is it possible to do formal verification with Chisel3 HDL language?
If yes, is there an open-source software to do that ?
I know that we can do verilog formal verification with Yosys, but what about chisel?<issue_comment>username_1: SpaceCowboy asked the same question [here](https://stackoverflow.com/questions/49800826/chisel-firrtl-verilog-backend-proof-of-work), and jkoening responded: not now, but maybe it will be done.
Upvotes: 2 <issue_comment>username_1: It's possible to use Yosys-smtbmc, with some little hacks described [here](http://www.fabienm.eu/flf/prove-chisel-design-with-yosys-smtbmc/), to «inject» formal properties into the generated Verilog.
Upvotes: 1 <issue_comment>username_1: There is a chisel package named [chisel-formal](https://github.com/tdb-alcorn/chisel-formal) now.
```
import chisel3.formal._
```
This extends Module with trait named Formal.
```
class MyModule extends Module with Formal {
//...
past(io.Mwrite, 1) (pMwrite => {
when(io.Mwrite === true.B) {
assert(pMwrite === false.B)
}
})
cover(countreg === 10.U)
//...
}
```
That allows using the assert(), assume(), cover(), past(), ... functions.
A full how-to is given in the [github repository](https://github.com/tdb-alcorn/chisel-formal).
Upvotes: 1 <issue_comment>username_1: Formal verification is now integrated into the official [chiseltest](https://github.com/ucb-bar/chiseltest) test library.
Upvotes: 2 [selected_answer]
|
2018/03/16
| 652 | 2,403 |
<issue_start>username_0: What is good about using `[[maybe_unused]]`?
Consider
```
int winmain(int instance, int /*prevInstance*/, const char */*cmdline*/, int show);
int winmain(int instance, [[maybe_unused]] int prevInstance, [[maybe_unused]] const char *cmdline, int show);
```
Some might insist that using comments is ugly, because this keyword was made and intended to be used under these circumstances, and I totally agree with that, but the `maybe_unused` keyword seems a bit too long to me, making the code slightly harder to read.
I would like to follow the standard as "strictly" as I can, but is it worth using?<issue_comment>username_1: If the parameter is definitely unused, `[[maybe_unused]]` is not particularly useful, unnamed parameters and comments work just fine for that.
`[[maybe_unused]]` is mostly useful for things that are *potentially* unused, like in
```
void fun(int i, int j) {
assert(i < j);
// j not used here anymore
}
```
This can't be handled with unnamed parameters, but if `NDEBUG` is defined, will produce a warning because `j` is unused.
Similar situations can occur when a parameter is only used for (potentially disabled) logging.
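The same pattern - a parameter consumed only by a debug check that may be compiled away - exists in other languages too. As a sketch of my own (not from the answer above), Python's `assert` is stripped under `python -O`, which `compile(..., optimize=1)` emulates in-process:

```python
src = """
def fun(i, j):
    assert i < j   # j is used only here
    return i
"""

# optimize=0 keeps the assert, so violating it raises.
ns = {}
exec(compile(src, '<demo>', 'exec', optimize=0), ns)
raised = False
try:
    ns['fun'](2, 1)
except AssertionError:
    raised = True
assert raised

# optimize=1 (the equivalent of `python -O`) strips the assert entirely,
# leaving j effectively unused - the situation [[maybe_unused]] covers in C++.
ns_opt = {}
exec(compile(src, '<demo>', 'exec', optimize=1), ns_opt)
assert ns_opt['fun'](2, 1) == 2   # no AssertionError: the check is gone
```

Python linters handle this with a leading-underscore naming convention rather than an attribute, but the underlying situation is the same.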
Upvotes: 8 [selected_answer]<issue_comment>username_2: [username_1's answer](https://stackoverflow.com/a/49320892/817643) is the definitive and undisputed explanation. I just want to present another example, which doesn't require macros. Specifically, C++17 introduced the `constexpr if` construct. So you may see template code like this (bar the stupid functionality):
```
#include <type_traits>

template <typename T>
auto add_or_double(T t1, T t2) noexcept {
    if constexpr (std::is_same_v<T, int>)
        return t1 + t2;
    else
        return t1 * 2.0;
}

int main(){
    add_or_double(1, 2);
    add_or_double(1.0, 2.0);
}
```
As of writing this, GCC 8.0.1 warns me about `t2` being unused when the else branch is the instantiated one. The attribute is indispensable in a case like this too.
Upvotes: 6 <issue_comment>username_3: I find [[maybe\_unused]] useful when you have a set of configuration constants that may or may not be used depending on the configuration. You are then free to change the configuration without having to define new constants or worry about unused ones.
I use this mainly in embedded code where you have to set specific values. Otherwise enumerations are generally better.
Upvotes: 1
|
2018/03/16
| 597 | 2,242 |
<issue_start>username_0: In java spring MVC Application I have a text file under resources folder,
What is the most efficient way to read this file from a service class? Can I read this file if I deploy the application as a war on AWS?
```
Resource resource = new ClassPathResource(fileLocationInClasspath);
InputStream resourceInputStream = resource.getInputStream();
```
or
```
InputStream is = getClass().getResourceAsStream(fileLocationInClasspath);
```
|
2018/03/16
| 647 | 2,262 |
<issue_start>username_0: <https://colab.research.google.com/notebooks/io.ipynb#scrollTo=KHeruhacFpSU>
This notebook help explains how to upload a file to Drive and then download it to Colaboratory, but my files are already in Drive.
Where can I find the file ID ?
```
# Download the file we just uploaded.
#
# Replace the assignment below with your file ID
# to download a different file.
#
# A file ID looks like: 1uBtlaggVyWshwcyP6kEI-y_W3P8D26sz
file_id = 'target_file_id'
```
|
2018/03/16
| 2,731 | 10,677 |
<issue_start>username_0: EDIT:
I am trying to populate a Firebase RecyclerView from Firebase, and it looks like everything is fine with fetching the data, but when it comes to populating the TextView it throws
```
java.lang.NullPointerException: Attempt to invoke virtual method 'void android.widget.TextView.setText(java.lang.CharSequence)
at com.example.cyrenians.cyreniansprototypenew.DummyFragment$1.onBindViewHolder(DummyFragment.java:64)
at com.example.cyrenians.cyreniansprototypenew.DummyFragment$1.onBindViewHolder(DummyFragment.java:55)
at com.firebase.ui.database.FirebaseRecyclerAdapter.onBindViewHolder(FirebaseRecyclerAdapter.java:118)
at android.support.v7.widget.RecyclerView$Adapter.onBindViewHolder(RecyclerView.java:6482)
at android.support.v7.widget.RecyclerView$Adapter.bindViewHolder(RecyclerView.java:6515)
at android.support.v7.widget.RecyclerView$Recycler.tryBindViewHolderByDeadline(RecyclerView.java:5458)
at android.support.v7.widget.RecyclerView$Recycler.tryGetViewHolderForPositionByDeadline(RecyclerView.java:5724)
at android.support.v7.widget.RecyclerView$Recycler.getViewForPosition(RecyclerView.java:5563)
at android.support.v7.widget.RecyclerView$Recycler.getViewForPosition(RecyclerView.java:5559)
at android.support.v7.widget.LinearLayoutManager$LayoutState.next(LinearLayoutManager.java:2229)
at android.support.v7.widget.LinearLayoutManager.layoutChunk(LinearLayoutManager.java:1556)
at android.support.v7.widget.LinearLayoutManager.fill(LinearLayoutManager.java:1516)
at android.support.v7.widget.LinearLayoutManager.onLayoutChildren(LinearLayoutManager.java:608)
at android.support.v7.widget.RecyclerView.dispatchLayoutStep2(RecyclerView.java:3693)
at android.support.v7.widget.RecyclerView.onMeasure(RecyclerView.java:3109)
at android.view.View.measure(View.java:22002)
at android.view.ViewGroup.measureChildWithMargins(ViewGroup.java:6580)
at android.widget.LinearLayout.measureChildBeforeLayout(LinearLayout.java:1514)
at android.widget.LinearLayout.measureVertical(LinearLayout.java:806)
at android.widget.LinearLayout.onMeasure(LinearLayout.java:685)
at android.view.View.measure(View.java:22002)
at android.view.ViewGroup.measureChildWithMargins(ViewGroup.java:6580)
at android.widget.FrameLayout.onMeasure(FrameLayout.java:185)
at android.view.View.measure(View.java:22002)
at android.support.constraint.ConstraintLayout.internalMeasureChildren(ConstraintLayout.java:934)
at android.support.constraint.ConstraintLayout.onMeasure(ConstraintLayout.java:973)
at android.view.View.measure(View.java:22002)
at android.view.ViewGroup.measureChildWithMargins(ViewGroup.java:6580)
at android.widget.FrameLayout.onMeasure(FrameLayout.java:185)
at android.support.v7.widget.ContentFrameLayout.onMeasure(ContentFrameLayout.java:139)
at android.view.View.measure(View.java:22002)
at android.view.ViewGroup.measureChildWithMargins(ViewGroup.java:6580)
at android.widget.LinearLayout.measureChildBeforeLayout(LinearLayout.java:1514)
at android.widget.LinearLayout.measureVertical(LinearLayout.java:806)
at android.widget.LinearLayout.onMeasure(LinearLayout.java:685)
at android.view.View.measure(View.java:22002)
at android.view.ViewGroup.measureChildWithMargins(ViewGroup.java:6580)
at android.widget.FrameLayout.onMeasure(FrameLayout.java:185)
at android.view.View.measure(View.java:22002)
at android.view.ViewGroup.measureChildWithMargins(ViewGroup.java:6580)
at android.widget.LinearLayout.measureChildBeforeLayout(LinearLayout.java:1514)
at android.widget.LinearLayout.measureVertical(LinearLayout.java:806)
at android.widget.LinearLayout.onMeasure(LinearLayout.java:685)
at android.view.View.measure(View.java:22002)
at android.view.ViewGroup.measureChildWithMargins(ViewGroup.java:6580)
at android.widget.FrameLayout.onMeasure(FrameLayout.java:185)
at com.android.internal.policy.DecorView.onMeasure(DecorView.java:721)
at android.view.View.measure(View.java:22002)
at android.view.ViewRootImpl.performMeasure(ViewRootImpl.java:2410)
at android.view.ViewRootImpl.measureHierarchy(ViewRootImpl.java:1498)
E/AndroidRuntime: at android.view.ViewRootImpl.performTraversals(ViewRootImpl.java:1751)
at android.view.ViewRootImpl.doTraversal(ViewRootImpl.java:1386)
at android.view.ViewRootImpl$TraversalRunnable.run(ViewRootImpl.java:6733)
at android.view.Choreographer$CallbackRecord.run(Choreographer.java:911)
at android.view.Choreographer.doCallbacks(Choreographer.java:723)
at android.view.Choreographer.doFrame(Choreographer.java:658)
at android.view.Choreographer$FrameDisplayEventReceiver.run(Choreographer.java:897)
at android.os.Handler.handleCallback(Handler.java:789)
at android.os.Handler.dispatchMessage(Handler.java:98)
at android.os.Looper.loop(Looper.java:164)
at android.app.ActivityThread.main(ActivityThread.java:6541)
at java.lang.reflect.Method.invoke(Native Method)
at com.android.internal.os.Zygote$MethodAndArgsCaller.run(Zygote.java:240)
at com.android.internal.os.ZygoteInit.main(ZygoteInit.java:767)
```
Here is my fragment class
```
public class DummyFragment extends android.support.v4.app.Fragment {
private FirebaseRecyclerAdapter<Product, ProductViewHolder> mFirebaseAdapter;
private RecyclerView mProduceList = null;
public Context c;
private LinearLayoutManager manager;
public DummyFragment() {
}
public static DummyFragment newInstance() {
DummyFragment fragment = new DummyFragment();
return fragment;
}
@Nullable
@Override
public View onCreateView(LayoutInflater inflater, @Nullable ViewGroup container, Bundle savedInstanceState) {
View itemView = inflater.inflate(R.layout.dummy_fragment, container, false);
FirebaseApp.initializeApp(c);
mProduceList = itemView.findViewById(R.id.recyclerView);
mProduceList.setLayoutManager(new LinearLayoutManager(getActivity()));
DatabaseReference productRef = FirebaseDatabase.getInstance().getReference().child("Products");
Query productQuery = productRef.orderByKey();
FirebaseRecyclerOptions<Product> productOptions = new FirebaseRecyclerOptions.Builder<Product>().setQuery(productQuery, Product.class).build();
mFirebaseAdapter = new FirebaseRecyclerAdapter<Product, ProductViewHolder>(productOptions) {
@Override
public ProductViewHolder onCreateViewHolder(ViewGroup parent, int viewType) {
LayoutInflater inflater = LayoutInflater.from(parent.getContext());
return new ProductViewHolder(inflater.inflate(R.layout.dummy_fragment, parent, false));
}
@Override
protected void onBindViewHolder(ProductViewHolder viewHolder, int position, Product product) {
viewHolder.post_name.setText(product.getTitle());
viewHolder.post_type.setText(product.getDesc());
}
};
mProduceList.setAdapter(mFirebaseAdapter);
return itemView;
}
@Override
public void onStart() {
super.onStart();
mFirebaseAdapter.startListening();
}
@Override
public void onStop() {
super.onStop();
mFirebaseAdapter.stopListening();
}
public static class ProductViewHolder extends RecyclerView.ViewHolder {
View mView;
TextView post_name;
TextView post_type;
public ProductViewHolder(View itemView) {
super(itemView);
mView = itemView;
post_name = (TextView) mView.findViewById(R.id.productTitle);
post_type = (TextView) mView.findViewById(R.id.productDesc);
}
public void setTitle(String title) {
post_name.setText(title);
}
public void setDesc(String desc) {
post_type.setText(desc);
}
}
}
}
```
and here is my model class
```
public class Product {
@Exclude
private String Title;
@Exclude private String Desc;
int image;
public Product()
{
}
@Keep
public String getTitle() {
return Title;
}
@Keep
public void setTitle(String title){
Title = title;
}
public String getDesc(){
return Desc;
}
public void setDesc(String desc){
Desc = desc;
}
public int getImage(){
return image;
}
public void setImage(int image){
this.image=image;
}
@Override
public boolean equals(Object o) {
if (this == o) return true;
if (o == null || getClass() != o.getClass()) return false;
Product model = (Product) o;
return (Title == null ? model.Title == null : Title.equals(model.Title))
&& (Desc == null ? model.Desc == null : Desc.equals(model.Desc));
}
@Override
public int hashCode() {
int result = Title == null ? 0 : Title.hashCode();
result = 31 * result + (Desc == null ? 0 : Desc.hashCode());
return result;
}
@Override
public String toString() {
return "Model{" +
"mTitle='" + Title + '\'' +
", mImage='" + Desc + '\'' +
'}';
}
}
```
I am not sure if my model class is set up properly and also if my viewholder is right. I am relatively new with Firebase and I am really struggling with this recyclerview so if anyone can provide a solution it will be massively appreciated.
Here are the layouts
```
```
and
```
xml version="1.0" encoding="utf-8"?
```<issue_comment>username_1: Change this:
```
viewHolder.post_name.setText(product.getTitle());
viewHolder.post_type.setText(product.getDesc());
```
to this:
```
viewHolder.setTitle(product.getTitle());
viewHolder.setDesc(product.getDesc());
```
Upvotes: 2 [selected_answer]<issue_comment>username_2: I got this error when I use a dimen value in `dimens.xml` without `dp`. Example
```
50
```
It looks strange because the application still builds successfully and the error message does not look relevant.
To fix problem, just change to
```
50dp
```
Hope it help anyone
Upvotes: 0
|
2018/03/16
| 449 | 1,802 |
<issue_start>username_0: I'm writing a webtest in Visual Studio 2015. The webtest I currently have allows me to run a static test.
I would like to spice things up and therefore add more realistic data. The data I want to use is stored in an Oracle Database 12c.
So I'm trying to add a new Data Source to the webtest. I enter the TNSName, Username and Password for which I would like to connect and test the connection. The connection can be established, but the list with tables I can choose from is empty.
Connecting to the same Database using the "Server Explorer" in Visual Studio 2015 works. And using this method I do get the full list of Tables contained in that Database. I can even query any of the tables.
So how can I fix my webtest to have access to a specific database table (row)?<issue_comment>username_1: If you can connect to the DB but you don't see the needed tables it should be a permission issue.
Do you use same credentials from "VS->Server Explorer" to connect to the DB?
If this is not the case, do you have more than one Oracle clients installed in your system? If yes, then most probably, the DataSource control uses the wrong client and the "Server Explorer" the correct one.
Upvotes: 2 <issue_comment>username_2: Are you using synonyms as proxies for your tables (e.g. for permission reasons)? Synonyms will not show up when querying the list of tables that the user can access. They need to be queried separately. When only the available tables are queried, but not the vendor-specific aliases, this might lead to an empty list.
Upvotes: 1 <issue_comment>username_3: You need to install `ODAC` for `Visual Studio 2015` to view the database tables. Here is the link for it.
<http://www.oracle.com/technetwork/topics/dotnet/downloads/odacmsidownload-2745497.html>
Upvotes: -1
|
2018/03/16
| 1,084 | 3,984 |
<issue_start>username_0: I assume that an ES6 class is an object as "everything" is objects in JavaScript. Is that a correct assumption?<issue_comment>username_1: From the point of view of `Object Oriented Programming` *class* is **not** an *object*. It is an *abstraction*. And every object of that is a concrete instance of that abstraction.
From the point of view of JavaScript, *class* is an *object*, because *class* is a *ES6* feature and under it simple function is used. It is not just an abstraction in Javascript, but also an object by itself. And that function is an object. It has it's own properties and functions.
So, speaking of JavaScript, not everything is an object. There are also primitive types - `number`, `string`, `boolean`, `undefined` and, from ES6, `symbol`. When you use methods on these primitive types (except `undefined`), they are temporarily converted into objects.
```js
const str = 'Text';
const strObj = new String('Text');
console.log(str);
console.log(strObj.toString());
console.log(typeof str);
console.log(typeof strObj);
```
There is also one extra primitive type `null`, but checking its type returns an *object*. This is a bug.
```js
console.log(typeof null);
```
Upvotes: 4 [selected_answer]<issue_comment>username_2: Yes, it is.
If you do:
```
class Hello { };
```
then `Hello` is in itself an object an can be used like any other:
```
console.log(Hello.toString()); // outputs 'class Hello { }'
```
As you say, *eveything* in JavaScript is an object
In fact:
```
console.log(Hello instanceof Object); // prints 'true'
```
Upvotes: 2 <issue_comment>username_3: ES6 class is a function, and any function is an object:
```
(class {}).constructor === Function
(class {}) instanceof Function === true
(class {}) instanceof Object === true
```
Although not every object is `Object` instance:
```
Object.create(null) instanceof Object === false
```
The statement that everything is an object is not correct. That's an oversimplified way to say that some primitive values in JavaScript can be *coerced* to objects:
```
(1).constructor === Number
1 instanceof Number === false
```
But primitives aren't objects until they are *converted*:
```
Object(1) instanceof Number === true
Object(1) instanceof Object === true
```
And `null` and `undefined` can't be coerced at all:
```
null instanceof Object === false
Object(null) instanceof Object === true // conversion
(null).constructor // coercion error
typeof null === 'object' // JS bug
```
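For contrast with all the coercion caveats above, a quick sketch of my own (not part of the original answer) showing that in Python the "everything is an object" slogan holds literally, classes included:

```python
# In Python, primitives are genuine objects: no coercion step is needed...
assert isinstance(1, object)
assert (1).bit_length() == 1          # methods work directly on literals

# ...and classes themselves are objects: instances of the metaclass `type`.
class Hello:
    pass

assert isinstance(Hello, type)
assert isinstance(Hello, object)
assert type(type) is type             # even `type` is an instance of itself
```

JavaScript gets close to this with classes (which are function objects), but as shown above its primitives only become objects through conversion.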
Upvotes: 3 <issue_comment>username_4: The real question is whether Javascript classes are *classes*!
In reality they are not. Javascript's object model is prototype-based, rather than class-based, which makes it hard to understand. The `class` syntax is a useful tool, but it is really only syntactic sugar to make the model easier to understand.
In reality, a JS class is actually a function, with a prototype associated with it.
```
class Hippo{};
Hippo.__proto__ === Function.prototype; // true
```
All JS functions are also objects.
```
Hippo.__proto__.__proto__ === Object.prototype; // true
```
The prettier way to say this is
```
Hippo instanceof Object; // true
```
So a JS class is an object and a function -- but it isn't really a class!
Upvotes: 2 <issue_comment>username_5: A class is a function in JavaScript. Only functions have a `prototype` property, not plain objects.
Check code below
```js
class MyClass {}
MyClass.prototype.value = 'My Class Value'; // Works same as function
const myclass = new MyClass();
console.log(myclass.value);
function MyFunc() {}
MyFunc.prototype.value = 'My Function Value';
const myfunc = new MyFunc();
console.log(myfunc.value);
var MyObject = {};
MyObject.prototype.value = 10; // Throws an error
```
Upvotes: 0 <issue_comment>username_6: This is a good question. From OOP 101, an object is an instance of a class, so at no point can a class itself be an object.
Upvotes: 0
|
2018/03/16
| 1,098 | 3,303 |
<issue_start>username_0: I am using Google Charts to show a stacked bar chart. I tried `groupwidth`, but it did not help in increasing the width.
Any help is much appreciated.
```js
function drawChart() {
// Define the chart to be drawn.
var data = google.visualization.arrayToDataTable([
['name', 'data1', 'data2','data3'],
['w1', 40,20,40],
['w2', 40,20,40],
['w2', 40,20,40],
['w2', 40,20,40], ['w2', 40,20,40],
['w2', 40,20,40], ['w2', 40,20,40],
['w2', 40,20,40],
]);
var options = {title: 'Percentage of Risk', isStacked:true,bar: {width:"200%",height:"1200%",gap:"10%"}};
// Instantiate and draw the chart.
var chart = new google.visualization.BarChart(document.getElementById('container'));
chart.draw(data, options);
}
google.charts.setOnLoadCallback(drawChart);
```
```css
#container {
width: 550px; height: 1200px; margin: 0 auto
}
```
```html
Google Charts Tutorial
google.charts.load('current', {packages: ['corechart']});
```<issue_comment>username_1: To increase the width of a group of bars in Google Charts, you have to use the `groupWidth` configuration.
The syntax for this configuration is:
`bar: {groupWidth: '100%'}`
which is placed in the chart `options` variable.
So for your specific code it would be:
```
var options = {title: 'Percentage of Risk',
isStacked:true,
bar: {groupWidth: '100%'}};
```
The width of the group of bars can be specified as pixels, for example `75` or as a percentage such as `50%`. For the percentage data type, `100%` means no space between the bars.
Official documentation for charts configurations: <https://developers.google.com/chart/interactive/docs/gallery/columnchart#configuration-options>
Upvotes: 2 <issue_comment>username_2: If you want to increase the width, then you should try this; I have put it in my code as well:
```
google.charts.load('current', { 'packages': ['corechart'] });
google.charts.setOnLoadCallback(drawVisualization);
function drawVisualization() {
    // Chart data
    var data = google.visualization.arrayToDataTable([
        ['SALES', 'FINANCIAL YEAR'],
        ['FY-16', 100],
        ['FY-17', 230],
        ['FY-18', 520],
        ['FY-19', 940],
        ['FY-20', 1400],
        ['FY-21', 2000]
    ]);
    // Chart custom options
    var options = {
        title: 'SALES',
        vAxis: { title: 'SALES', ticks: [0, 200, 400, 600, 800, 1000, 1200, 1400, 1600, 1800, 2000, 2200] },
        hAxis: { title: 'FINANCIAL YEAR' },
        seriesType: 'bars',
        colors: ['#f58a07'],
        legend: { position: 'bottom', maxLines: 2, pagingTextStyle: { color: '#374a6f' }, scrollArrows: { activeColor: '#666', inactiveColor: '#ccc' } },
        series: { 1: { pointSize: 2, type: 'line' } },
        animation: {
            duration: 1000,
            easing: 'linear',
            startup: true
        }
    };
    var chart = new google.visualization.ComboChart(document.getElementById('chart_div'));
    chart.draw(data, options);
}
$(window).resize(function () {
    drawVisualization();
});
```
and the accompanying CSS:
```
.chart {
    width: 100%;
    min-height: 500px;
}
@media(max-width: 667px) {
    .chart {
        width: 350px;
        min-height: 400px;
    }
}
```
Upvotes: 0
|
2018/03/16
| 963 | 3,156 |
<issue_start>username_0: I am new to Python & I am trying to learn how to XOR hex encoded ciphertexts against one another & then derive the ASCII value of this.
I have tried some of the functions as outlined in previous posts on this subject - such as bytearray.fromhex, binascii.unhexlify, decode("hex") and they have all generated different errors (obviously due to my lack of understanding). Some of these errors were due to my python version (python 3).
Let me give a simple example, say I have a hex encoded string ciphertext\_1 ("4A17") and a hex endoded string ciphertext\_2. I want to XOR these two strings and derive their ASCII value. The closest that I have come to a solution is with the following code:
```
result=hex(int(ciphertext_1, 16) ^ int(ciphertext_2, 16))
print(result)
```
This prints me a result of: 0xd07
(This is a hex string, to my understanding?)
I then try to convert this to its ASCII value. At the moment, I am trying:
```
binascii.unhexlify(result)
```
However this gives me an error: "binascii.Error: Odd-length string"
I have tried the different functions as outlined above, as well as trying to solve this specific error (strip function gives another error) - however I have been unsuccessful. I realise my knowledge and understanding of the subject are lacking, so i am hoping someone might be able to advise me?
Full example:
```
#!/usr/bin/env python
import binascii
ciphertext_1="4A17"
ciphertext_2="4710"
result=hex(int(ciphertext_1, 16) ^ int(ciphertext_2, 16))
print(result)
print(binascii.unhexlify(result))
```<issue_comment>username_1: ```
from binascii import unhexlify
ciphertext_1 = "4A17"
ciphertext_2 = "4710"
xored = (int(ciphertext_1, 16) ^ int(ciphertext_2, 16))
# We format this integer: hex, no leading 0x, uppercase
string = format(xored, 'X')
# We pad it with an initial 0 if the length of the string is odd
if len(string) % 2:
string = '0' + string
# unexlify returns a bytes object, we decode it to obtain a string
print(unhexlify(string).decode())
#
# Not much appears, just a CR followed by a BELL
```
Or, if you prefer the `repr` of the string:
```
print(repr(unhexlify(string).decode()))
# '\r\x07'
```
Upvotes: 2 [selected_answer]<issue_comment>username_2: When doing byte-wise operations like XOR, it's often easier to work with `bytes` objects (since the individual bytes are treated as integers). From [this question](https://stackoverflow.com/questions/5649407/hexadecimal-string-to-byte-array-in-python), then, we get:
```
ciphertext_1 = bytes.fromhex("4A17")
ciphertext_2 = bytes.fromhex("4710")
```
XORing the bytes can then be accomplished as in [this question](https://stackoverflow.com/questions/29408173/byte-operations-xor-in-python), with a comprehension. Then you can convert that to a string:
```
result = [c1 ^ c2 for (c1, c2) in zip(ciphertext_1, ciphertext_2)]
result = ''.join(chr(c) for c in result)
```
I would probably take a slightly different angle and create a `bytes` object instead of a list, which can be decoded into your string:
```
result = bytes(b1 ^ b2 for (b1, b2) in zip(ciphertext_1, ciphertext_2)).decode()
```
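For completeness, the two answers can be combined into one small runnable sketch (Python 3, standard library only; the helper name `xor_hex` is mine, not from either answer):

```python
from binascii import unhexlify

def xor_hex(hex_a, hex_b):
    # XOR two equal-length hex strings byte-wise and return the decoded text
    a, b = unhexlify(hex_a), unhexlify(hex_b)
    return ''.join(chr(x ^ y) for x, y in zip(a, b))

result = xor_hex("4A17", "4710")
print(repr(result))  # '\r\x07' -- a CR followed by a BELL, as noted above
```

Note that `zip` silently truncates if the two ciphertexts differ in length, so equal-length inputs are assumed.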
Upvotes: 0
|
2018/03/16
| 658 | 2,272 |
<issue_start>username_0: Have been searching for the solution to my problem now already for a while and have been playing around regex101.com for a while but cannot find a solution.
The problem I am facing is that I have to make a string select for different inputs, thus I wanted to do this with Regular expressions to get the wanted data from these strings.
The regular expression will come from a configuration for each string seperately. (since they differ)
The string below is obtained with an XPath: `//body/div/table/tbody/tr/td/p[5]`, but I cannot dig any lower into this to retrieve the right data, or can I?
The string I am using at the moment as example is the following:
```
**Kontaktdaten des Absenders:**
**Name:** Wanted data
**Telefon:**
[XXXXXXXXX](tel:XXXXXXXXX)
```
From this string I am trying to get the "Wanted data"
My regular expression so far is the following:
```
(?<=<\/strong> )(.*)(?=
)
```
But this returns the whole:
```
**Name:** Wanted data
**Telefon:** [XXXXXXXXX](tel:XXXXXXXXX)
```
I thought I could solve this with a repeat group
```
((:?(?<=<\/strong> )(.*)(?=
))+)
```
But this returns the same output as without the repeat group.
I know I could build a for { } loop around this regex to gain the same output, but since this is the only regular expression I have to do this for (but means I have to change it for all the other data) I was wondering if it is possible to do this in a regular expression.
Thank you for the support already so far.<issue_comment>username_1: [**Regex is the wrong tool for parsing markup.**](https://stackoverflow.com/q/6751105/290085) You have a proper XML parsing tool, XPath, in hand. Finish the job with it:
This XPath,
```
strong[.='Name:']/following-sibling::text()[1]
```
when appended to your original XPath,
```
//body/div/table/tbody/tr/td/p[5]/strong[.='Name:']/following-sibling::text()[1]
```
will finish the job of selecting the text node immediately following the `**Name:**` label, as requested, with no regex hacks over markup required.
Upvotes: 2 [selected_answer]<issue_comment>username_2: You can try to match everything but tag markers:
```
(?<=<\/strong> )([^<>]*)(?=
)
```
[Demo](https://regex101.com/r/yOJIxA/2/)
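As a side check of this lookbehind approach (illustrated in Python, whose `re` module supports fixed-width lookbehind; the markup snippet below is made up, and the trailing line-break lookahead is dropped since `[^<>]*` already stops at the next tag):

```python
import re

html = ('<strong>Name:</strong> Wanted data<br>'
        '<strong>Telefon:</strong> 123456789<br>')

# The lookbehind anchors each match right after a closing </strong> tag
print(re.findall(r'(?<=</strong> )[^<>]*', html))  # ['Wanted data', '123456789']
```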
Upvotes: -1
|
2018/03/16
| 457 | 1,377 |
<issue_start>username_0: What happens to the object returned from the last line of following code
```
class weight
{
int kilogram;
int gram;
public:
void getdata ();
void putdata ();
void sum_weight (weight,weight) ;
weight sum_weight (weight) ;
};
weight weight :: sum_weight(weight w2)
{
weight temp;
temp.gram = gram + w2.gram;
temp.kilogram=temp.gram/1000;
temp.gram=temp.gram%1000;
temp.kilogram+=kilogram+w2.kilogram;
return(temp);
}
int main(){
//.....//
w3=w2.sum_weight(w1);
w2.sum_weight(w1);
//.....//
}
```
Does it remains in the memory till completion or it gets deleted.
|
2018/03/16
| 146 | 596 |
<issue_start>username_0: I'm assuming if I log into a project and there are User Stories then the project was created using the Agile template and if there are Product Backlog Items and Bugs then the project was created using the Scrum template. Is this correct?<issue_comment>username_1: Yes, this is correct. Also if you see Requirements then project was created with CMMI template.
Upvotes: 2 [selected_answer]<issue_comment>username_2: Yes, you can use the workitem types to check for your process template.
CMMI is Requirement
Scrum is Product backlog item
Agile is User Story
Upvotes: 2
|
2018/03/16
| 1,820 | 6,068 |
<issue_start>username_0: I have a text file containing several function blocks and some of them are duplicates. I want to create a new file which contains only unique Function blocks. e.g.
input.txt (I have updated the example):
```
Func (a1,b1) abc1
{
xyz1;
{
xy1;
}
xy1;
}
Func (a2,b2) abc2
{
xyz2;
{
xy2;
rst2;
}
xy2;
}
Func (a1,b1) abc1
{
xyz1;
{
xy1;
}
xy1;
}
Func (a3,b3) abc3
{
xyz3;
{
xy3;
rst3;
def3;
}
xy3;
}
Func (a1,b1) abc1
{
xyz1;
{
xy1;
}
xy1;
}
```
And want to have output.txt as:
```
Func (a1,b1) abc1
{
xyz1;
{
xy1;
}
xy1;
}
Func (a2,b2) abc2
{
xyz2;
{
xy2;
rst2;
}
xy2;
}
Func (a3,b3) abc3
{
xyz3;
{
xy3;
rst3;
def3;
}
xy3;
}
```
I found one solution using `awk` to remove duplicate line, something like:
```
$ awk '!a[$0]++' input.txt > output.txt
```
But the issue is that the above solution matches only single line not a text block. I wanted to combine this `awk` solution with the regex to match a single function block: `'/^FUNC(.|\n)*?\n}/'`
But I was not able to do that. Any suggestion/solution would be very helpful.<issue_comment>username_1: if your code blocks are separated with empty lines, you can define record separator (and output record separator)...
```
$ awk -v RS= -v ORS='\n\n' '!a[$0]++' input.txt > output.txt
```
**NB.** Works on toy examples, however this is fragile since any empty line in the code block will break the logic. Similarly you can't depend on curly brace since it may appear in the code block as well.
**UPDATE**
For the updated input this may work better
```
$ awk -v ORS='\n\n' '{record=($1~/^Func/)?$0:record RS $0}
/^}/ && !a[record]++{print record} '
```
here we define the record that starts with a "Func" keyword and end with a curly brace on the first position. Accumulate the lines for the record and print with done. Set the ORS to have empty lines between records.
Upvotes: 2 <issue_comment>username_2: As the OP changed the requirement and examples, I have re-written the code. Could you please try it and let me know if this helps you (reading Input\_file 2 times here).
```
awk 'FNR==NR && /Func/ && !a[$0]++{gsub(/^ +/,"");!b[$0]++;next} FNR!=NR && /Func/{flag=($0 in b)?1:"";delete b[$0]} flag' Input_file Input_file
```
Adding a non-one-liner version of the solution too now.
```
awk '
FNR==NR && /Func/ && !a[$0]++{
gsub(/^ +/,"");
!b[$0]++;
next}
FNR!=NR && /Func/{
flag=($0 in b)?1:"";
delete b[$0]}
flag
' Input_file Input_file
```
Upvotes: 1 <issue_comment>username_3: adapt this code for your real purpose (don't know the exact protocol and format of the language in sample). Code is self commented
```
awk '
# at every new function
/^Func[[:space:]]*[(]/ {
# print last function if keeped
if ( Keep ) print Code
# new function name
Name=$NF
# define to keep or avoid (keep if not yet in the list)
Keep = ! ( Name in List)
# put fct name in list
List[ Name ]
# clean code in memory
Code = ""
}
# at each line, load the line into the code
# if code is not empty, add old code + new line
{ Code = ( Code ? Code "\n" : "" ) $0 }
# at the end, print last code if needed
END { if ( Keep ) print Code }
' sample.txt
```
Upvotes: 1 <issue_comment>username_4: ```
$ awk '$1=="Func"{ f=!seen[$NF]++ } f' file
Func (a1,b1) abc1
{
xyz1;
{
xy1;
}
xy1;
}
Func (a2,b2) abc2
{
xyz2;
{
xy2;
rst2;
}
xy2;
}
Func (a3,b3) abc3
{
xyz3;
{
xy3;
rst3;
def3;
}
xy3;
}
```
The above just assumes that every Func definition is on its own line and that line ends with the function name.
All it does is look for a "Func" line and then set a flag `f` to true if this is the first time we've seen the function name at the end of the line and false otherwise (using the common awk idiom `!seen[$NF]++` which you were already using in your question but named your array `a[]`). Then it prints the current line if `f` is true (i.e. you're following the Func definition of a previously unseen function name) and skips it otherwise (i.e. you're following the Func definition of a function name that had been seen previously).
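For readers more comfortable in Python, the same first-seen filtering can be sketched roughly as follows (names are mine; like the awk one-liner, it assumes each definition line starts with `Func` and ends with the function name):

```python
def dedupe_blocks(lines):
    seen = set()
    keep = False
    out = []
    for line in lines:
        parts = line.split()
        if parts and parts[0] == "Func":
            name = parts[-1]
            keep = name not in seen   # same idea as awk's !seen[$NF]++
            seen.add(name)
        if keep:
            out.append(line)
    return out

text = "Func (a1,b1) abc1\n{\n xy1;\n}\nFunc (a1,b1) abc1\n{\n xy1;\n}\n"
print("".join(dedupe_blocks(text.splitlines(keepends=True))))
# prints the abc1 block exactly once
```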
Upvotes: 3 <issue_comment>username_5: Thanks to all for their solutions. They were correct as per the example I posted, but my actual task was a bit more generic. I found a generic solution in Python, as the above mentioned response did not work perfectly(may be because my knowledge in bash is limited).
My generic solution using Python is as follows:
```
import re
import os
testFolder = "./Path"
#Usage: Remove duplicate function block from one or more .txt files available in testFolder
#Iterating through the list of all the files available
for testFilePath in os.listdir(testFolder):
if testFilePath.endswith(".txt"):
#Extracting path for each text file found
inputFile = open (testFolder + "/" + testFilePath, "r")
#Creating a reduced folder in the output path
outputPath = testFolder + "/Reduced"
if not os.path.exists(outputPath):
os.makedirs(outputPath)
outputFile = open (outputPath + "/" + testFilePath, "w")
#Reading all the content into a single string
fileContent = inputFile.read()
        #Pattern for matching a Function block. re.S lets '.' match newlines too,
        #so one match spans the whole multi-line block; re.M anchors '^' per line
        pattern = re.compile(r'^FUNC.*?\n}', re.M | re.S)
        # Creating a list of function blocks
        funcList = pattern.findall(fileContent)
        #Creating a set of unique function blocks, thus removing duplicate data
        uniqueFuncList = set(funcList)
        #Writing each Function block to the output file separated by a new line
        for element in uniqueFuncList:
            outputFile.write(element + "\n\n")
inputFile.close()
outputFile.close()
```
Upvotes: 1 [selected_answer]
|
2018/03/16
| 644 | 2,389 |
<issue_start>username_0: I have a route where I pass an id, but I don't want to show the id in the URL:
```
``
```
This gets converted to the URL <https://local..../invite-members/5>,
but instead I want <https://local..../invite-members>, while the functionality should remain the same, i.e. I should still get the id in invite-members through `this.props.match.params.groupID`. Please help.
using react router `"react-router-dom": "^4.2.2",`<issue_comment>username_1: Passing data in the `params` object will *always* result in that data being shown in the URL. Because the `params` object is built from the url.
Upvotes: 0 <issue_comment>username_2: If you want to change url to '/invite-members', you can add the Redirect component. And in case you want to save groupId, you could save it to your component state:
```js
import React, { PureComponent } from "react";
import PropTypes from "prop-types";
import {
Router,
Route,
Link,
Switch,
Redirect
} from "react-router-dom";
class Root extends PureComponent {
// add groupId field to your component
// In case you use redux or any another state management library, you can save groupId to store
state = { groupId: null };
render() {
const { store, history } = this.props;
// just for example I defined '/' path to redirect on /invite-members url
return (
(
)}
/>
(
)}
/>
{
return (
{
this.setState({ groupId });
}}
/>
);
}}
/>
);
}
}
export default Root;
class RedirectAndSaveGroupId extends PureComponent {
componentDidMount() {
// save groupId to Root component
this.props.onSetGroupId(this.props.groupId);
}
render() {
// redirect to /invite-members without groupId
return (
);
}
}
// Just for demo. In this.props.groupId we can receive groupId
class InviteMembers extends PureComponent {
render() {
return this.props.groupId;
}
}
```
Note that, in case you are using a state management library such as Redux, you can store the group id there.
Upvotes: 1 <issue_comment>username_3: I maybe have a very simple solution :
Router link :
```
{name}
```
In the Targeted file :
```
state = this.props.location.state
QueryParameters = () => {
const id = this.state.id
return { id }
}
```
And launch your query requiring the ID. It does not appear in the url.
Upvotes: 1
|
2018/03/16
| 651 | 2,475 |
<issue_start>username_0: Hi, I want to know how to run an Angular application permanently. If I run the `npm start` command on a local server, it gives a URL with a port, like **localhost:4200**, but when I close the terminal my project no longer runs.
The same thing happens on the live server when I run my command through PuTTY.
**ng serve --host {ipadddress}**. then it gives me a URL like **mysiteip:4200**. but when i close putty my site not run. i want to know that how to run angular application permanently.
|
2018/03/16
| 752 | 2,890 |
<issue_start>username_0: I am using OpenTok to stream a video file as a `MediaStreamTrack`.
Sample code:
```
const mediaStream = $('#myPreview')[0].captureStream()
OT.initPublisher(el, {
style: {
nameDisplayMode: "on",
buttonDisplayMode: "off",
},
// resolution: "320x240",
// frameRate: 20,
audioSource: mediaStream.getAudioTracks()[0],
videoSource: mediaStream.getVideoTracks()[0],
});
```
It works fine, but the problem is that when there are around 20 viewers the video quality is pixelated.
More context:
I upload a video and use the blob to play the video in the preview.
I'm open for better solutions:
a. Upload video to server and stream edited moderate quality(not sure what to do) video in `myPreview`.
b. Upload video to 3rd party service and load video via their player.
Thanks in advance.
|
2018/03/16
| 1,193 | 4,681 |
<issue_start>username_0: I was testing some commands and I ran
```
$ kubectl delete nodes --all
```
and it deletes (de-registers) all the nodes, including the masters. Now I can't connect to the cluster (well, obviously, as the master is deleted).
Is there a way to prevent this as anyone could accidentally do this?
Extra Info: I am using KOps for deployment.
P.S. It does not delete the EC2 instances, and the nodes come back up after an EC2 instance reboot on all the instances.<issue_comment>username_1: By default, you are using something like a superuser who can do anything he wants with a cluster.
To limit access to the cluster for other users you can use [RBAC](https://kubernetes.io/docs/admin/authorization/rbac) authorization. With RBAC rules you can manage access and limits per resource and action.
In a few words, to do that you need to:
1. Create new cluster by Kops with `--authorization RBAC` or modify existing one by adding 'rbac' option to cluster's configuration to 'authorization' section:
```
authorization:
  rbac: {}
```
2. Now, we can follow [that](https://docs.bitnami.com/kubernetes/how-to/configure-rbac-in-your-kubernetes-cluster/) instruction from Bitnami to create a user. For example, let's create a user which has access only to the `office` namespace and only for a few actions. So, we need to create a namespace first:
`kubectl create namespace office`
3. Create a key and certificates for new user:
`openssl genrsa -out employee.key 2048`
`openssl req -new -key employee.key -out employee.csr -subj "/CN=employee/O=bitnami"`
4. Now, using your CA authority key (it's available in the S3 bucket under PKI) we need to approve the new certificate:
`openssl x509 -req -in employee.csr -CA CA_LOCATION/ca.crt -CAkey CA_LOCATION/ca.key -CAcreateserial -out employee.crt -days 500`
5. Creating credentials:
`kubectl config set-credentials employee --client-certificate=/home/employee/.certs/employee.crt --client-key=/home/employee/.certs/employee.key`
6. Setting a right context:
`kubectl config set-context employee-context --cluster=YOUR_CLUSTER_NAME --namespace=office --user=employee`
7. Now we have a user without access to anything. Let's create a new role with limited access. Here is an example of a Role which grants access only to deployments, replicasets and pods, allowing to create, delete and modify them and nothing more. Create a file `role-deployment-manager.yaml` with the Role configuration:
```
kind: Role
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  namespace: office
  name: deployment-manager
rules:
- apiGroups: ["", "extensions", "apps"]
  resources: ["deployments", "replicasets", "pods"]
  verbs: ["get", "list", "watch", "create", "update", "patch", "delete"]
```
8. Create a new file `rolebinding-deployment-manager.yaml` with Rolebinding, which will attach your Role to user:
```
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: deployment-manager-binding
  namespace: office
subjects:
- kind: User
  name: employee
  apiGroup: ""
roleRef:
  kind: Role
  name: deployment-manager
  apiGroup: ""
```
9. Now apply those configurations:
```
kubectl create -f role-deployment-manager.yaml
kubectl create -f rolebinding-deployment-manager.yaml
```
So, now you have a user with limited access and he cannot destroy your cluster.
Upvotes: 3 [selected_answer]<issue_comment>username_2: username_1 describes a good way of preventing what you've described. Below I give details of how you can ensure the apiserver remains accessible even if someone does accidentally delete all the node objects:
Losing connectivity to the apiserver by deleting node objects will only happen if the components necessary for connecting to the apiserver (e.g. the apiserver itself and etcd) are managed by a component (i.e. the kubelet) that depends on the apiserver being up (GKE for example can scale down to 0 worker nodes, leaving no node objects, but the apiserver will still be accessible).
As a specific example, my personal cluster has a single master node with all the control plane components described as static Pod manifests and placed in the directory referred to by the `--pod-manifest-path` flag on the kubelet on that master node. Deleting all the node objects as you did in the question caused all my workloads to go into a pending state but the apiserver was still accessible in this case because the control plane components are run regardless of whether the kubelet can access the apiserver.
Common ways to prevent what you've just described is to run the apiserver and etcd as static manifests managed by the kubelet as I just described or to run them independently of any kubelet, perhaps as systemd units.
Upvotes: 1
|
2018/03/16
| 313 | 1,143 |
<issue_start>username_0: I have a problem using Twig in Pyrocms. I am trying to echo a variable within a shorthand if statement in Twig.
```
style="background-image: {{ (not link.bgcolor is empty ? 'linear-gradient(transparent, {{link.bgcolor}}),' : '')|raw }} url('{{link.image.url()}}');"
```
The statement is correct, but the displayed value actually is `linear-gradient(transparent, {{link.bgcolor}}),` the `{{link.bgcolor}}` is not getting parsed by Twig. How can I use `{{}}` tags within another `{{}}` tags?<issue_comment>username_1: You have to concat that output,
```
{{ not link.bgcolor is empty ? 'linear-gradient(transparent, '~link.bgcolor~'),' : '' }}
```
Upvotes: 1 <issue_comment>username_2: You're already in Twig-context, since you've opened it with `{{`. So you can refer to variables without adding an additional `{{ ... }}`. You just need to get out of the string context and concatenate the variable with the concatenation operator `~`. It should then look something like this:
```
{{ (not link.bgcolor is empty ? 'linear-gradient(transparent, ' ~ link.bgcolor ~ '),' : '')|raw }}
```
Upvotes: 3 [selected_answer]
|
2018/03/16
| 527 | 1,767 |
<issue_start>username_0: I have a modal popup in a loop with database values. In the popup there is a reply textarea and I want to post that value using AJAX. But I am getting an empty value for the textarea when I alert the value on the click event using jQuery. Here is my code:
CODE:
```
<?php
$sn=1;$rep=1;
foreach($view_queries as $q) { ?>
[Reply](#)
?>
×
#### Reply To Query
Query:
<?php echo ucfirst($q->query); ?>
Reply:
Reply
<?php $sn++; $rep++; } ?>
```
HERE IS THE SCRIPT FOR CHECKING VALUE IN ALERT:
```
$(document).ready(function(){
$(document).on("click",".form\_click",function(event){
event.preventDefault();
var a=$('#' + $(this).data('target')).text();
alert(a);
});
});
```<issue_comment>username_1: You should use .val() instead of .text(). .text() gets the HTML contents of an HTML element; .val() is used for getting the contents of HTML inputs.
Also as <NAME> already stated, id's should be unique.
You could give the fields an id with an index set in PHP.
So in every loop you increment the index and add that to your id values, like so:
id=`"<?= $index ?>_mesg"`.
And in your button do:
`Reply`
And then in your jquery you can do var a=$('#' + $(this).data('target')).val();
See the added data-target in your button. And the $(this).data('target') in the jquery for retrieving the value of the target data attribute.
Upvotes: 2 [selected_answer]<issue_comment>username_2: ```
$(document).ready(function(){
$(document).on("click",".form\_click",function(event){
event.preventDefault();
var a= $(this).parents("form.send\_reply").find("textarea.mesg").text();
alert(a);
});
});
```
change `id="mesg"` to `class="mesg"` in your loop. Also make sure you don't repeat the same ID in your loop. Change `id="send_reply"` to `class="send_reply"`
Upvotes: 0
|
2018/03/16
| 887 | 2,532 |
<issue_start>username_0: I have to match numbers between 1 and 33689 and after this it should come A, AB or ABC.
I have now %1, `^[0-9\.]{0,5}.*[A-z]+$` and I am working with Oracle Business Intelligence. So, it is not a direct database query. With MySQL I have made with `^[a-zA-Z0-9_]{1,5}([^[:digit:]]|$)` and I have made an order by with
```
regexp_substr(replace(number,'.',''),'^(\d+)',1,1,NULL,1)
```
and it works very well.
But I cannot make it 1 to 1 in the Oracle Business Intelligence.<issue_comment>username_1: Your problems appear to be related to Oracle Business Intelligence (OBIEE) more than regexes. The basic regex of 1-5 digits followed by 1-3 letters is achieved with `^[0-9]{1,5}[A-Z]{1,3}$`. If you want to avoid prefix zeros and [match a specific numeric range](https://www.regular-expressions.info/numericranges.html), the following matches the range 0-33689:
```
0|[1-9][0-9]{0,3}|[12][0-9]{4}|3[0-2][0-9]{3}|33[0-5][0-9]{2}|336[0-7][0-9]|3368[0-9]
```
And 1-3 'A's is quite trivially `A{1,3}`.
As the blog post [Regular Expressions in OBIEE](https://www.rittmanmead.com/blog/2009/12/regular-expressions-in-obiee/) suggests,
>
> The only issue here is that OBIEE does not support regular expressions in its SQL language, so I have to use the EVALAUTE command to pass Oracle's regular expression syntax back through to the database.
>
>
>
It continues to give an example of this:
```
Evaluate('regexp_substr(%1,''regex_goes_here''), "FIELD_NAME")'
```
Since you didn't provide the context in which you're executing this regular expression, it is entirely unclear if you're failing to describe a proper pattern, if *Oracle's regular expression syntax* lacks some of the features you're using (I can't claim to know this dialect), but e.g. character class shorthands (`\d`) and interval quantifiers (`{1,3}`) are not supported by all regex engines.
Perhaps your problem lies with calling `Evaluate` or `regexp_substr`.
Upvotes: 1 <issue_comment>username_2: Alternative solution:
match any 5-digit that is followed by A AA or AAA
```
/^[0-9]{5}(A?){3}$/
```
and parse any found match to integer
```
Pseudocode:
var tmp = parse_to_int(match)
```
compare the result with 33689
```
Pseudocode:
if(tmp <= 33689)
{
//proceed as you wish
}
```
Upvotes: 0 <issue_comment>username_3: You could try this for the range ending in A, AA or AAA:
[`^(?:336[0-8][0-9]|33[0-5][0-9]{2}|3[0-2][0-9]{3}|[12][0-9]{4}|[1-9][0-9]{1,3}|[1-9])A{1,3}$`](https://regex101.com/r/ZxIjAf/1/)
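A quick way to sanity-check a range regex like this one is to brute-force it over the numeric range (Python used here purely for verification; the pattern is copied from this answer):

```python
import re

pattern = re.compile(r'^(?:336[0-8][0-9]|33[0-5][0-9]{2}|3[0-2][0-9]{3}'
                     r'|[12][0-9]{4}|[1-9][0-9]{1,3}|[1-9])A{1,3}$')

# The numeric part should accept exactly 1..33689 (no leading zeros)
matched = {n for n in range(40000) if pattern.match(f"{n}A")}
print(min(matched), max(matched), len(matched))  # 1 33689 33689

# The suffix accepts one to three A's only
assert pattern.match("42AAA") and not pattern.match("42AAAA")
```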
Upvotes: 0
|
2018/03/16
| 813 | 2,370 |
<issue_start>username_0: I try to use `hjkl` for navigation in VS code, but I get thrown off whenever a code completion drop down appears as I must use the arrow keys to navigate those. Is there any way around this? How do you do?
[](https://i.stack.imgur.com/9K0Rk.png)
|
2018/03/16
| 754 | 2,629 |
<issue_start>username_0: I am using `jira-ruby` Gem.
```
require 'jira-ruby'
options = {
:username => 'xxxxxxxx',
:password => '********',
:site => 'https://xxx.yyyy.com',
:context_path => '',
:auth_type => :basic,
:use_ssl => true
}
client = JIRA::Client.new(options)
project = client.Project.find('P-NAME')
project.issues.each do |issue|
puts "#{issue.id} - #{issue.summary}"
end
```
Here, instead of passing the username and password, I want to pass an API token. How can I do that?
Normal curl command which is working fine is :
```
curl -X GET -H "Authorization: Basic " "https:///rest/api/2/issue/"
```<issue_comment>username_1: I found the documentation really unclear in this aspect. Here is what worked for me so far:
```
options = {
:site => 'https://my.jira.site',
:context_path => '/my_jira_path',
:auth_type => :oauth,
:consumer_key => 'jira_consumer_key_name',
:consumer_secret => 'jira_consumer_key_secret',
:access_token => 'jira_oauth_access_token',
:access_secret => 'jira_oauth_access_secret',
:private_key_file => 'path/to/private_key_file',
}
```
In my case I had manually pre-authorized my app with Jira via oauth authentication, because this is a script that doesn't enable for a callback to obtain the token auth back, therefore I used the `access_token` and `access_secret` obtained this way.
After creating the client with `JIRA::Client.new(options)` I had to also set the token manually (this might not be necessary if you are able to get the callback, but I haven't explored that way):
```
client.set_access_token(options[:access_token],options[:access_secret])
```
Hope this works for you too.
Upvotes: 0 <issue_comment>username_2: I found no direct solution for this one. Instead I followed the plain **REST** approach, which is simple. Atlassian has good [api documentation](https://developer.atlassian.com/cloud/jira/platform/rest).
```
request_payload = {
body: body, # Body as JSON
query: params, # URL Parameters
headers: {
'Authorization' => "Basic #{auth_token}",
'Content-Type' => 'application/json'
}
}
response = HTTParty.send(:post, url, request_payload)
puts response
```
Please replace `#{auth_token}` with your **API** key. Hope this will help somebody.
Upvotes: 1 [selected_answer]<issue_comment>username_3: So the GEM constructs the basic auth headers. All you need to do is use your Jira email as :username and then the API token as :password. Then it will authenticate.
The Oauth way doesn't work with Jira Cloud REST API. Only Jira Server is compatible.
Upvotes: 1
|
2018/03/16
| 598 | 1,956 |
<issue_start>username_0: I wanted to output some scores in a rank from a MySQL database with PHP. After my select query, I try echoing the value but all I get is `Resource id #10`. This is my code:
```
$rank = mysql_query("SELECT 1+COUNT(*) FROM `class_ranking` WHERE overall_scores > (SELECT overall_scores FROM `class_ranking` WHERE ref_id='$id')");
While ($rk = mysql_fetch_array($rank)){
echo $rk['overall_scores'];
}
```
Please I need your assistance. Thanks.<issue_comment>username_1: Set an alias for the count:
`SELECT 1+COUNT(*) AS overall_scores`
Upvotes: 2 <issue_comment>username_2: Write it this way: `1+COUNT(*) AS overall_scores`
```
$rank = mysql_query("SELECT 1+COUNT(*) as overall_scores FROM `class_ranking` WHERE overall_scores > (SELECT overall_scores FROM `class_ranking` WHERE ref_id='$id')");
While ($rk = mysql_fetch_array($rank)){
echo $rk['overall_scores'];
}
```
Upvotes: 2 <issue_comment>username_3: You need to use mysql\_fetch\_assoc() instead of mysql\_fetch\_array(), like this:
```
$rank = mysql_query("SELECT 1+COUNT(*) AS overall_scores FROM `class_ranking` WHERE overall_scores > (SELECT overall_scores FROM `class_ranking` WHERE ref_id='$id')");
$finalArr = [];
while ($rk = mysql_fetch_assoc($rank)){
    $finalArr[] = $rk['overall_scores'];
}
echo '<pre>';
print_r($finalArr);
```
Upvotes: 0 <issue_comment>username_4: To start, `SELECT 1+COUNT(*)` will return something like
```
+------------+
| 1+count(*) |
+------------+
| 4 |
+------------+
```
Therefore, this won't work
```
While ($rk = mysql_fetch_array($rank)){
echo $rk['overall_scores'];
}
```
because there's no `overall_scores` key in the fetched row. You have to alias it, as in `SELECT 1+COUNT(*) AS overall_scores FROM...`, and do
```
$result=mysql_query("SELECT 1+COUNT(*) AS overall_scores FROM ...");
$data=mysql_fetch_assoc($result);
echo $data['overall_scores']; // Or $data['1+COUNT(*)'] if you did not alias it
```
Upvotes: 1
|
2018/03/16
| 4,753 | 17,927 |
<issue_start>username_0: I am trying to make an Android build using the command
`ionic cordova build android`
but I am facing an error like:
>
> Minimum supported Gradle version is 4.1. Current version is 3.3. If using t
> he gradle wrapper, try editing the distributionUrl in D:\BAXA\_POS\Development\tr
> unk\Mobile\baxa\_pos\gradle\wrapper\gradle-wrapper.properties to gradle-4.1-all.zip
>
>
>
code in build.gradle is
```
/*
Licensed to the Apache Software Foundation (ASF) under one
or more contributor license agreements. See the NOTICE file
distributed with this work for additional information
regarding copyright ownership. The ASF licenses this file
to you under the Apache License, Version 2.0 (the
"License"); you may not use this file except in compliance
with the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing,
software distributed under the License is distributed on an
"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
KIND, either express or implied. See the License for the
specific language governing permissions and limitations
under the License.
*/
apply plugin: 'com.android.application'
buildscript {
repositories {
jcenter()
maven {
url "https://maven.google.com"
}
}
// Switch the Android Gradle plugin version requirement depending on the
// installed version of Gradle. This dependency is documented at
// http://tools.android.com/tech-docs/new-build-system/version-compatibility
// and https://issues.apache.org/jira/browse/CB-8143
dependencies {
classpath 'com.android.tools.build:gradle:3.0.1'
}
}
// Allow plugins to declare Maven dependencies via build-extras.gradle.
allprojects {
repositories {
jcenter()
maven {
url "https://maven.google.com"
}
}
}
//task wrapper(type: Wrapper) {
// gradleVersion = '2.14.1'
//}
// Configuration properties. Set these via environment variables, build-extras.gradle, or gradle.properties.
// Refer to: http://www.gradle.org/docs/current/userguide/tutorial_this_and_that.html
ext {
apply from: 'CordovaLib/cordova.gradle'
// The value for android.compileSdkVersion.
if (!project.hasProperty('cdvCompileSdkVersion')) {
cdvCompileSdkVersion = null;
}
// The value for android.buildToolsVersion.
if (!project.hasProperty('cdvBuildToolsVersion')) {
cdvBuildToolsVersion = null;
}
// Sets the versionCode to the given value.
if (!project.hasProperty('cdvVersionCode')) {
cdvVersionCode = null
}
// Sets the minSdkVersion to the given value.
if (!project.hasProperty('cdvMinSdkVersion')) {
cdvMinSdkVersion = null
}
// Whether to build architecture-specific APKs.
if (!project.hasProperty('cdvBuildMultipleApks')) {
cdvBuildMultipleApks = null
}
// .properties files to use for release signing.
if (!project.hasProperty('cdvReleaseSigningPropertiesFile')) {
cdvReleaseSigningPropertiesFile = null
}
// .properties files to use for debug signing.
if (!project.hasProperty('cdvDebugSigningPropertiesFile')) {
cdvDebugSigningPropertiesFile = null
}
// Set by build.js script.
if (!project.hasProperty('cdvBuildArch')) {
cdvBuildArch = null
}
// Plugin gradle extensions can append to this to have code run at the end.
cdvPluginPostBuildExtras = []
}
// PLUGIN GRADLE EXTENSIONS START
// PLUGIN GRADLE EXTENSIONS END
def hasBuildExtras = file('build-extras.gradle').exists()
if (hasBuildExtras) {
apply from: 'build-extras.gradle'
}
// Set property defaults after extension .gradle files.
if (ext.cdvCompileSdkVersion == null) {
ext.cdvCompileSdkVersion = privateHelpers.getProjectTarget()
}
if (ext.cdvBuildToolsVersion == null) {
ext.cdvBuildToolsVersion = privateHelpers.findLatestInstalledBuildTools()
}
if (ext.cdvDebugSigningPropertiesFile == null && file('debug-signing.properties').exists()) {
ext.cdvDebugSigningPropertiesFile = 'debug-signing.properties'
}
if (ext.cdvReleaseSigningPropertiesFile == null && file('release-signing.properties').exists()) {
ext.cdvReleaseSigningPropertiesFile = 'release-signing.properties'
}
// Cast to appropriate types.
ext.cdvBuildMultipleApks = cdvBuildMultipleApks == null ? false : cdvBuildMultipleApks.toBoolean();
ext.cdvMinSdkVersion = cdvMinSdkVersion == null ? null : Integer.parseInt('' + cdvMinSdkVersion)
ext.cdvVersionCode = cdvVersionCode == null ? null : Integer.parseInt('' + cdvVersionCode)
def computeBuildTargetName(debugBuild) {
def ret = 'assemble'
if (cdvBuildMultipleApks && cdvBuildArch) {
def arch = cdvBuildArch == 'arm' ? 'armv7' : cdvBuildArch
ret += '' + arch.toUpperCase().charAt(0) + arch.substring(1);
}
return ret + (debugBuild ? 'Debug' : 'Release')
}
// Make cdvBuild a task that depends on the debug/arch-sepecific task.
task cdvBuildDebug
cdvBuildDebug.dependsOn {
return computeBuildTargetName(true)
}
task cdvBuildRelease
cdvBuildRelease.dependsOn {
return computeBuildTargetName(false)
}
task cdvPrintProps << {
println('cdvCompileSdkVersion=' + cdvCompileSdkVersion)
println('cdvBuildToolsVersion=' + cdvBuildToolsVersion)
println('cdvVersionCode=' + cdvVersionCode)
println('cdvMinSdkVersion=' + cdvMinSdkVersion)
println('cdvBuildMultipleApks=' + cdvBuildMultipleApks)
println('cdvReleaseSigningPropertiesFile=' + cdvReleaseSigningPropertiesFile)
println('cdvDebugSigningPropertiesFile=' + cdvDebugSigningPropertiesFile)
println('cdvBuildArch=' + cdvBuildArch)
println('computedVersionCode=' + android.defaultConfig.versionCode)
android.productFlavors.each { flavor ->
println('computed' + flavor.name.capitalize() + 'VersionCode=' + flavor.versionCode)
}
}
android {
sourceSets {
main {
manifest.srcFile 'AndroidManifest.xml'
java.srcDirs = ['src']
resources.srcDirs = ['src']
aidl.srcDirs = ['src']
renderscript.srcDirs = ['src']
res.srcDirs = ['res']
assets.srcDirs = ['assets']
jniLibs.srcDirs = ['libs']
}
}
defaultConfig {
versionCode cdvVersionCode ?: new BigInteger("" + privateHelpers.extractIntFromManifest("versionCode"))
applicationId privateHelpers.extractStringFromManifest("package")
if (cdvMinSdkVersion != null) {
minSdkVersion cdvMinSdkVersion
}
}
lintOptions {
abortOnError false;
}
compileSdkVersion cdvCompileSdkVersion
buildToolsVersion cdvBuildToolsVersion
if (Boolean.valueOf(cdvBuildMultipleApks)) {
productFlavors {
armv7 {
versionCode defaultConfig.versionCode*10 + 2
ndk {
abiFilters "armeabi-v7a", ""
}
}
x86 {
versionCode defaultConfig.versionCode*10 + 4
ndk {
abiFilters "x86", ""
}
}
all {
ndk {
abiFilters "all", ""
}
}
}
}
/*
ELSE NOTHING! DON'T MESS WITH THE VERSION CODE IF YOU DON'T HAVE TO!
else if (!cdvVersionCode) {
def minSdkVersion = cdvMinSdkVersion ?: privateHelpers.extractIntFromManifest("minSdkVersion")
// Vary versionCode by the two most common API levels:
// 14 is ICS, which is the lowest API level for many apps.
// 20 is Lollipop, which is the lowest API level for the updatable system webview.
if (minSdkVersion >= 20) {
defaultConfig.versionCode += 9
} else if (minSdkVersion >= 14) {
defaultConfig.versionCode += 8
}
}
*/
compileOptions {
sourceCompatibility JavaVersion.VERSION_1_6
targetCompatibility JavaVersion.VERSION_1_6
}
if (cdvReleaseSigningPropertiesFile) {
signingConfigs {
release {
// These must be set or Gradle will complain (even if they are overridden).
keyAlias = ""
keyPassword = <PASSWORD>" // And these must be set to non-empty in order to have the signing step added to the task graph.
storeFile = null
storePassword = <PASSWORD>"
}
}
buildTypes {
release {
signingConfig signingConfigs.release
}
}
addSigningProps(cdvReleaseSigningPropertiesFile, signingConfigs.release)
}
if (cdvDebugSigningPropertiesFile) {
addSigningProps(cdvDebugSigningPropertiesFile, signingConfigs.debug)
}
}
dependencies {
compile fileTree(dir: 'libs', include: '*.jar')
// SUB-PROJECT DEPENDENCIES START
debugCompile(project(path: "CordovaLib", configuration: "debug"))
releaseCompile(project(path: "CordovaLib", configuration: "release"))
// SUB-PROJECT DEPENDENCIES END
}
def promptForReleaseKeyPassword() {
if (!cdvReleaseSigningPropertiesFile) {
return;
}
if ('__unset'.equals(android.signingConfigs.release.storePassword)) {
android.signingConfigs.release.storePassword = privateHelpers.promptForPassword('Enter key store password: ')
}
if ('__unset'.equals(android.signingConfigs.release.keyPassword)) {
android.signingConfigs.release.keyPassword = privateHelpers.promptForPassword('Enter key password: ');
}
}
gradle.taskGraph.whenReady { taskGraph ->
taskGraph.getAllTasks().each() { task ->
if (task.name == 'validateReleaseSigning' || task.name == 'validateSigningRelease') {
promptForReleaseKeyPassword()
}
}
}
def addSigningProps(propsFilePath, signingConfig) {
def propsFile = file(propsFilePath)
def props = new Properties()
propsFile.withReader { reader ->
props.load(reader)
}
def storeFile = new File(props.get('key.store') ?: privateHelpers.ensureValueExists(propsFilePath, props, 'storeFile'))
if (!storeFile.isAbsolute()) {
storeFile = RelativePath.parse(true, storeFile.toString()).getFile(propsFile.getParentFile())
}
if (!storeFile.exists()) {
throw new FileNotFoundException('Keystore file does not exist: ' + storeFile.getAbsolutePath())
}
signingConfig.keyAlias = props.get('key.alias') ?: privateHelpers.ensureValueExists(propsFilePath, props, 'keyAlias')
signingConfig.keyPassword = props.get('keyPassword', props.get('key.alias.password', signingConfig.keyPassword))
signingConfig.storeFile = storeFile
signingConfig.storePassword = props.get('storePassword', props.get('key.store.password', signingConfig.storePassword))
def storeType = props.get('storeType', props.get('key.store.type', ''))
if (!storeType) {
def filename = storeFile.getName().toLowerCase();
if (filename.endsWith('.p12') || filename.endsWith('.pfx')) {
storeType = 'pkcs12'
} else {
storeType = signingConfig.storeType // "jks"
}
}
signingConfig.storeType = storeType
}
for (def func : cdvPluginPostBuildExtras) {
func()
}
// This can be defined within build-extras.gradle as:
// ext.postBuildExtras = { ... code here ... }
if (hasProperty('postBuildExtras')) {
postBuildExtras()
}
```
Code for gradle-wrapper.properties
```
#DATE
distributionBase=GRADLE_USER_HOME
distributionPath=wrapper/dists
zipStoreBase=GRADLE_USER_HOME
zipStorePath=wrapper/dists
distributionUrl=https\://services.gradle.org/distributions/gradle-3.3-all.zip
```
I have tried everything, like changing the distributionUrl and changing the default Gradle version, but every time I run the Android build command the distributionUrl changes itself back to the old one. Can anyone suggest something that could work in my case?<issue_comment>username_1: This is because of the following code:
```
task wrapper(type: Wrapper) {
gradleVersion = '2.14.1'
}
```
try commenting it. Then change your `gradle-wrapper.properties` to:
```
#Tue Jan 30 17:55:46 WIB 2018
distributionBase=GRADLE_USER_HOME
distributionPath=wrapper/dists
zipStoreBase=GRADLE_USER_HOME
zipStorePath=wrapper/dists
distributionUrl=https\://services.gradle.org/distributions/gradle-4.1-all.zip
```
Upvotes: 1 <issue_comment>username_2: I found the solution to my problem after trying many alternatives, i.e.:
For those who use ionic, go to
```
[project name]/platforms/android/cordova/lib/builders/GradleBuilder.js
```
on line 164 you will see the following:
```
var distributionUrl = process.env['CORDOVA_ANDROID_GRADLE_DISTRIBUTION_URL'] || 'http\\://services.gradle.org/distributions/gradle-2.13-all.zip';
```
This line is used to create your gradle-wrapper.properties, so any changes to the gradle-wrapper.properties won't matter. All you need to do is change the url to the latest version, sync the gradle and the problem is solved.
If you just want to change the gradle version in the Android studio, go to `File>settings>project` and change the gradle version. After you apply, it will sync the project and you are ready to build.
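Since the URL is hard-coded, the edit can also be scripted. Here is a sketch of the substitution run against a scratch copy (the real target path is the `GradleBuilder.js` file mentioned above; `mktemp` just keeps the demo self-contained):

```shell
# In a real project the target would be
# platforms/android/cordova/lib/builders/GradleBuilder.js
f=$(mktemp)
cat > "$f" <<'EOF'
var distributionUrl = process.env['CORDOVA_ANDROID_GRADLE_DISTRIBUTION_URL'] || 'http\\://services.gradle.org/distributions/gradle-2.13-all.zip';
EOF
sed 's|gradle-2\.13-all\.zip|gradle-4.1-all.zip|' "$f" > "$f.new"
grep -o 'gradle-4\.1-all\.zip' "$f.new"   # prints gradle-4.1-all.zip
rm -f "$f" "$f.new"
```

Writing to a new file instead of `sed -i` keeps the command portable between GNU and BSD sed.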
Upvotes: 7 [selected_answer]<issue_comment>username_3: If you are a Mac user, simply follow the steps below:
```
$ brew install gradle
```
The above command installs the latest Gradle.
Then check the version:
```
$ gradle -v
```
Then change the Gradle version in the build.gradle file:
```
dependencies {
classpath 'com.android.tools.build:gradle:2.2.3'
}
```
It worked for me.
Upvotes: 1 <issue_comment>username_4: Please check your `PATH` variable to see which Gradle version it resolves to.
Upvotes: -1 <issue_comment>username_5: if you're using cordova/phonegap go to
```
platforms/android/app/build.gradle
platforms/android/cordova/lib/builders/GradleBuilder.js
platforms/android/cordova/lib/builders/StudioBuilder.js
```
and look for the old gradle version according to the error then replace it with the new one.
Upvotes: 3 <issue_comment>username_6: I had the same problem with cordova.
You need to edit the gradle version to the latest in these files.
```
platforms/android/app/build.gradle
platforms/android/cordova/lib/builders/GradleBuilder.js
platforms/android/cordova/lib/builders/StudioBuilder.js
platforms/android/build.gradle
```
At the time of writing, the latest version is 4.10.2. Current versions can be found [here](https://gradle.org/releases/).
In */android/app/build.gradle* find this part of the code.
```
task wrapper(type: Wrapper) {
gradleVersion = '4.10.2'
}
```
In *GradleBuilder.js* find this part of the code.
```
var distributionUrl = process.env['CORDOVA_ANDROID_GRADLE_DISTRIBUTION_URL'] || 'https\\://services.gradle.org/distributions/gradle-4.10.2-all.zip';
```
In *StudioBuilder.js* find this part of the code.
```
var distributionUrl = process.env['CORDOVA_ANDROID_GRADLE_DISTRIBUTION_URL'] || 'https\\://services.gradle.org/distributions/gradle-4.10.2-all.zip';
```
In the last file, */android/build.gradle*, change the following line to the current version. The current version of the Android Gradle plugin can be found on [this site](https://developer.android.com/studio/releases/gradle-plugin).
```
classpath 'com.android.tools.build:gradle:3.1.2'
```
Thanks username_5
Upvotes: 5 <issue_comment>username_7: In Linux run these commands:
```
gedit ~/.bashrc
```
Put the export at the end of the file.
```
export CORDOVA_ANDROID_GRADLE_DISTRIBUTION_URL="https\\://services.gradle.org/distributions/gradle-4.6-all.zip"
```
Run this command to load the new variable:
```
source ~/.bashrc
```
Check that the Gradle version in the exported URL is the one you want.
Upvotes: 3 <issue_comment>username_8: For Ionic users:
After changing the URL everywhere (in the js, in the gradle file etc etc), it was still not working.
To make it work, I had to do this:
```
ionic cordova rm android
ionic cordova add android@latest
```
Remove the android platform then add it again.
Upvotes: 2 <issue_comment>username_9: Go to Project->android->Wrapper->Gradle->Wrapper->Open gradle-wrapper.properties
change your version:
```
distributionBase=GRADLE_USER_HOME
distributionPath=wrapper/dists
zipStoreBase=GRADLE_USER_HOME
zipStorePath=wrapper/dists
distributionUrl=https\://services.gradle.org/distributions/gradle-4.1-all.zip
```
change this line:
```
distributionUrl=https\://services.gradle.org/distributions/gradle-4.6-all.zip
```
Rebuild your application
Upvotes: 2 <issue_comment>username_10: In Android Studio, I just cleaned the subfolders in `/app/.gradle`.
After that, Gradle was able to fetch the correct wrapper version.
Upvotes: 0 <issue_comment>username_11: For Ionic 4 users:
Shilpi's Answer is correct except that the path for building gradle can now be found in:
```
[project name]/platforms/android/cordova/lib/builders/ProjectBuilder.js
```
Search for `distributionUrl`
Upvotes: 1 <issue_comment>username_12: Change the distributionUrl in gradle-wrapper.properties to the latest Gradle link:
```
distributionBase=GRADLE_USER_HOME
distributionPath=wrapper/dists
zipStoreBase=GRADLE_USER_HOME
zipStorePath=wrapper/dists
distributionUrl=https\://services.gradle.org/distributions/gradle-4.1-all.zip
```
Upvotes: 0 <issue_comment>username_13: I faced this issue after a cordova CLI update from **9** to **10**, together with upgrading Android Studio from **4.0.0** to **4.0.1**.
I have a mostly unmodified Cordova Android app, and ran
**cordova platforms rm android**
followed by
**cordova platforms add android**
These commands updated my cordova-android version too:
* Old: `"cordova-android": "^8.1.0"`
* New: `"cordova-android": "^9.0.0"`
Upvotes: 0
|
2018/03/16
| 562 | 2,398 |
<issue_start>username_0: When I receive message from FCM, I am able to print all it's content till the point
```
if let message = userInfo[AnyHashable("message")] {
print(message)
}
```
Message body contains string like => `{"sent_at":1521203039,"sender":{"name":"sender_name","id":923},"id":1589,"body":"sdfsadf sdfdfsadf"}`
Message type is **Any**, I wish to read name and body from this message object.
```
func handleNotification(_ userInfo: [AnyHashable: Any]) -> Void {
if let notificationType = userInfo["job_type"] as? String {
if notificationType == "mobilock_plus.message" {
//broadcast message recieved
if let message = userInfo[AnyHashable("message")] {
print(message)
//TODO :- read name and body of message object.
}
}
}
}
```<issue_comment>username_1: I think what you are looking for is converting a string to a JSON object.
The following answer can help you do it:
[How to convert a JSON string to a dictionary?](https://stackoverflow.com/questions/30480672/how-to-convert-a-json-string-to-a-dictionary#comment49040110_30480672)
Upvotes: 1 <issue_comment>username_2: So with the help of @username_1 answer, i was able to get values like below.
```
if let messageString = userInfo[AnyHashable("message")] as? String {
if let dictionaryMessage = UtilityMethods.shared.convertToDictionary(text: messageString) {
if let messageBody = dictionaryMessage["body"] as? String {
if let sender = dictionaryMessage["sender"] as? [String:Any] {
if let senderName = sender["name"] as? String {
}
}
}
}
}
```
function to convert JSON string into Dictionary
```
func convertToDictionary(text: String) -> [String: Any]? {
if let data = text.data(using: .utf8) {
do {
return try JSONSerialization.jsonObject(with: data, options: []) as? [String: Any]
} catch {
print(error.localizedDescription)
}
}
return nil
}
```
Upvotes: 1 [selected_answer]
|
2018/03/16
| 1,427 | 4,127 |
<issue_start>username_0: on bash 4.4.12 using jq 1.5 with this one-liner `IFS=_ read -r -a a < <(jq -ncj '["a","b","c"][]+"_"') ; printf '%s\n' "${a[@]}"` I get a properly delimited output
>
> a
>
>
> b
>
>
> c
>
>
>
for elements a, b and c respectively, BUT if I try the same thing with a null delimiter like so: `IFS= read -r -a a < <(jq -ncj '["a","b","c"][]+"\u0000"') ; printf '%s\n' "${a[@]}"` then I would get only one array element containing
>
> abc
>
>
>
Why doesn't this work as expected?
Furthermore, if you try `IFS= read -r -d '' -a a < <(jq -ncj '["a","b","c"][]+"\u0000"') ; printf '%s\n' "${a[@]}"`, you will be surprised to get an array with only the first "**a**" element:
>
> a
>
>
>
My goal is to find an approach without iterating over elements with any kind of a loop.
Edit: `readarray -d` is not a solution, since I need the code to run in bash prior to version 4.4.<issue_comment>username_1: Use `readarray`, which gained a `-d` analogous to the same option on `read` in `bash` 4.4:
```
$ readarray -d $'\0' -t a < <(jq -ncj '["a","b","c"][]+"\u0000"')
$ declare -p a
declare -a a=([0]="a" [1]="b" [2]="c")
```
`-d ''` works as well; since shell strings are null terminated, `''` is, technically, the string containing the null character.
---
Without `readarray -d` support, you can use a `while` loop with `read`, which should work in any version of `bash`:
```
a=()
while read -d '' -r item; do
a+=("$item")
done < <( jq -ncj '["a","b","c"][]+"\u0000"' )
```
This is the best you can do unless you know something about the array elements that would let you pick an alternate delimiter that isn't part of any of the elements.
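A self-contained version of that loop, with `printf '%s\0'` standing in for the `jq ... + "\u0000"` producer so it runs without jq installed (this substitution is mine, not part of the original answer; it still requires bash for arrays and process substitution):

```shell
a=()
while IFS= read -r -d '' item; do
  a+=("$item")
done < <(printf '%s\0' "a" "b" "c")
printf '%s\n' "${a[@]}"   # a, b and c on separate lines
```

The final `read` fails at EOF with nothing pending, so exactly three NUL-terminated records land in the array.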
Upvotes: 4 [selected_answer]<issue_comment>username_2: Since you are NOT using jq's `-r` option, the question arises as to whether the problem as posed in the title is perhaps an ["XY" problem](http://mywiki.wooledge.org/XyProblem). If the goal is simply to assign JSON values to a bash array, consider:
```
$ readarray -t a < <(jq -nc '["a","b","c"][]') ; printf '%s\n' "${a[@]}"
"a"
"b"
"c"
```
Notice that the bash array values are recognizably JSON values (in this case, JSON strings, complete with the double-quotation marks).
Even more tellingly:
```
$ readarray -t a < <(jq -nc '["a\\b","\"b\"","c"][]') ; printf '%s\n' "${a[@]}"
"a\\b"
"\"b\""
"c"
```
Compare the loss of "JSONarity" that happens when using readarray with NUL:
```
$ readarray -d "" a < <(jq -ncj '["a\\b","\"b\"","c"][]+"\u0000"') ; printf '%s\n' "${a[@]}"
a\b
"b"
c
```
Upvotes: 0 <issue_comment>username_3: I'm assuming that you want to switch to using a null delimiter instead of \_ in order to increase reliability of your scripts. However, the safest way to read json elements is not by using the null delimiter since that is allowed json text according to RFC7159 ([page 8](https://www.rfc-editor.org/rfc/rfc7159#page-8)). E.g. if `["a","b","c"]` were to look like `["a","b\u0000","c"]` and you were to append the null char to each of the strings and parse these with a null delimiter, the "b" element would go into two separate bash array slots.
Instead, given that newlines are always escaped within json-strings when using e.g. `jq -c` I suggest relying on the part of the spec that says
>
> "A string begins and ends with quotation marks."
>
>
>
With that in mind we can define:
```
jsonStripQuotes(){ local t0; while read -r t0; do t0="${t0%\"}"; t0="${t0#\"}"; printf '%s\n' "$t0"; done < <(jq '.');}
```
And then, e.g.
```
echo '["a\u0000 b\n","b\nnn","c d"]' | jq .[] | jsonStripQuotes
```
..should safely print each json string on separate lines(expanded newline appended), with all newlines and null's within the strings escaped. After that I would do a read with IFS set to newline only:
```
while IFS=$'\n' read -r elem; do Arr+=("$elem"); done < <(echo '["a\u0000 b\n","b\nnn","c d"]' | jq .[] | jsonStripQuotes)
```
And then if you want to print them with newlines etc. expanded:
```
printf '%b' "${Arr[*]}"
```
I believe this is the most reliable way to parse json strings to a bash array.
Upvotes: 1
|
2018/03/16
| 954 | 3,788 |
<issue_start>username_0: **Problem**
We have a quite complex application, and we don't want each test case to go through the whole flow to reach a specific screen; instead we just want to jump to a specific one with some state stored in the redux store.
---
**What I've tried**
I made multiple initial states, each of which loads a specific screen so I can test it directly. For each detox run I load a different mocha.opts to select that portion of the test cases, and I used 'react-native-config' so I can load a different state in each run. For example, to load a screen I do the following:
1. Create initialState for redux store that has all the details of the screen I'm currently testing.
2. Create mocha.opts to run only this test case by specifying the -f flag in it.
3. Create .env.test.screenX file which will tell the store which initial state to load according to which ENVFILE I select.
4. Create different configuration to each screen in detox so it can load the correct mocha opts through the detox CLI.
5. Each time, I run the command `ENVFILE=env.test.screenX react-native run-ios` so the project is built using this configuration, and I can then run the `detox test -c` .
---
**Question**
My method is complex and requires a lot of setup and overhead to run tests for each screen, so I was wondering if anyone has had the same issue and how they solved it. In general, how can I deal with the react native thread in detox?<issue_comment>username_1: I think there is no way detox can communicate with the react native thread at runtime and change state, so I thought of a little hack that uses a mocking technique, as [Leo Natan](https://stackoverflow.com/users/983912/leo-natan) mentioned; it could be useful in your case.
You could mock your App.js file with a screen (App.e2e.js) that has some buttons with known testIDs. Each button dispatches all the actions needed to load a specific state into the store. You can start each test suite by pressing one of the buttons in the `beforeEach` method, then continue your normal test flow after that.
**for example:**
if you want to test a screen that is far away (requires too many clicks to reach when the user actually use the app) and requires authentication you could have the following structure:
App.e2e.js has 2 buttons:
* one for authentication that dispatch an action like `onAuthenticationSuccess(user, authToken)`
* another one for navigation to that screen `this.navigation.navigate("screenName")`
test.js
```
describe("Screen work as intended", () => {
beforeEach(async () => {
await device.reloadReactNative();
await element(by.id("authButtonID")).tap();
await element(by.id("navigateButtonID")).tap();
});
it("should do something", async () => {
//user is loaded in store
//current screen is the screen you want to test
});
});
```
Upvotes: 3 <issue_comment>username_2: If you are using Expo and it's [release-channels](https://docs.expo.io/versions/latest/distribution/release-channels/) for specifying the environment, here is what you could do:
1. Create a method `resetStorage` like suggested here:
[How to reset the state of a Redux store?](https://stackoverflow.com/questions/35622588/how-to-reset-the-state-of-a-redux-store) (which you could have already implemented in `logout`)
2. In App.js import `resetStorage` method
3. In App.js add: `import { Constants } from 'expo'`
4. Then add a button with `testID="resetStorageBtn"` to your `render` method which you can use for testing purposes and which won't be visible in production release-channel.
So `render` might look similar to this:
```
return (
{Constants.manifest.releaseChannel !== 'production' &&
(
resetStorage()} testID="resetStorageBtn">
Reset storage
)}
);
```
Upvotes: 0
|
2018/03/16
| 679 | 2,560 |
<issue_start>username_0: In Xcode it shows as in the image below. I am trying to push my code, but I don't know what I'm missing.
[](https://i.stack.imgur.com/KHoEM.png)
|
2018/03/16
| 723 | 2,726 |
<issue_start>username_0: I used to have a template string, read from a DB or a text file:
```cs
var template="Your order {0} has dispatched from {1}, ETA {2:HH:mm dd.MM.yyyy}";
```
And then used `string.Format(template, orderNo, warehouse, eta)`
to inject correct values.
Now, a business decision was made to change those templates to use more meaningful markers:
```cs
var template="Your order {orderNo} has dispatched from {warehouse}, ETA {eta:HH:mm dd.MM.yyyy}";
```
And now I'm not sure what would be the best way to inject values, taking into consideration that, for example, the `eta` field has formatting that is "hardcoded" into the template (so that we can use a different `DateTime` format depending on the language).
One idea was to load the template into a `StringBuilder`, replace the named variables with `{0}`, `{1}`, etc., and then use the old `string.Format` method. But that isn't really scalable, and it's a pain to write all the boilerplate.
```cs
var formatTemplate = new StringBuilder(template)
.Replace("{orderNo}", "{0}")
.Replace("{warehouse}", "{1}")
.Replace("{eta","{2")
.ToString();
return string.Format(formatTemplate, orderNo, warehouse, eta);
```
Is there a better way to do it? Maybe a way to read the template as an interpolated string? The string is read from an outside source so I cannot simply use the `$"{marker}"` mechanic.<issue_comment>username_1: Please check out [SmartFormat](https://github.com/scottrippey/SmartFormat.NET), which in effect will do what you want.
(You said that you don't want to replace the human-readable names with indexes.)
Example from their wiki:
String.Format references all args by index:
```
String.Format("{0} {1}", person.FirstName, person.LastName)
```
Smart.Format takes this a step further, and lets you use named placeholders instead:
```
Smart.Format("{FirstName} {LastName}", person)
```
In fact, Smart.Format supports several kinds of expressions:
```
Smart.Format("{FirstName.ToUpper} {LastName.ToLower}", person)
```
A working example requires the variables to be packed into an anonymous type:
```
var formatted = Smart.Format(template, new { orderNo, warehouse, eta });
```
which returns the correct, desired result, with the markers replaced. Without the anonymous type it didn't seem to work.
Upvotes: 3 [selected_answer]<issue_comment>username_2: You can use string interpolation if your framework supports it:
Have a look at this:
<https://learn.microsoft.com/en-us/dotnet/csharp/language-reference/keywords/interpolated-strings>
You can then update your code something like:
```
var orderNo = 1;
var warehouse = "Trinidad";
var template = $"Your order {orderNo} has dispatched from {warehouse}";
```
Upvotes: -1
|
2018/03/16
| 583 | 2,584 |
<issue_start>username_0: I have a very similar problem to this post: [Rest Assured doesn't use custom ObjectMapper](https://stackoverflow.com/questions/43127485/rest-assured-doesnt-use-custom-objectmapper)
However, I am using slightly different configurations/models and the provided answer did not solve my problem.
When running the mock mvc test, my custom ObjectMapper config is not getting picked up. I have tried attaching this config class to the RestAssuredMockMvc, but it still seems to not pick this up when the standalone setup fires.
Any ideas?
```
@RunWith(SpringRunner.class)
@SpringBootTest(classes = Application.class)
public abstract class BaseClassForContractTests {
@Autowired
Controller controller;
@Before
public void setup() {
RestAssuredMockMvc.config = RestAssuredMockMvcConfig.config().objectMapperConfig(
ObjectMapperConfig.objectMapperConfig().jackson2ObjectMapperFactory(new Jackson2ObjectMapperFactory() {
@SuppressWarnings("rawtypes")
@Override
public ObjectMapper create(Class cls, String charset) {
ObjectMapper objectMapper = new ObjectMapper();
objectMapper.registerModule(new JavaTimeModule());
objectMapper.disable(SerializationFeature.WRITE_DATES_AS_TIMESTAMPS);
//objectMapper.setTimeZone(TimeZone.getTimeZone("America/New_York"));
//objectMapper.configure(DeserializationFeature.ADJUST_DATES_TO_CONTEXT_TIME_ZONE, true);
return objectMapper;
}
})
);
RestAssuredMockMvc.standaloneSetup(controller);
}
}
```
<issue_comment>username_1: Yep, it was something stupid/simple. If anyone has this issue with no configs getting picked up, you should be using RestAssuredMockMvc.**webAppContextSetup**(context); instead of standaloneSetup. All you have to do is @Autowire a webAppContext.
Upvotes: 0 <issue_comment>username_2: I have set up the object mapper with `RestAssuredMockMvc.standaloneSetup` and it worked for me.
```
MappingJackson2HttpMessageConverter mappingJackson2HttpMessageConverter = new MappingJackson2HttpMessageConverter();
mappingJackson2HttpMessageConverter.setObjectMapper(new ObjectMapper());
RestAssuredMockMvc.standaloneSetup(
MockMvcBuilders.standaloneSetup(yourcontroller)
.setMessageConverters(mappingJackson2HttpMessageConverter)
);
```
[The full solution is described in this article](https://medium.com/@vicusbass/datetime-serialization-in-spring-tests-5d31ccd025c)
Upvotes: 1
|
2018/03/16
| 454 | 1,864 |
<issue_start>username_0: Created a blank ASP.NET Core 2.0 application.
In Startup.cs, I would like to log incoming requests, so in the `Configure` method I am using `Microsoft.Extensions.Logging.ILogger`:
```
public void Configure(IApplicationBuilder app, IHostingEnvironment env, ILogger logger)
{
app.Use(next =>
{
return async context =>
{
logger.LogInformation("Incoming request");
await next(context);
logger.LogInformation("Outgoing response");
};
});
```
However, when I run the project, it complains:
```
An error occurred while starting the application.
InvalidOperationException: No service for type 'Microsoft.Extensions.Logging.ILogger' has been registered.
```
Why and how should I register this service? Had it been my interface, it would have still made sense to do
```
services.AddScoped<IMyInterface, MyImplementation>();
```
in ConfigureServices<issue_comment>username_1: `ILogger` is always of a concrete category type; try changing it like this:
```
public void Configure(IApplicationBuilder app, IHostingEnvironment env, ILogger<Startup> logger)
{
app.Use(next =>
{
return async context =>
{
logger.LogInformation("Incoming request");
await next(context);
logger.LogInformation("Outgoing response");
};
});
```
Upvotes: 6 [selected_answer]<issue_comment>username_2: No service for type 'Microsoft.Extensions.Logging.ILogger' has been registered
This error means that you didn't provide a class name to your logger interface. For example, instead of:

> `ILogger logger`
```
public void Configure(IApplicationBuilder app, IHostingEnvironment env, ILogger logger)
{
```
You should use:

> `ILogger<Startup> logger`
```
public void Configure(IApplicationBuilder app, IHostingEnvironment env, ILogger<Startup> logger)
{
```
Upvotes: 0
|
2018/03/16
| 2,798 | 11,645 |
<issue_start>username_0: My string is like this:
```
string str = "Psppsp palm springs airport, 3400 e tahquitz canyon way, Palm springs, CA, US, 92262-6966 psppsp";
```
I get the string "psppsp" separately and need to compare it with the first and last word in str and if found (at first or last word), need to remove it from str.
I need to know the best and fastest way to do this.<issue_comment>username_1: The fastest way is O(n). Below is an example of the code; it can be improved.
```
string str = "Psppsp palm springs airport, 3400 e tahquitz canyon way, Palm springs, CA, US, 92262-6966 psppsp";
string word = "psppsp";
// Check if str and word are equals
if (str == word)
{
str = "";
}
// Check Firt word in str
if (str.Length > word.Length)
{
bool equal = true;
for (int i = 0; i < word.Length; i++)
{
if (str[i] != word[i])
{
equal = false;
break;
}
}
if (equal && str[word.Length] == ' ')
{
str = str.Substring(word.Length);
}
}
// Check Last word in str
if (str.Length > word.Length)
{
bool equal = true;
for (int i = word.Length - 1; i >= 0; i--)
{
if (str[str.Length - word.Length + i] != word[i])
{
equal = false;
break;
}
}
if (equal)
{
str = str.Substring(0, str.Length - word.Length);
}
}
```
Upvotes: 0 <issue_comment>username_2: There are a few ways to do this. Here's one way using Regex. You can pre-compile the regex which will speed things up if you are doing this on many strings:
```
string str = "Psppsp palm springs airport, 3400 e tahquitz canyon way, Palm springs, CA, US, 92262-6966 psppsp";
string match = "psppsp";
// Build 2 re-usable regexes
string pattern1 = "^" + match + "\\s*";
string pattern2 = "\\s*" + match + "$";
Regex rgx1 = new Regex(pattern1, RegexOptions.Compiled | RegexOptions.IgnoreCase);
Regex rgx2 = new Regex(pattern2, RegexOptions.Compiled | RegexOptions.IgnoreCase);
// Apply the 2 regexes
str = rgx1.Replace(rgx2.Replace(str, ""), "");
```
If there's no chance the match will be elsewhere in the string, you can use LINQ. This involves converting the array returned by `Split` to a list:
```
// Convert to list
var tempList = new List<string>(str.Split());
// Remove all occurences of match
tempList.RemoveAll(x => String.Compare(x, match, StringComparison.OrdinalIgnoreCase) == 0);
// Convert list back to string
str = String.Join(" ", tempList.ToArray());
```
Or, a simpler method:
```
if (str.StartsWith(match, StringComparison.InvariantCultureIgnoreCase)) {
str = str.Substring(match.Length);
}
if (str.EndsWith(match, StringComparison.InvariantCultureIgnoreCase)) {
str = str.Substring(0, str.Length - match.Length);
}
str = str.Trim();
```
Not sure which (if any) of these is "best". I like the last one.
Upvotes: 1 <issue_comment>username_3: You can use str.StartsWith(x), str.EndsWith(x), str.Contains(x), str.IndexOf(x) to find and locate your search string and str.Substring(start, len) to change the string. There are many ways you can achieve this string manipulation, but you asked for...
Best and fastest: Let's use some completely safe 'unsafe' code so we can work with pointers.
```
// note this is an extension method so you need to include it in a static class
public unsafe static string RemoveCaseInsensitive(this string source, string remove)
{
// convert both strings to lower to enable case insensitive comparison
string sourceLower = source.ToLower();
string removeLower = remove.ToLower();
// define working pointers
int srcPos = 0;
int srcLen = source.Length;
int dstPos = 0;
int rmvPos = 0;
int rmvLen = remove.Length;
// create char arrays to work with in the 'unsafe' code
char[] destChar = new char[srcLen];
fixed (char* srcPtr = source, srcLwrPtr = sourceLower, rmvPtr = removeLower, dstPtr = destChar)
{
// loop through each char in the source array
while (srcPos < srcLen)
{
// copy the char and move dest position on
*(dstPtr + dstPos) = *(srcPtr + srcPos);
dstPos++;
// compare source char to remove char
// note we're comparing against the sourceLower but copying from source so that
// a case insensitive remove preserves the rest of the string's original case
if (*(srcLwrPtr + srcPos) == *(rmvPtr + rmvPos))
{
rmvPos++;
if (rmvPos == rmvLen)
{
// if the whole string has been matched
// reverse dest position back by length of remove string
dstPos -= rmvPos;
rmvPos = 0;
}
}
else
{
rmvPos = 0;
}
// move to next char in source
srcPos++;
}
}
// return the string
return new string(destChar, 0, dstPos);
}
```
Usage:
```
str.RemoveCaseInsensitive("Psppsp"); // this will remove all instances throughout the string
str.RemoveCaseInsensitive("Psppsp "); // space included at the end so in your example will remove the first instance and trailing space.
str.RemoveCaseInsensitive(" psppsp"); // space included at the start so in your example will remove the final instance and leading space.
```
Why use unsafe code you may ask? When dealing with arrays, every time you point to an element in that array, a bounds check is done. So str[1], str[2], str[3], etc each has overhead. So when you're dealing with this sort of check on thousands of characters, it adds up. Using unsafe code enables accessing memory directly using pointers. There is no bounds check, or anything else slowing the operation down. The difference in performance can be massive.
As an example of the performance difference, I have created two versions of this. One safe using standard string pointers and the unsafe one. I have created a string by recursively adding thousands of copies of strings to keep and to remove. The results are clear, the unsafe version completes in half the time of the safe version. These methods are identical apart from being safe vs unsafe.
```
public static class StringExtensions
{
public unsafe static string RemoveUnsafe(this string source, string remove)
{
// convert both strings to lower to enable case insensitive comparison
string sourceLower = source.ToLower();
string removeLower = remove.ToLower();
// define working pointers
int srcPos = 0;
int srcLen = source.Length;
int dstPos = 0;
int rmvPos = 0;
int rmvLen = remove.Length;
// create char arrays to work with in the 'unsafe' code
char[] destChar = new char[srcLen];
fixed (char* srcPtr = source, srcLwrPtr = sourceLower, rmvPtr = removeLower, dstPtr = destChar)
{
// loop through each char in the source array
while (srcPos < srcLen)
{
// copy the char and move dest position on
*(dstPtr + dstPos) = *(srcPtr + srcPos);
dstPos++;
// compare source char to remove char
// note we're comparing against the sourceLower but copying from source so that
// a case insensitive remove preserves the rest of the string's original case
if (*(srcLwrPtr + srcPos) == *(rmvPtr + rmvPos))
{
rmvPos++;
if (rmvPos == rmvLen)
{
// if the whole string has been matched
// reverse dest position back by length of remove string
dstPos -= rmvPos;
rmvPos = 0;
}
}
else
{
rmvPos = 0;
}
// move to next char in source
srcPos++;
}
}
// return the string
return new string(destChar, 0, dstPos);
}
public static string RemoveSafe(this string source, string remove)
{
// convert to lower to enable case insensitive comparison
string sourceLower = source.ToLower();
string removeLower = remove.ToLower();
// define working pointers
int srcPos = 0;
int srcLen = source.Length;
int dstPos = 0;
int rmvPos = 0;
int rmvLen = remove.Length;
// create char arrays to work with in the 'unsafe' code
char[] destChar = new char[srcLen];
// loop through each char in the source array
while (srcPos < srcLen)
{
// copy the char and move dest position on
destChar[dstPos] = source[srcPos];
dstPos++;
// compare source char to remove char
// note we're comparing against the sourceLower but copying from source so that
// a case insensitive remove preserves the rest of the string's original case
if (sourceLower[srcPos] == removeLower[rmvPos])
{
rmvPos++;
if (rmvPos == rmvLen)
{
// if the whole string has been matched
// reverse dest position back by length of remove string
dstPos -= rmvPos;
rmvPos = 0;
}
}
else
{
rmvPos = 0;
}
// move to next char in source
srcPos++;
}
// return the string
return new string(destChar, 0, dstPos);
}
}
```
This is the benchmarking:
```
internal static class StringRemoveTests
{
private static string CreateString()
{
string x = "xxxxxxxxxxxxxxxxxxxx";
string y = "GoodBye";
StringBuilder sb = new StringBuilder();
for (int i = 0; i < 1000000; i++)
sb.Append(i % 3 == 0 ? y : x);
return sb.ToString();
}
private static int RunBenchMarkUnsafe()
{
string str = CreateString();
DateTime start = DateTime.Now;
string str2 = str.RemoveUnsafe("goodBYE");
DateTime end = DateTime.Now;
return (int)(end - start).TotalMilliseconds;
}
private static int RunBenchMarkSafe()
{
string str = CreateString();
DateTime start = DateTime.Now;
string str2 = str.RemoveSafe("goodBYE");
DateTime end = DateTime.Now;
return (int)(end - start).TotalMilliseconds;
}
public static void RunBenchmarks()
{
Console.WriteLine("Safe version: " + RunBenchMarkSafe());
Console.WriteLine("Unsafe version: " + RunBenchMarkUnsafe());
}
}
class Program
{
static void Main(string[] args)
{
StringRemoveTests.RunBenchmarks();
Console.ReadLine();
}
}
```
The output: (results are milliseconds)
```
// 1st run
Safe version: 569
Unsafe version: 260
// 2nd run
Safe version: 709
Unsafe version: 329
// 3rd run
Safe version: 486
Unsafe version: 279
```
Upvotes: 0
|
2018/03/16
| 825 | 2,730 |
<issue_start>username_0: There is `vector` of `shared_ptr` to base class.
```
struct Base
{
    virtual ~Base() = default;
};
struct Derived1 : Base
{
};
struct Derived2 : Base
{
};
std::vector<std::shared_ptr<Base>> v;
v.push_back(std::make_shared<Derived1>(Derived1()));
v.push_back(std::make_shared<Derived2>(Derived2()));
```
How can I make a copy of the `vector`?
Pointers of the copy must point to new objects.<issue_comment>username_1: Just copy it.
```
auto v2 = v;
```
The container holds shared pointers so they will be fine after a copy and still point to the same objects while keeping them alive. There's no problem here.
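To make that concrete, here is a small self-contained sketch (the `Widget` type is made up) showing that copying the vector copies only the `shared_ptr`s, so both vectors still refer to the same underlying objects:

```cpp
#include <cassert>
#include <memory>
#include <vector>

struct Widget { int value = 0; };

// Copy a vector of shared_ptr and report how many owners the first
// element has afterwards: the copy shares ownership with the original.
inline long owners_after_copy() {
    std::vector<std::shared_ptr<Widget>> v;
    v.push_back(std::make_shared<Widget>());

    auto v2 = v;                // copies the shared_ptrs, not the Widgets
    v2[0]->value = 42;          // a change made through the copy...
    assert(v[0]->value == 42);  // ...is visible through the original
    return v[0].use_count();    // 2: both vectors own the same Widget
}
```

Note that this is also why a plain copy does not satisfy the requirement in the question that the pointers point to new objects.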
Upvotes: 0 <issue_comment>username_2: Do you want a deep copy of a `shared_ptr` collection?
I will refer to the following article.
[Deep copy constructor with std::vector of smart pointers](https://stackoverflow.com/questions/23716018/deep-copy-constructor-with-stdvector-of-smart-pointers)
---
And this is the code I just wrote.
```
#include <algorithm>
#include <iostream>
#include <iterator>
#include <memory>
#include <string>
#include <vector>
using namespace std;
class Parent {
    string _name;
public:
    virtual shared_ptr<Parent> clone() const = 0;
    Parent(const string& name) { _name = name; }
    string getName() { return _name; }
    void setName(const string& name) { _name = name; }
};
class Child : public Parent {
public:
    Child(const string& name) : Parent(name) {}
    virtual shared_ptr<Parent> clone() const { return make_shared<Child>(*this); }
};
int main()
{
    vector<shared_ptr<Parent>> origins =
    {
        shared_ptr<Parent>(new Child("ant")),
        shared_ptr<Parent>(new Child("bee")),
        shared_ptr<Parent>(new Child("cat")),
    };
    vector<shared_ptr<Parent>> clones;
    // copy origins to clones
    transform(
        origins.begin(),
        origins.end(),
        back_inserter(clones),
        [](const shared_ptr<Parent>& ptr) -> shared_ptr<Parent> { return ptr->clone(); }
    );
    // modify values of origins
    for (const auto& origin : origins) { origin->setName(origin->getName() + "!"); }
    // print origins (modified)
    cout << "origins" << endl;
    for (const auto& origin : origins) { cout << origin->getName() << endl; }
    // print clones (not modified)
    cout << "clones" << endl;
    for (const auto& clone : clones) { cout << clone->getName() << endl; }
    return 0;
}
```
Upvotes: 2 <issue_comment>username_3: You should add a pure `virtual` member function like `clone` to `Base`. Implement it in your derived `class`es, then do something like this:
```
std::vector<std::shared_ptr<Base>> copy(std::vector<std::shared_ptr<Base>> const &input) {
    std::vector<std::shared_ptr<Base>> ret;
    ret.reserve(input.size());
    for (auto const &p : input) {
        ret.push_back(p->clone());
    }
    return ret;
}
```
That being said, this is a bad idea. You're breaking semantics such as direct assignment of `vector`s and copy constructors, since they won't do what users expect (assuming you actually need to make a new instance of each object).
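Putting the pieces together, a self-contained sketch of the `clone`-based deep copy described above (the type names are illustrative, not from the question):

```cpp
#include <cassert>
#include <memory>
#include <vector>

struct Base {
    virtual ~Base() = default;
    virtual std::shared_ptr<Base> clone() const = 0;
    int value = 0;
};

struct Derived : Base {
    std::shared_ptr<Base> clone() const override {
        return std::make_shared<Derived>(*this);  // copies the dynamic type
    }
};

// Deep copy: every element of the result points to a freshly cloned object.
inline std::vector<std::shared_ptr<Base>> deep_copy(
        const std::vector<std::shared_ptr<Base>>& input) {
    std::vector<std::shared_ptr<Base>> ret;
    ret.reserve(input.size());
    for (const auto& p : input) ret.push_back(p->clone());
    return ret;
}
```

After `auto clones = deep_copy(v);`, mutating an element of `v` leaves the corresponding element of `clones` untouched, and each pointer's `use_count()` stays at 1.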
Upvotes: 2
|
2018/03/16
| 1,125 | 3,038 |
<issue_start>username_0: How can I split URIPATHPARAM in a grok filter?
Here is my grok pattern.
```
grok {
match => ["message", "%{TIMESTAMP_ISO8601:timestamp} %{NOTSPACE:loadbalancer} %{IP:client_ip}:%{NUMBER:client_port:int} (?:%{IP:backend_ip}:%{NUMBER:backend_port:int}|-) %{NUMBER:request_processing_time:float} %{NUMBER:backend_processing_time:float} %{NUMBER:response_processing_time:float} (?:%{NUMBER:elb_status_code:int}|-) (?:%{NUMBER:backend_status_code:int}|-) %{NUMBER:received_bytes:int} %{NUMBER:sent_bytes:int} \"(?:%{WORD:verb}|-) (?:%{GREEDYDATA:request}|-) (?:HTTP/%{NUMBER:httpversion}|-( )?)\" \"%{DATA:userAgent}\"( %{NOTSPACE:ssl_cipher} %{NOTSPACE:ssl_protocol})?"]
}
grok {
match => [ "request", "%{URIPROTO:http_protocol}://(?:%{USER:user}(?::[^@]*)?@)?(?:%{URIHOST:refhost})?(?:%{URIPATHPARAM:uri_param})?" ]
}
```
Values coming in `uri_param`:
```
/a1/post/abcxyz/data/adfs/
/partner/uc/article/adafdf?adfaf
```
I want to capture the first three segments of the above URLs in a separate field, e.g.
>
> /a1/post/abcxyz
>
>
> /partner/uc/article
>
>
><issue_comment>username_1: Use the grok pattern below on the `uri_param` field:
```
%{THREESTRINGS:newField}
```
where the custom pattern for THREESTRINGS is
```
THREESTRINGS \/\b\w+\b\/\b\w+\b\/\b\w+\b
```
Upvotes: 0 <issue_comment>username_2: ```
grok {
match => ["message", "%{TIMESTAMP_ISO8601:timestamp} %{NOTSPACE:loadbalancer} %{IP:client_ip}:%{NUMBER:client_port:int} (?:%{IP:backend_ip}:%{NUMBER:backend_port:int}|-) %{NUMBER:request_processing_time:float} %{NUMBER:backend_processing_time:float} %{NUMBER:response_processing_time:float} (?:%{NUMBER:elb_status_code:int}|-) (?:%{NUMBER:backend_status_code:int}|-) %{NUMBER:received_bytes:int} %{NUMBER:sent_bytes:int} \"(?:%{WORD:verb}|-) (?:%{GREEDYDATA:request}|-) (?:HTTP/%{NUMBER:httpversion}|-( )?)\" \"%{DATA:userAgent}\"( %{NOTSPACE:ssl_cipher} %{NOTSPACE:ssl_protocol})?"]
}
grok {
match => [ "request", "%{URIPROTO:http_protocol}://(?:%{USER:user}(?::[^@]*)?@)?(?:%{URIHOST:refhost})?(?:%{URIPATHPARAM:uri_param})?" ]
}
if [uri_param] {
mutate {
split => { "uri_param" => "/"}
add_field => { "uri_param_1" => "%{[uri_param][1]}" }
add_field => { "uri_param_2" => "%{[uri_param][2]}" }
add_field => { "uri_param_3" => "%{[uri_param][3]}" }
}
}
```
Alternatively, you could just grab these three params from grok itself, like:
```
grok {
match => [ "request", "%{URIPROTO:http_protocol}://(?:%{USER:user}(?::[^@]*)?@)?(?:%{URIHOST:refhost})?(?:/%{WORD:uri_param_1}/%{WORD:uri_param_2}/%{WORD:uri_param_3}/%{GREEDYDATA:other_params})?" ]
}
```
To join them again, as you asked, you can simply use the mutate filter:
```
mutate {
add_field => { "uri_param" => "/%{[uri_param_1]}/%{[uri_param_2]}/%{[uri_param_3]}/%{[other_params]}"}
}
```
I hope that will work, just test it out, and let me know if that worked for you or not.
Upvotes: 3 [selected_answer]
|
2018/03/16
| 724 | 2,718 |
<issue_start>username_0: 
In the above screen, you can see I am using a `UIStackView` to fill radio buttons vertically. The problem is that my radio buttons are not utilising the full width of the `UIStackView`: when I use `stackV.alignment = .leading`, it shows the label as "dis..lified" instead of "disqualified".
**UIStackView Code**
```
let ratingStackView : UIStackView = {
let stackV = UIStackView()
stackV.translatesAutoresizingMaskIntoConstraints = false
stackV.backgroundColor = UIColor.yellow
stackV.axis = .vertical
stackV.distribution = .fillEqually
stackV.alignment = .leading
return stackV
}()
```
**Layout of UIStackView**
```
func setupView(){
view.addSubview(ratingStackView)
ratingStackView.topAnchor.constraint(equalTo: view.safeAreaLayoutGuide.topAnchor).isActive = true
ratingStackView.leadingAnchor.constraint(equalTo: view.safeAreaLayoutGuide.leadingAnchor,constant: 8).isActive = true
ratingStackView.trailingAnchor.constraint(equalTo: view.safeAreaLayoutGuide.trailingAnchor).isActive = true
ratingStackView.heightAnchor.constraint(equalToConstant: 200).isActive = true
//Add radio buttons to stackview
for ratingButton in ratingRadioButtons{
ratingStackView.addArrangedSubview(ratingButton)
}
}
```
Can you please tell me what property I need to set to utilise the full width? I am new to Swift; for the radio buttons I am using `DLRadioButton`.<issue_comment>username_1: Leave `alignment` at its default value, i.e. `.fill` – this stretches arranged views in the direction perpendicular to the stack's axis.
Actually, I suspect that if you are using `.leading` alignment and do not specify widths of nested controls you are getting auto layout warnings during runtime (could be checked in Visual Debugger in Xcode).
Upvotes: 2 <issue_comment>username_2: Try proportional distribution.
One more thing to try: reduce the content hugging priority of the labels.
Upvotes: 1 <issue_comment>username_3: To get this working, you need to make the following changes in the layout:
**1.** Set `UIStackView's` `alignment` property to `fill`, i.e.
```
stackV.alignment = .fill
```
**2.** Set `UIButton's` `Horizontal Alignment` to `left` wherever you are creating the `RadioButton` either in `.xib` file or through code.
In `.xib`, you can find the property in interface here:
[](https://i.stack.imgur.com/xHXCl.png)
if you are creating the button using code, use the following line of code:
```
ratingButton.contentHorizontalAlignment = .left
```
Let me know if you still face the issue. Happy coding..
Upvotes: 6 [selected_answer]
|