date | nb_tokens | text_size | content
---|---|---|---|
2018/03/15
| 1,656 | 6,226 |
<issue_start>username_0: Is there a way to get the ISO ALPHA-2 code (country code) from a **country name** such as United Kingdom = GB?
I'm trying to achieve nearly the opposite of the code below.
```
//To get the Country Names from the CultureInfo
foreach (CultureInfo cul in CultureInfo.GetCultures(CultureTypes.SpecificCultures))
{
country = new RegionInfo(new CultureInfo(cul.Name, false).LCID);
countryNames.Add(country.DisplayName.ToString());
}
```<issue_comment>username_1: You can do something like this:
```
var regions = CultureInfo.GetCultures(CultureTypes.SpecificCultures).Select(x => new RegionInfo(x.LCID));
var englishRegion = regions.FirstOrDefault(region => region.EnglishName.Contains("United Kingdom"));
var countryAbbrev = englishRegion.TwoLetterISORegionName;
```
Upvotes: 4 <issue_comment>username_2: In the above you are stripping only the DisplayName from the "country"...
You should be able to get the code from that country value as well since you are looping through each culture and getting the country from the culture name.
Instead of a list of names, why not make a Dictionary of (code, DisplayName)?
You can easily look up a value (display name) from a key (code) or a key (code) from a value (display name), and you have the benefit of maintaining only one collection.
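For illustration, a minimal sketch of that idea (untested; the LINQ search for name → code is just one option) could look like:
```
using System.Collections.Generic;
using System.Globalization;
using System.Linq;

// Build a code -> display-name map once; assigning via the indexer tolerates
// the duplicate region codes produced by multiple cultures per country.
var countriesByCode = new Dictionary<string, string>();
foreach (CultureInfo cul in CultureInfo.GetCultures(CultureTypes.SpecificCultures))
{
    RegionInfo region = new RegionInfo(cul.Name);
    countriesByCode[region.TwoLetterISORegionName] = region.EnglishName;
}

// Key -> value is a direct lookup; value -> key is a linear search over values.
string name = countriesByCode["GB"];                                                  // "United Kingdom"
string code = countriesByCode.FirstOrDefault(kv => kv.Value == "United Kingdom").Key; // "GB"
```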
Upvotes: 0 <issue_comment>username_3: This Class is used to collect information on Countries and related Cultures.
Its Constructor lets you choose if you want to load all possible Countries or just the Specific Cultures:
```
CountryList countries = new CountryList([true/false]);
```
where `true` means `CultureTypes.AllCultures` and `false` means `CultureTypes.SpecificCultures`
For example, with these parameters:
```
CountryList countries = new CountryList(true);
List countryInfo = countries.GetCountryInfoByName("United States", false);
```
(`true`/`false` in `GetCountryInfoByName()` means use/don't use the Native Name)
this method returns three results:
1 Country - United States
3 Cultures - English, English (United States), Spanish (United States)
Using the Native Name:
```
List<CountryInfo> countryInfo = countries.GetCountryInfoByName("United States", true);
```
1 Country - United States
2 Cultures - English, English (United States)
With Specific Cultures and Native Names:
```
CountryList countries = new CountryList(false);
List countryInfo = countries.GetCountryInfoByName("United States", true);
```
1 Country - United States
1 Culture - English (United States)
More related to your question, this Class exposes these methods:
```
string twoLettersName = countries.GetTwoLettersName("United States", true);
```
Returns `US`
```
string threeLettersName = countries.GetThreeLettersName("United States", true);
```
Returns `USA`
```
List<string?> ietfTags = countries.GetIetfLanguageTag("United States", true);
```
Returns `en-US`
```
List<int?> geoIds = countries.GetRegionGeoId("United States", true);
```
Returns `244`
```
public class CountryList {
    private CultureTypes cultureType;

    public CountryList(bool AllCultures)
    {
        cultureType = AllCultures ? CultureTypes.AllCultures : CultureTypes.SpecificCultures;
        Countries = GetAllCountries(cultureType);
    }

    public List<CountryInfo> Countries { get; set; }

    public List<CountryInfo> GetCountryInfoByName(string CountryName, bool NativeName)
    {
        return NativeName ? Countries.Where(info => info.Region?.NativeName == CountryName).ToList()
                          : Countries.Where(info => info.Region?.EnglishName == CountryName).ToList();
    }

    public List<CountryInfo> GetCountryInfoByName(string CountryName, bool NativeName, bool IsNeutral)
    {
        return NativeName ? Countries.Where(info => info.Region?.NativeName == CountryName &&
                                                    info.Culture?.IsNeutralCulture == IsNeutral).ToList()
                          : Countries.Where(info => info.Region?.EnglishName == CountryName &&
                                                    info.Culture?.IsNeutralCulture == IsNeutral).ToList();
    }

    public string? GetTwoLettersName(string CountryName, bool NativeName)
    {
        CountryInfo? country = NativeName ? Countries.Where(info => info.Region?.NativeName == CountryName).FirstOrDefault()
                                          : Countries.Where(info => info.Region?.EnglishName == CountryName).FirstOrDefault();
        return country?.Region?.TwoLetterISORegionName;
    }

    public string? GetThreeLettersName(string CountryName, bool NativeName)
    {
        CountryInfo? country = NativeName ? Countries.Where(info => info.Region?.NativeName == CountryName).FirstOrDefault()
                                          : Countries.Where(info => info.Region?.EnglishName == CountryName).FirstOrDefault();
        return country?.Region?.ThreeLetterISORegionName;
    }

    public List<string?>? GetIetfLanguageTag(string CountryName, bool UseNativeName)
    {
        return UseNativeName ? Countries.Where(info => info.Region?.NativeName == CountryName)
                                        .Select(info => info.Culture?.IetfLanguageTag).ToList()
                             : Countries.Where(info => info.Region?.EnglishName == CountryName)
                                        .Select(info => info.Culture?.IetfLanguageTag).ToList();
    }

    public List<int?>? GetRegionGeoId(string CountryName, bool UseNativeName)
    {
        return UseNativeName ? Countries.Where(info => info.Region?.NativeName == CountryName)
                                        .Select(info => info.Region?.GeoId).ToList()
                             : Countries.Where(info => info.Region?.EnglishName == CountryName)
                                        .Select(info => info.Region?.GeoId).ToList();
    }

    private static List<CountryInfo> GetAllCountries(CultureTypes cultureTypes)
    {
        List<CountryInfo> countries = new List<CountryInfo>();
        foreach (CultureInfo culture in CultureInfo.GetCultures(cultureTypes)) {
            if (culture.LCID != 127)
                countries.Add(new CountryInfo() {
                    Culture = culture,
                    Region = new RegionInfo(culture.TextInfo.CultureName)
                });
        }
        return countries;
    }
}

public class CountryInfo {
    public CultureInfo? Culture { get; set; }
    public RegionInfo? Region { get; set; }
}
```
Upvotes: 4 [selected_answer]<issue_comment>username_4: You can use my package [`Nager.Country`](https://www.nuget.org/packages/Nager.Country). It includes functionality that makes this process easy. It is also possible to search for the country by its Spanish name, e.g. 'Reino Unido'.
```
PM> install-package Nager.Country
```
```cs
ICountryProvider countryProvider = new CountryProvider();
var countryInfo = countryProvider.GetCountryByName("United Kingdom");
//countryInfo.Alpha2Code = GB
```
Upvotes: 1
|
2018/03/15
| 637 | 2,488 |
<issue_start>username_0: I am trying to look at the running mean and running variance of a trained tensorflow model that is exported via GCMLE (`saved_model.pb`, `assets/*` & `variables/*`). Where are these values kept in the graph? I can access gamma/beta values from `tf.GraphKeys.TRAINABLE_VARIABLES` but I have not been able to find the running mean and running variance in any of the `tf.GraphKeys.MODEL_VARIABLES`. Are the running mean and running variance stored somewhere else?
I know that at test time (ie. `Modes.EVAL`), the running mean and running variance are used to normalize the incoming data, then the normalized data is scaled and shifted using gamma and beta. I am trying to look at all of the variables that I need at inference time, but I cannot find the running mean and running variance. Are these only used at test time and not at inference time (`Modes.PREDICT`)? If so, that would explain why I can't find them in the exported model, but I am expecting them to be there.
Based on [tf.GraphKeys](https://www.tensorflow.org/api_docs/python/tf/GraphKeys) I have tried other things like `tf.GraphKeys.MOVING_AVERAGE_VARIABLES` but they are also empty. I also saw this line in the batch\_normalization documentation "Note: when training, the moving\_mean and moving\_variance need to be updated. By default the update ops are placed in `tf.GraphKeys.UPDATE_OPS`, so they need to be added as a dependency to the train\_op." So I then tried looking at `tf.GraphKeys.UPDATE_OPS` from my saved model, and they contain an assign op `batch_normalization/AssignMovingAvg:0`, but it is still not clear where I would get the value from.<issue_comment>username_1: It appears that the moving mean and moving variance are stored within `tf.GraphKeys.GLOBAL_VARIABLES`. It looks like the reason nothing showed up in `MODEL_VARIABLES` is that you need to use `tf.contrib.framework.local_variable`.
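For example, a quick way to confirm this from a restored graph (a minimal sketch using the TF 1.x API from the question) is to filter the global variables collection by name:
```
import tensorflow as tf

# The batch-norm statistics live in the global variables collection,
# so filter that collection by variable name.
global_vars = tf.get_collection(tf.GraphKeys.GLOBAL_VARIABLES)
bn_stats = [v for v in global_vars
            if "moving_mean" in v.name or "moving_variance" in v.name]
print(bn_stats)
```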
Upvotes: 2 [selected_answer]<issue_comment>username_2: In addition to #username_1's answer,
**if you'd like to take out the moving\_mean, moving\_variance for BatchNorm,**
you can index them with names as follows.
```
vars = tf.global_variables() # shows every variable being used.
vars_moving_mean_variance = []
for var in vars:
if ("moving_mean" in var.name) or ("moving_variance" in var.name):
vars_moving_mean_variance.append(var)
print(vars_moving_mean_variance)
```
---
p.s. Thanks for the question and the answer. I solved my own problem too.
Upvotes: 0
|
2018/03/15
| 346 | 1,228 |
<issue_start>username_0: I would like to remove the top left back button.
I tried to check the "Full screen" option, but the arrow is still there.
I want to remove the back button because I have a button next to this button, and I don't want the user to tap on it by mistake.
Thx
[](https://i.stack.imgur.com/kuuh8.png)
|
2018/03/15
| 686 | 2,521 |
<issue_start>username_0: **pretext:**
ChestType is an enum
```
public enum ChestType
{
COMMON,
GEMNGOLDCHEST,
GODLYCHEST,
ASSCHEST
}
```
slotAndChestIndex is a ChestType?[]
chestSlotsAndChestSaved is an int[]
(Sorry for the bad names)
```
slotAndChestIndex[i] = (ChestType?)chestSlotsAndChestSaved[i] ?? null;
```
I believe the above line says:
"If I cant cast this int to a ChestType? then express it as null"
ChestType? is however being set to a value of -1 which seems weird to me.
Am I doing something wrong here? Even when I try to set it to `default(ChestType?)` it still sets it to -1 lol. I thought the default of any Nullable type is null.
Please tell me what I'm not understanding!<issue_comment>username_1: You can't validate if it is defined within your enum that way; it'll just assign the value of your variable `chestSlotsAndChestSaved[i]` to it (even though it's `-1` and it's not defined in your enum).
The way you can verify this is with:
```
slotAndChestIndex[i] = (ChestType?)(Enum.IsDefined(typeof(ChestType), chestSlotsAndChestSaved[i]) ? chestSlotsAndChestSaved[i] : (int?)null);
```
PS: I haven't tested the code; even so, `Enum.IsDefined(typeof(ChestType), chestSlotsAndChestSaved[i])` is your way to go.
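Unrolled for readability, the same check might look like this (untested sketch):
```
// Only cast to the enum when the raw value is actually a defined member.
int saved = chestSlotsAndChestSaved[i];
bool defined = Enum.IsDefined(typeof(ChestType), saved);
slotAndChestIndex[i] = defined ? (ChestType?)saved : null;
```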
Upvotes: 3 [selected_answer]<issue_comment>username_2: Neither the `??` operator nor the cast you're doing do quite what you think.
To start with, the cast you've used will never produce a null value - casting from an int to (most) enums will simply produce a value of the type of the enum, but potentially with a numeric value, not one of the enum members. The fact that you cast to `ChestType?` rather than `ChestType` doesn't change that. In addition, if a direct cast like you've shown can't be performed, an exception is raised, and the result is still not null. This should never happen for literal conversions like this, but could happen when casting between classes.
Next, the `??` operator, also known as the null-coalescing operator, evaluates to the left operand if the left operand is not null, and evaluates to the right operand if the left operand is null. So if `chestSlotsAndChestSaved[i]` was equal to `null`, the expression would become the right hand operand - however since the right operand is always `null`, that part of the expression effectively does nothing.
Overall, the most likely reason that `slotAndChestIndex[i]` is coming back as -1 is because `chestSlotsAndChestSaved[i]` is -1.
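A quick repro (a minimal sketch reusing the question's `ChestType` enum) shows the behaviour:
```
// Casting an out-of-range int to a nullable enum neither throws nor yields null;
// the variable simply carries the raw numeric value.
int raw = -1;
ChestType? chest = (ChestType?)raw;
Console.WriteLine(chest);          // -1
Console.WriteLine(chest.HasValue); // True - it is not null
```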
Upvotes: 2
|
2018/03/15
| 1,009 | 3,335 |
<issue_start>username_0: I am trying to filter through an array of objects to find all values with an image extension then push the values found into their own array.
Example: `imageArray = ["steve.jpg", "funimage1.jpg", "coolimage2.png","greatimage3.svg", "jimmysavatar.jpg" ...]`.
Here is a jsfiddle to test: <https://jsfiddle.net/25pmwsee/>
```js
const myArray = [{
"prepend": false,
"name": "steve",
"avatar": "steve.jpg",
"imgs": [
"funimage1.jpg",
"coolimage2.png",
"greatimage3.svg"
]
},
{
"prepend": false,
"name": "jimmy",
"avatar": "jimmysavatar.jpg",
"imgs": [
"realimage1.jpg",
"awesomeimage2.png",
"coolimage3.svg"
]
}]
const extensions = [".jpg", ".png", ".svg"];
let imageArray = [];
// search in array for extension then push key to array
for (let i = 0; i < extensions.length; i++) {
if ( extensions[i] in myArray ) {
imageArray.push(image)
}
}
```<issue_comment>username_1: ```
// regular expression to match a file extension and capture it
const extensionRegex = /\.([a-z]+)$/
// map of allowed extensions; indexing by any not listed will be falsy (undefined)
const allowedExtensions = {
'jpg': true,
'png': true,
'svg': true
}
// grab each user's avatar
let avatars = myArray.map(function (user) {
return user.avatar
})
// takes all the imgs arrays and flatten them down to an array of strings
let imgs = myArray.map(function (user) {
return user.imgs
}).reduce(function (flattened, images) {
return flattened.concat(images)
}, [])
avatars.concat(imgs).forEach(function (imageName) {
// if imageName is undefined or empty, use empty string instead;
// match() returns null when there is no extension, so guard before indexing
let match = (imageName || '').match(extensionRegex)
let extension = match ? match[1] : ''
if (allowedExtensions[extension]) {
imageArray.push(imageName);
}
});
```
Some reference links:
[How do you access the matched groups in a JavaScript regular expression?](https://stackoverflow.com/questions/432493/how-do-you-access-the-matched-groups-in-a-javascript-regular-expression)
<https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Array/Reduce>
Upvotes: 0 <issue_comment>username_2: Try this: I iterate through the object, recurse into any property that is itself an object, and add any string value that ends with an image extension.
```js
const myArray = [{
"prepend": false,
"name": "steve",
"avatar": "steve.jpg",
"imgs": [
"funimage1.jpg",
"coolimage2.png",
"greatimage3.svg"
]
},
{
"prepend": false,
"name": "jimmy",
"avatar": "jimmysavatar.jpg",
"imgs": [
"realimage1.jpg",
"awesomeimage2.png",
"coolimage3.svg"
]
}]
const extensions = [".jpg", ".png", ".svg"];
let imageArray = [];
// search in array for extension then push key to array
function iterate(obj){
for(var x in obj){
//console.log(typeof(obj[x]));
if(typeof(obj[x])==='object'){
iterate(obj[x]);
}
else if (obj.hasOwnProperty(x)){
extensions.forEach(function(e){
if(typeof(obj[x])==='string' && obj[x].endsWith(e))
imageArray.push(obj[x]);
})
}
}
}
myArray.forEach(function(x){iterate(x)})
console.log(imageArray);
```
Upvotes: 2 [selected_answer]
|
2018/03/15
| 1,582 | 5,683 |
<issue_start>username_0: ```
my_dic={"Vajk":"vékony","Bibi":'magas'}
my_dic['Bibi']
'magas'
```
We can see that we need the quotes to refer to the value for key 'Bibi'. But if I want to use the same format in .format(), it gives this error:
```
print("úgy érzem, hogy Bibi nagyon {0['Bibi']}".format(my_dic))
Traceback (most recent call last):
File "", line 1, in
print("úgy érzem, hogy Bibi nagyon {0['Bibi']}".format(my\_dic))
KeyError: "'Bibi'"
```
I have to use the reference without quotes; then it works.
```
print("úgy érzem, hogy Bibi nagyon {0[Bibi]}".format(my_dic))
úgy érzem, hogy Bibi nagyon magas
```
Why doesn't the first work, and why does the second? It should be the opposite, first should work, and second shouldn't.<issue_comment>username_1: >
> It should be the opposite, first should work, and second shouldn't.
>
>
>
No, it shouldn't, because `'Bibi'` inside a double quoted string is a quoted string, not just `Bibi`. You can check this simply as follows:
```
In [51]: "'Bibi'" == "Bibi"
Out[51]: False
```
If the key in the first case were `"'Bibi'"`, then it would work perfectly:
```
In [49]: my_dic={"Vajk":"vékony","'Bibi'":'magas'}
In [50]: print("úgy érzem, hogy Bibi nagyon {0['Bibi']}".format(my_dic))
úgy érzem, hogy Bibi nagyon magas
```
The reason it doesn't accept `"Bibi"` in the first case and doesn't give you the expected result is that Python looks up exactly what is between the brackets in the dictionary, and in this case you have `"'Bibi'"` inside the brackets, not `'Bibi'`.
Upvotes: 2 <issue_comment>username_2: First, some terminology:
* `'Bibi'` is a *string literal*, syntax to create a string value. Your keys are strings, and using a string literal you can specify one of those keys.
You could use a variable instead; assign as string value to a variable and use the variable to get an item from your dictionary:
```
foo = 'Bibi' # or read this from a file, or from a network connection
print(my_dic[foo]) # -> magas
```
* In string formatting, the `{...}` are *replacement fields*. The `str.format()` method provides values for the replacement fields.
* The `0[Bibi]` part in the `{...}` replacement field syntax is the *field name*. When you use `[...]` in a field name, it is a compound field name (there are multiple parts). The `[...]` syntax is usually referred to as indexing.
The format used in field names is deliberately kept simple, and is only Python-*like*. The syntax is simplified to limit what it can be used for, to constrain the functionality to something that is *usually* safe to use.
As such, if you use a compound name with getitem `[...]` indexing syntax, names are treated as strings, but you don't use quotes to create the string. You could not pass in a variable name anyway, there is no need to contrast between `'name'` (a string) and `name` (a variable).
In other words, in a Python expression, `my_dic[foo]` works by looking up the value of the variable `foo`, and that is a different concept from using a string literal like `my_dic['Bibi']`. But you *can't use variables in a field name*, in a `str.format()` operation, using `{0[foo]}` should never find the variable `foo`.
The original [proposal to add the feature](https://www.python.org/dev/peps/pep-3101/#simple-and-compound-field-names) explains this as:
>
> Unlike some other programming languages, you cannot embed arbitrary expressions in format strings. This is by design - the types of expressions that you can use is deliberately limited. Only two operators are supported: the '.' (getattr) operator, and the '[]' (getitem) operator. The reason for allowing these operators is that they don't normally have side effects in non-pathological code.
>
>
>
and
>
> It should be noted that the use of 'getitem' within a format string is much more limited than its conventional usage. In the above example, the string 'name' really is the literal string 'name', not a variable named 'name'. The rules for parsing an item key are very simple. If it starts with a digit, then it is treated as a number, otherwise it is used as a string.
>
>
>
Keeping the syntax simple makes templates more secure. From the [*Security Considerations* section](https://www.python.org/dev/peps/pep-3101/#security-considerations):
>
> Barring that, the next best approach is to ensure that string formatting has no side effects. Because of the open nature of Python, it is impossible to guarantee that any non-trivial operation has this property. What this PEP does is limit the types of expressions in format strings to those in which visible side effects are both rare and strongly discouraged by the culture of Python developers.
>
>
>
Permitting `{0[foo]}` to look up variables in the current scope could easily produce side effects, while treating `foo` as a string instead means you can know, with certainty, that it'll always be the string `'foo'` and not something else.
If you are *only* using string literals and not dynamic values (so `'some {} template'.format(...)` and not `some_variable.format(...)`), and you are using Python 3.6 or newer, you can use [*formatted string literals*](https://docs.python.org/3/whatsnew/3.6.html#whatsnew36-pep498) instead, as you can then use full Python expressions:
```
print(f"úgy érzem, hogy Bibi nagyon {my_dic['Bibi']}")
```
In an `f` string, you use actual Python expressions instead of field names, so you use quotes again to pass in a string literal. Because they are string literals, they are evaluated right where they are defined, and you as a developer can see what local variables are available, so presumably you know how to keep that secure.
Upvotes: 4 [selected_answer]
|
2018/03/15
| 3,402 | 10,713 |
<issue_start>username_0: I have created a server.keystore and then a client.keyStore with a client.crt which i used to client.truststore
the server.keystore with alias devmyserverkey
```
/myserver_opt/jdk1.8.0_latest/jre/bin/keytool -genkey -alias devmyserverkey -storetype pkcs12 -keyalg RSA -keysize 2048 -keystore myserver.keystore -validity 730 -storepass <PASSWORD> -dname "CN=dev.myserver.com, OU=CMJAVA, O=myserver, L=City, ST=State, C=Country" -keypass <PASSWORD>
```
the client.keystore with alias devclientkey
```
/myserver_opt/jdk1.8.0_latest/jre/bin/keytool -genkey -keystore client.keystore -storepass <PASSWORD> -keyalg RSA -keysize 2048 -storetype pkcs12 -alias devclientkey -dname "CN=dev.myserver.com, OU=CMJAVA, O=myserver, L=City, ST=State, C=Country"
```
then the client crt with alias devclientkey
```
/myserver_opt/jdk1.8.0_latest/jre/bin/keytool -exportcert -keystore client.keystore -storetype pkcs12 -storepass <PASSWORD> -keypass <PASSWORD> -file client.crt -alias devclientkey
```
then the client truststore
```
/myserver_opt/jdk1.8.0_latest/jre/bin/keytool -import -file client.crt -keystore client.truststore
```
then the PKCS12 keystore
```
/myserver_opt/jdk1.8.0_latest/jre/bin/keytool -importkeystore -srckeystore client.keystore -destkeystore clientCert.p12 -srcstoretype PKCS12 -deststoretype PKCS12 -deststorepass <PASSWORD>
```
The client.truststore and server.keystore are in the configuration directory on my WildFly instance, and when I try to access my application I get the following:
```
2018-03-16 08:23:18,177 TRACE [org.jboss.security] (default task-28) PBOX00200: Begin isValid, principal: org.wildfly.extension.undertow.security.AccountImpl$AccountPrincipal@3e1567dc, cache entry: null 2018-03-16 08:23:18,177 TRACE [org.jboss.security] (default task-28) PBOX00209: defaultLogin, principal: org.wildfly.extension.undertow.security.AccountImpl$AccountPrincipal@3e1567dc 2018-03-16 08:23:18,178 TRACE [org.jboss.security] (default task-28) PBOX00221: Begin getAppConfigurationEntry(mygenwebservicessecurity), size: 6 2018-03-16 08:23:18,179 TRACE [org.jboss.security] (default task-28) PBOX00224: End getAppConfigurationEntry(mygenwebservicessecurity), AuthInfo: AppConfigurationEntry[]: [0] LoginModule Class: org.jboss.security.auth.spi.BaseCertLoginModule ControlFlag: LoginModuleControlFlag: required Options: name=securityDomain, value=mygenwebservicessecurity
2018-03-16 08:23:18,181 TRACE [org.jboss.security] (default task-28) PBOX00236: Begin initialize method 2018-03-16 08:23:18,192 TRACE [org.jboss.security] (default task-28) PBOX00245: Found security domain: org.jboss.security.JBossJSSESecurityDomain 2018-03-16 08:23:18,192 TRACE [org.jboss.security] (default task-28) PBOX00239: End initialize method 2018-03-16 08:23:18,192 TRACE [org.jboss.security] (default task-28) PBOX00240: Begin login method 2018-03-16 08:23:18,192 TRACE [org.jboss.security] (default task-28) PBOX00240: Begin login method 2018-03-16 08:23:18,192 TRACE [org.jboss.security] (default task-28) PBOX00252: Begin getAliasAndCert method 2018-03-16 08:23:18,193 TRACE [org.jboss.security] (default task-28) PBOX00253: Found certificate, serial number: 13e04227, subject DN: CN=dev.myserver.com, OU=CMJAVA, O=myserver, L=City, ST=State, C=Country 2018-03-16 08:23:18,193 TRACE [org.jboss.security] (default task-28) PBOX00255: End getAliasAndCert method 2018-03-16 08:23:18,193 TRACE [org.jboss.security] (default task-28) PBOX00256: Begin validateCredential method 2018-03-16 08:23:18,201 TRACE [org.jboss.security] (default task-28)
PBOX00056: Supplied credential: 13e04227
CN=dev.myserver.com, OU=CMJAVA, O=myserver, L=City, ST=State, C=Country
PBOX00057: Existing credential: PBOX00058: No match for alias CN=dev.myserver.com, OU=CMJAVA, O=myserver, L=City, ST=State, C=Country, existing aliases: [mykey] 2018-03-16 08:23:18,201 TRACE [org.jboss.security] (default task-28) PBOX00260: End validateCredential method, result: false 2018-03-16 08:23:18,201 TRACE [org.jboss.security] (default task-28) PBOX00244: Begin abort method, overall result: false 2018-03-16 08:23:18,201 DEBUG [org.jboss.security] (default task-28) PBOX00206: Login failure: javax.security.auth.login.FailedLoginException: PBOX00052: Supplied credential did not match existing credential for alias CN=dev.myserver.com, OU=CMJAVA, O=myserver, L=City, ST=State, C=Country
at org.jboss.security.auth.spi.BaseCertLoginModule.login(BaseCertLoginModule.java:231)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at javax.security.auth.login.LoginContext.invoke(LoginContext.java:755)
```
I do not see any difference between the two. The only difference I see is the alias mykey; that is a default alias. But where is this coming from, as I have supplied the alias myself for both?
This is what I added in the standalone.xml:
```
```
After adding the above in my standalone.xml, when I try to access my application through HTTP it also comes back as Forbidden, but the above exception is the one that comes with HTTPS and doesn't make sense.
Edit:
I removed all alias references and made the certs again and checked with
```
keytool -list -v -keystore (truststore/pC12Store/Server.keystore/client.keystore)
```
They all have the same alias, mykey.
I even looked into the code for the class throwing the error; it seems that the alias coming down to this class is not 'mykey' but the DN definition/Subject.
|
2018/03/15
| 1,865 | 6,816 |
<issue_start>username_0: ```
while (choice != 6) {
System.out.println(" ");
System.out.println("Rainfall Analysis Menu");
System.out.println("1. Display total rainfall.");
System.out.println("2. Display average daily rainfall.");
System.out.println("3. Display day and amount of greatest rainfall.");
System.out.println("4. Display day and amount of least rainfall.");
System.out.println("5. Display number of days a flood alert was issued.");
System.out.println("6. Quit");
System.out.print("Enter your choice: ");
choice = keyboard.nextDouble();
// ... handling for choices 1-3 omitted ...
else if (choice == 4) {
if (rain1 < rain2 && rain1 < rain3 && rain1 < rain4 && rain1 < rain5 && rain1 < rain6 && rain1 < rain7 && rain1 < rain8 && rain1 < rain9 && rain1 < rain10) {
System.out.print("Day 1 had the lowest rainfall with " + rain1 + "inches.");
} else if (rain2 < rain1 && rain2 < rain3 && rain2 < rain4 && rain2 < rain5 && rain2 < rain6 && rain2 < rain7 && rain2 < rain8 && rain2 < rain8 && rain2 < rain9 && rain2 < rain10) {
System.out.print("Day 2 had the lowest rainfall with " + rain2 + " inches.");
} else if (rain3 < rain1 && rain3 < rain2 && rain3 < rain4 && rain3 < rain5 && rain3 < rain6 && rain3 < rain7 && rain3 < rain8 && rain3 < rain9 && rain3 < rain10) {
System.out.print("Day 3 had the lowest rainfall with " + rain3 + " inches.");
} else if (rain4 < rain1 && rain4 < rain2 && rain4
```
I have a program that prompts the user to enter the amount of days they want to analyze, then enter the rainfall amount for those days (1-10 days). If the user wants to find the lowest amount of rainfall, they input number 4 into the program, which is supposed to print "Day x had the lowest rainfall with x inches." I came up with this solution above but want a way to make it simpler and shorter.
|
2018/03/15
| 1,546 | 5,650 |
<issue_start>username_0: I am trying to use formula to copy a date value from a cell to another. But trying to add date format to `worksheet.write_formula()` causes the excel to to show the following alert
[](https://i.stack.imgur.com/dpuEK.png)
Here is a sample code I am using:
```
import xlsxwriter
from datetime import date
workbook = xlsxwriter.Workbook("testFile.xlsx")
worksheet = workbook.add_worksheet()
dateFmt = workbook.add_format({'num_format': 'mm/dd/yyyy'})
worksheet.write(0, 0, date(2018, 12, 26),dateFmt)
worksheet.write_formula('B1','=A1', dateFmt, date(2018, 12, 26))
workbook.close()
```
|
2018/03/15
| 514 | 1,949 |
<issue_start>username_0: I am trying to have the time display every second in an HTML page, in a text box, but I just get the error message that getElementByID does not exist. My code is below. Nothing displays with the code below. can you please correct me, or point out what I am missing?
```
function getTime() {
var currentDate = new Date();
var date = currentDate.toLocaleString();
document.getElementById("clock").innerHTML = date;
}
var repeatedTime = setInterval("getTime()", 1000);
getTime();
```
Here is my HTML code:
```
<input type="text" id="clock">
```
Thank you to <NAME>, who solved this problem. The correct JavaScript, which works with the HTML, is below. I needed to change `document.getElementById("clock").innerHTML` to `.value`.
```
function getTime() {
var dateObject = new Date();
var dateString = dateObject.toLocaleString();
document.getElementById("clock").value = dateString;
}
var repeatedTime = setInterval(getTime, 1000);
getTime();
```<issue_comment>username_1: Your syntax is slightly off - it's
```
getElementById
```
Rather than
```
getElementByID
```
The latter doesn't exist as a method.
Upvotes: 1 <issue_comment>username_2: Four issues
* Use `getElementById` instead of getElementByID ("d" needs to be lowercase)
* Use `document.getElementById("clock").value` instead of `innerHTML` to manipulate the contents of a textbox
* Try `setInterval(getTime, 1000);` instead of `setInterval("getTime()", 1000);`
* Move the `<script>` tag to the bottom of the page, just before the closing `</body>` tag. That way the element being acted upon is rendered and available to be acted upon when the script executes.
For a quick test, you can paste this into your browser console and watch the time tick away.
```
function getTime() {
var currentDate = new Date();
var date = currentDate.toLocaleString();
console.log(date);
}
var repeatedTime = setInterval(getTime, 1000);
getTime();
```
Upvotes: 3 [selected_answer]
|
2018/03/15
| 545 | 2,052 |
<issue_start>username_0: We have purchased a react theme with the intent to use its component in our web application, and we ended up with some issues getting them to transpile, we got that solved. However, we're now running into issues with types.
I believe the issue here is that TypeScript doesn't like that `props` doesn't have a type defined on it, so it defaults to `IntrinsicAttributes`. I'd like to be able to use the components without modification.
**The error:**
```
(TS) Property 'content' does not exist on type 'IntrinsicAttributes & IntrinsicClassAttributes & Readonly<{ children?: ReactNode; }> & Read...'
```
**home.tsx**
```
import * as React from 'react';
import { RouteComponentProps } from 'react-router-dom';
import Card from './test';

type HomeProps = RouteComponentProps<{}>;

export default class Home extends React.Component<HomeProps> {
    public render() {
        return <Card content="some content" />
    }
}
```
**test.jsx**
```
import React, { Component } from 'react';
class Card extends Component {
render() {
        return (
            <div>
                {this.props.content}
            </div>
        );
}
}
export default Card;
```
|
2018/03/15
| 397 | 1,312 |
<issue_start>username_0: If I do this:
```
my_string = 'nnn';
my_string = my_string.replace(/nn/g, 'n*n');
return my_string; //n*nn
```
I get this result `n*nn`, but I need to insert asterisk `*` between every occasion of `nn`. The result should be `n*n*n`.
If `my_string = 'nnnn'`, the result will be `n*nn*n`, but it should be `n*n*n*n`.
|
2018/03/15
| 529 | 1,569 |
<issue_start>username_0: I am facing an issue while selecting the following following timespan :
```
t:([] date:2#.z.d ; time: 10D21:28:47.425287000 10D12:18:23.287989000 )
date time
--------------------------------
2018.03.15 10D21:28:47.425287000
2018.03.15 10D12:18:23.287989000
```
When I run the following query, I am not getting the second record back:
```
select from t where time within (12:00;13:00)
```
I am expecting the 2nd record from the table:
```
date time
-------------------------------
2018.03.15 10D12:18:23.287989000
```
|
2018/03/15
| 562 | 1,740 |
<issue_start>username_0: I am struggling with getting the result back for a dynamic query that is being called from a different stored procedures. Here is what I am trying to achieve.
Procedure A:
```
CREATE PROCEDURE A
@C1 int, @F1 int
AS
SET @SQL = 'SELECT ID FROM EMPLOYEE_TABLE WHERE '+@C1+' = +'@F1'
EXEC(@SQL)
```
Procedure B:
```
CREATE PROCEDURE B
@C1 int, @F1 int
AS
DECLARE @Result INT
EXEC @Result = A @C1, @F1
```
I need to run stored procedure B and let it return back to me the result. I just cannot seem to get the correct result back. How can I fix this problem?<issue_comment>username_1: You can try the following two store procedures query statement
Procedure A:
```
ALTER PROCEDURE A
@C1 VARCHAR(250),
@F1 int
AS
DECLARE @SQL AS VARCHAR(MAX);
SET @SQL = 'SELECT ID FROM PatientTest WHERE '+ @C1+' = ' + CONVERT(VARCHAR(12),@F1)
EXEC(@SQL)
```
Procedure B:
```
ALTER PROCEDURE B
@C1 VARCHAR(250),
@F1 int
AS
Declare @Result int
EXEC @Result = A @C1, @F1
```
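Note that `EXEC @Result = A ...` captures the procedure's *return code*, not the value selected by the dynamic SQL. If the goal is to land the selected `ID` in a variable, a sketch along these lines (untested; uses the question's table and column names) with `sp_executesql` and an OUTPUT parameter may be closer:
```
DECLARE @SQL NVARCHAR(MAX), @Id INT;
-- QUOTENAME guards the injected column name; @F1 travels as a real parameter.
SET @SQL = N'SELECT @IdOut = ID FROM EMPLOYEE_TABLE WHERE ' + QUOTENAME(@C1) + N' = @F1';
EXEC sp_executesql @SQL,
     N'@IdOut INT OUTPUT, @F1 INT',
     @IdOut = @Id OUTPUT,
     @F1 = @F1;
SELECT @Id; -- the captured value
```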
If you face any further problems, please let me know in a comment. Thanks.
Upvotes: 3 [selected_answer]<issue_comment>username_2: Try these two. I think you will get your expected result.
**Procedure 1**
```
CREATE PROCEDURE GetValue
@ColumnName VARCHAR(250),
@ColumnValue VARCHAR(250)
AS
DECLARE @SQL AS VARCHAR(MAX);
SET @SQL = 'SELECT Email FROM Person WHERE '+ @ColumnName + ' = ''' + @ColumnValue + ''''
EXEC (@SQL)
-- EXEC GetValue 'MobileNo', '+8801919111333'
```
**Procedure 2**
```
CREATE PROCEDURE ReturnValue
@ColumnName VARCHAR(250),
@ColumnValue VARCHAR(250)
AS
DECLARE @Result VARCHAR(250)
EXEC @Result = GetValue @ColumnName, @ColumnValue
-- EXEC ReturnValue 'MobileNo', '+8801919111333'
```
Upvotes: 0
|
2018/03/15
| 7,383 | 19,058 |
<issue_start>username_0: In Flow there is support for `$Compose` functions (see [recompose as example](https://github.com/acdlite/recompose/pull/523/files#diff-a3b2ae446242082dc9e5d0161d35703fL136)). However, I can't seem to find such a mechanism in TypeScript. It seems that the best TypeScript can do is something like <https://github.com/reactjs/redux/blob/master/index.d.ts#L416-L460>. What's the equivalent to `$Compose` in TypeScript?
EDIT: What I'm trying to accomplish is to type the `compose` function from `recompose` or `redux` such that it's type-safe. In particular, with React higher order components, I want to ensure that the output props of one HOC satisfy the input props of the next HOC. This is my current workaround and seems to work reasonably well - though I was hoping there is a good way to do this natively in TypeScript.
```
/** Wraps recompose.compose in a type-safe way */
function composeHOCs<OuterProps, I1, I2>(
  f1: InferableComponentEnhancerWithProps<I1, OuterProps>,
  f2: InferableComponentEnhancerWithProps<I2, I1>,
): ComponentEnhancer<I2, OuterProps>
function composeHOCs<OuterProps, I1, I2, I3>(
  f1: InferableComponentEnhancerWithProps<I1, OuterProps>,
  f2: InferableComponentEnhancerWithProps<I2, I1>,
  f3: InferableComponentEnhancerWithProps<I3, I2>,
): ComponentEnhancer<I3, OuterProps>
function composeHOCs<OuterProps, I1, I2, I3, I4>(
  f1: InferableComponentEnhancerWithProps<I1, OuterProps>,
  f2: InferableComponentEnhancerWithProps<I2, I1>,
  f3: InferableComponentEnhancerWithProps<I3, I2>,
  f4: InferableComponentEnhancerWithProps<I4, I3>,
): ComponentEnhancer<I4, OuterProps>
function composeHOCs(
  ...fns: Array<InferableComponentEnhancerWithProps<any, any>>
): ComponentEnhancer<any, any> {
  return compose(...fns)
}
```<issue_comment>username_1: I read your question as follows:
>
> How can I give a TS type to this higher-order function, such that the type of `x` is allowed to vary across the loop?
>
>
>
> ```
> function compose(...funs) {
> return function(x) {
> for (var i = funs.length - 1; i >= 0; i--) {
> x = funs[i](x);
> }
> return x;
> }
> }
>
> ```
>
>
The bad news is that you can't type this function directly. The `funs` array is the problem - to give `compose` its most general type, `funs` should be a *type-aligned* list of functions - the output of each function must match the input of the next. TypeScript’s arrays are homogeneously typed - each element of `funs` must have exactly the same type - so you can't directly express the way the types vary throughout the list in TypeScript. (The above JS works at runtime because the types are erased and data is represented uniformly.) That's why Flow's `$Compose` is a special built-in type.
One option to work around this is to do what you've done in your example: declare a bunch of overloads for `compose` with varying numbers of parameters.
```
function compose<T1, T2, T3>(
    f : (x : T2) => T3,
    g : (x : T1) => T2
) : (x : T1) => T3
function compose<T1, T2, T3, T4>(
    f : (x : T3) => T4,
    g : (x : T2) => T3,
    h : (x : T1) => T2
) : (x : T1) => T4
function compose<T1, T2, T3, T4, T5>(
    f : (x : T4) => T5,
    g : (x : T3) => T4,
    h : (x : T2) => T3,
    k : (x : T1) => T2
) : (x : T1) => T5
```
Clearly this doesn't scale. You have to stop somewhere, and woe betide your users if they need to compose more functions than you expected.
Another option is to rewrite your code such that you only compose functions one at a time:
```
function compose<T, U, R>(g : (y : U) => R, f : (x : T) => U) : (x : T) => R {
    return x => g(f(x));
}
```
This rather muddies up the calling code - you now have to write the word `compose`, and its attendant parentheses, O(n) times.
```
compose(f, compose(g, compose(h, k)))
```
Function composition pipelines like this are common in functional languages, so how do programmers avoid this syntactic discomfort? In Scala, for example, `compose` is an *infix* function, which makes for fewer nested parentheses.
```
f.compose(g).compose(h).compose(k)
```
In Haskell, `compose` is spelled `(.)`, which makes for very terse compositions:
```
f . g . h . k
```
---
You can in fact hack together an infix `compose` in TS. The idea is to wrap the underlying function in an object with a method which performs the composition. You could call that method `compose`, but I'm calling it `_` because it's less noisy.
```
class Comp<T, U> {
    readonly apply : (x : T) => U
    constructor(apply : (x : T) => U) {
        this.apply = apply;
    }
    // note the extra type parameter, and that the intermediate type T is not visible in the output type
    _<V>(f : (x : V) => T) : Comp<V, U> {
        return new Comp(x => this.apply(f(x)))
    }
}

// example
const comp : (x : T) => R = new Comp(f)._(g)._(h)._(k).apply
```
Still not as neat as `compose(f, g, h, k)`, but it's not too hideous, and it scales better than writing lots of overloads.
Upvotes: 4 <issue_comment>username_2: Here's an example of a strongly typed compose function in TypeScript. It has the shortcoming of not checking each intermediate function type, but it is able to derive the argument and return types for the final composed function.
**Compose Function**
```
/** Helper type for single arg function */
type Func<A, B> = (a: A) => B;
/**
 * Compose 1 to n functions.
 * @param func first function
 * @param funcs additional functions
 */
export function compose<
  F1 extends Func<any, any>,
  FN extends Array<Func<any, any>>,
  R extends
    FN extends [] ? F1 :
    FN extends [Func<infer A, any>] ? (a: A) => ReturnType<F1> :
    FN extends [any, Func<infer A, any>] ? (a: A) => ReturnType<F1> :
    FN extends [any, any, Func<infer A, any>] ? (a: A) => ReturnType<F1> :
    FN extends [any, any, any, Func<infer A, any>] ? (a: A) => ReturnType<F1> :
    FN extends [any, any, any, any, Func<infer A, any>] ? (a: A) => ReturnType<F1> :
    Func<any, ReturnType<F1>> // Doubtful we'd ever want to pipe this many functions, but in the off chance someone does, we can still infer the return type
>(func: F1, ...funcs: FN): R {
  const allFuncs = [func, ...funcs];
  return function composed(raw: any) {
    return allFuncs.reduceRight((memo, func) => func(memo), raw);
  } as R
}
```
**Example Usage:**
```
// compiler is able to derive that input type is a Date from last function
// and that return type is string from the first
const c: Func<Date, string> = compose(
(a: number) => String(a),
(a: string) => a.length,
(a: Date) => String(a)
);
const result: string = c(new Date());
```
**How it Works**
We use a reduceRight on an array of functions to feed input through each function from last to first. For the return type of compose we are able to infer the argument type based on the last function argument type and the final return type from the return type of the first function.
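For intuition, here is the runtime half in isolation (a trimmed-down sketch of the same `reduceRight` mechanics):

```
const steps = [(n: number) => n + 1, (n: number) => n * 2];
const run = (x: number) => steps.reduceRight((memo, fn) => fn(memo), x);
console.log(run(5)); // the last function runs first: (5 * 2) + 1 = 11
```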
**Pipe Function**
We can also create a strongly typed pipe function that pipes data through the first function, then the next, and so on.
```
/**
 * Creates a pipeline of functions.
 * @param func first function
 * @param funcs additional functions
 */
export function pipe<
  F1 extends Func<any, any>,
  FN extends Array<Func<any, any>>,
  R extends
    FN extends [] ? F1 :
    F1 extends Func<infer A, any> ?
      FN extends [any] ? Func<A, ReturnType<FN[0]>> :
      FN extends [any, any] ? Func<A, ReturnType<FN[1]>> :
      FN extends [any, any, any] ? Func<A, ReturnType<FN[2]>> :
      FN extends [any, any, any, any] ? Func<A, ReturnType<FN[3]>> :
      FN extends [any, any, any, any, any] ? Func<A, ReturnType<FN[4]>> :
      Func<A, any> // Doubtful we'd ever want to pipe this many functions, but in the off chance someone does, we can infer the arg type but not the return type
    : never
>(func: F1, ...funcs: FN): R {
  const allFuncs = [func, ...funcs];
  return function piped(raw: any) {
    return allFuncs.reduce((memo, func) => func(memo), raw);
  } as R
}
```
**Example Usage**
```
// compiler is able to infer arg type of number based on arg type of first function and
// return type based on return type of last function
const c: Func<number, string> = pipe(
(a: number) => String(a),
(a: string) => Number('1' + a),
(a: number) => String(a)
);
const result: string = c(4); // yields '14'
```
Upvotes: 2 <issue_comment>username_3: As of TypeScript 4, variadic tuple types provide a way to compose a function whose signature is inferred from an arbitrary number of input functions.
```js
let compose = <T, V>(...args: readonly [
(x: T) => any, // 1. The first function type
...any[], // 2. The middle function types
(x: any) => V // 3. The last function type
]): (x: V) => T => // The compose return type, aka the composed function signature
{
return (input: V) => args.reduceRight((val, fn) => fn(val), input);
};
let pipe = <T, V>(...args: readonly [
(x: T) => any, // 1. The first function type
...any[], // 2. The middle function types
(x: any) => V // 3. The last function type
]): (x: T) => V => // The pipe return type, aka the composed function signature
{
return (input: T) => args.reduce((val, fn) => fn(val), input);
};
```
However there are still two drawbacks with this implementation:
1. The compiler can not verify that the output of each function matches the input of the next
2. The compiler complains when using the spread operator (but still successfully infers the composed signature)
e.g. The following will work at compile time and at runtime
```
let f = (x: number) => x * x;
let g = (x: number) => `1${x}`;
let h = (x: string) => ({x: Number(x)});
let foo = pipe(f, g, h);
let bar = compose(h, g, f);
console.log(foo(2)); // => { x: 14 }
console.log(bar(2)); // => { x: 14 }
```
While this will complain at compile time, it still infers the signature correctly and runs:
```
let fns = [f, g, h];
let foo2 = pipe(...fns);
console.log(foo2(2)); // => { x: 14 }
```
Upvotes: 2 <issue_comment>username_4: I found that it's not too hard to write a typed compose function **now** (TypeScript v4.1.5 and above, tested in [TypeScript Playground](https://www.typescriptlang.org/play?#code/C4TwDgpgBAwg9gWzHAzhAPAMQHxQLwBQUxUAFJlBAB7AQB2AJilANoCWdAZhAE5SYBGADRQO3PpgBMIgHRyxvKACUAygF0oAfiIldpVZRr0mrDdt0WS5AYdqNmpOTICGPAOYoAXKK6KACgIAlPi4CnxKNuaW0cTkkrbGDk6uHt5hUH6SwXihvuHxUTFFZCwRGtR2JplaOsV1ZMnuXhlBIcrxnrX1RXQQAG68wZ3d0b0DPENd9WODUMN18EioGCyLyGjoLIIiUmrYsnKqe5MxMxMA3AQEoJCwiOsQAIJN6AAquHgZrs4IELQ8KHQ<KEY>)). Here's an example. It can check each intermediate function type.
```js
type Compose<F extends Function[]> =
    (F extends [infer F1, infer F2, ...infer RS] ?
        (RS extends [] ?
            (F1 extends (...args: infer P1) => infer R1 ?
                (F2 extends (...args: infer P2) => infer R2 ?
                    ([R1] extends P2 ?
                        (...args: P1) => R2 :
                        never) :
                    never) :
                never) :
            Compose<[Compose<[F1, F2]>, ...RS]>) :
        never);
type ComposeArgs<T extends Function[]> = Parameters<Compose<T>>;
type ComposeReturn<T extends Function[]> = ReturnType<Compose<T>>;
// I forget that composition is from right to left!
type Reverse<T extends any[], RE extends any[] = []> = T extends [infer F, ...infer RS] ? Reverse<RS, [F, ...RE]> : RE;
function composeL2R<T extends Function[]>(...fns: T): (...args: ComposeArgs<T>) => ComposeReturn<T> {
    return (...args: ComposeArgs<T>): ComposeReturn<T> => fns.reduce((acc: unknown, cur: Function) => cur(acc), args);
}
function compose<T extends Function[]>(...fns: T): (...args: ComposeArgs<Reverse<T>>) => ComposeReturn<Reverse<T>> {
    return (...args: ComposeArgs<Reverse<T>>): ComposeReturn<Reverse<T>> => fns.reduceRight((acc: unknown, cur: Function) => cur(acc), args);
}
function fns(x: number): string { return `${x}0`; }
function fnn(x: number): number { return 2 * x; }
function fsn(x: string): number { return parseInt(x); }
let aNumber = compose(fsn, fns, fnn, fsn, fns, () => 1)();
let aNumberL2R = composeL2R(() => 1, fns, fsn, fnn, fns, fsn)();
let aNever = compose(fns, fsn, fnn)(1);
let aNeverL2R = composeL2R(fnn, fsn, fns)(1);
```
Upvotes: 1 <issue_comment>username_5: TypeScript 4's [tuple type improvements](https://www.typescriptlang.org/docs/handbook/release-notes/typescript-4-0.html#variadic-tuple-types) can be leveraged to type `pipe` and `compose` functions, without defining a list of overrides.
The compiler will ensure that each function can be invoked with the following function as expected (each intermediate function is type checked).
[TypeScript Playground with the examples below](https://www.typescriptlang.org/play?ssl=17&ssc=37&pln=1&pc=1#code/GYVwdgxgLglg9mABAQwCaoMwAoAeAuRMEAWwCMBTAJwEoCiyrEBvAWAChFFLyoRKkciANSIM7AL7t2oSLASIAzlEowwAcxjAAnrjokKNAkpXrm7Tt179EOAHRQ4AZWWq1WagG4JUtjOjwkEAAHIKoIZAVyXUUXdVoYkzUzDi4ePgF7OABVEKoAYQioz282aXB-eRhUcjBYKC0AHgAVAD4sKAIm+KbkizTrKC82SVK2etDELLBkSi0AMXK5JABeRGjkMC1qRGWWlE2fAHoAKkQABRgJ48P2cfJzy-JkUgAbcga5sD3V80RPxHIOCgNVQClSaAQLy0iAA2lMZvNFgEALqIAD8fyQeF+-0BwLAoPBqEh0JhtnJqmAjAASuQlAAafZaVFo359CFgKGw8m2C6hZ5vBq0pQtRlYGZqAh88i0qxgIV0qAtba7JmoujkABujHMty0E2lZxmyGICg+YDBeJBYO4HK58NmC1kARhyO+52NxB4VDNnwUMIADG7A8ifHcHqFZelzQp3X6AUDrUSSdzyRstK7GZTGAAZCJQBP4wkOxHOhAs35R-hNfXvPMixAa7WUHx+JaIIKPGOFpO24mc6Elp0VMCuto84AWqWPAXvP3K3qpOWINsBcWUSUR8hGygm30W5XTyP9eXzxd9ZeThS2bioEAQKLii<KEY>wgA).
```
type UnaryFunction = (x: any) => any
type Composable<Fn> =
    Fn extends readonly [UnaryFunction] ? Fn :
    Fn extends readonly [any, ...infer Rest extends readonly UnaryFunction[]] ?
        readonly [(arg: ComposeReturn<Rest>) => any, ...Composable<Rest>] : never
type ComposeReturn<Fns extends readonly UnaryFunction[]> = ReturnType<Fns[0]>
type ComposeParams<Fns> = Fns extends readonly [...any[], infer Last extends UnaryFunction] ?
    Parameters<Last>[0] : never
function compose<Fns extends readonly UnaryFunction[]>(...fns: Composable<Fns>) {
    return function(arg: ComposeParams<Fns>): ComposeReturn<Fns> {
        return fns.reduceRight((acc, cur) => cur(acc), arg) as ComposeReturn<Fns>;
}
}
```
Example:
```
function add3(x: number): number {
return x + 3
}
function uppercase(x: string): string {
return x.toUpperCase();
}
function stringify(x: number): string {
return x.toString();
}
const composed = compose(
uppercase,
stringify,
add3,
);
console.log(composed(0));
```
One notable limitation is that TypeScript still can't infer generic function Parameter and Return types. As of TypeScript 4.7 you can help the compiler by using [instantiation expressions](https://devblogs.microsoft.com/typescript/announcing-typescript-4-7/#instantiation-expressions).
```
function add3(x: number): number {
return x + 3
}
function stringify(x: number): string {
return x.toString();
}
function identity<T>(t: T): T {
return t;
}
const composed = compose(
stringify,
// have to use Instantiation Expressions from TS 4.7 when using generics
identity<number>,
add3,
);
console.log(composed(0));
```
Upvotes: 2 <issue_comment>username_6: I did some digging around and found a wonderful recursive solution by '@cartersnook6139' in the comments section to [Matt Pocock's video](https://youtu.be/D1a8OoBWi1g) on a typed compose function. Here's the link to the [Typescript Playground](https://www.typescriptlang.org/play?ts=4.9.4#code/CYUwxgNghgTiAEYD2A7AzgF3gSQHIDUBBAGWwBEB9AYQHkBZABRoGVCAhYgUWoAlC8AXPACuKAJYBHYQjQBPALYAjJBADcAKHUZZABwRUk8nfAC88ABSwA5kKgpZASlMA%20eHdkatuhNjT4oEGLAVAAWUGIoADwAKvAgAB4YICjAaBaWMDbwKCAAbiAwTiau7g4A2gC6ribqAJCxCUkpaWURAGYF8AAkAGJiMJhxicmp8AZGADTw7Z1dzOCowENNo%20M6UwB0WzMw3QBKIIONI2lrlRV1tQD88GUHGMIwKNHekb39mM4Vyye3DLBQeQgJIDN7zZApZxlAAMFQutQRN18-kCwTCEUiZTmCxSmy2XQOmCqlyE5gaw2at3ceI2O32hww3xuBIZZQARBBklYMCE2d8hDl8oU6kIMDBpJ5tHp4AcdNAwCAejBDGwoGAANYxH6U0TqlBIADuKEqUxobTaaGB2tGKGESgKU2wSXkUy6ZBgUDaWGOOpQesNxu%20Zkq1Tqbo9XvZnJQ3N53x9ozNFuBl2Z7s93opozK1PgW1pKA6uy6hC9BXhiNu%20eijudNJLZZgFYFeQKIvg5JWLXzdK6bBAbSQcCmvadIHkFZusvliuV8lVGre-cHw-gSctGFr46mWLHLrz%20PTXuJtRbQsl3jcaGAbVMMpAcrVs5Vas1ZQAjFMAExTADMUwAFimABWKYADYpgAdimAAOKYAE4Kj-KY2RCEBZDZZxPDaUQwAwMRUEQQwdCQS1IjWUioEUTk0gTFo1hpM44Wccx1HgA8NmQIxKOow4BEuZEAiCUJwiiCi0ComjXDoixe3XK0ZNte1hQRa570fBUlRfRdxMkw5XXkzd4DZPAiFIShaEYFh2C4TCSTGYieJo9QnAAbzY%20A4AeJ4LA89i2g%20DAyCgDAoCEf4PSBEE0HIxyJN4tAYSqJKPIcIRdIS61uy2dwTWmQtZmIKAjizNIMiyQUCiKEp7Audj2KnYFHmeV4uiKz4-PgM9OmKeB3Pq%20BOSwYAQrCtx7DvAKBiC0aNAGlcLAhQYuJI%20LOXgJBbxWpzDjczr2JG0K73Mba1oQYrxsccxDqgBw5vqgBfTqvOa%20Abvuh6NCe9QlqwKBgGAGgcmOsalMUKqBTtcHdl6qB4AAangd8NF%207I7WiJBmDFCIrBByHlLS%20BMBgHGXDcDYMEx7GY3MO6ftQQZiZxjHcDtPGiepqxCbBnrXB0WBLR6CAkBCyw6fUFBAUOfmFQcmA4DwiBZH%20NA0FJ-qiPQLBcmhO9TpASwAaBkApiUjGsZJmMpiZmMWbtO74A8gB6J2BoAPSuTR2NR3J3z1uKDf%20wGcimIPjdDo2Q7cSOTej4PY7NqnLa5%206Xfdz2PJ9r9-e4y1zETi2cetzm7f3MOcjp9i0-qj2vc1wZcl-HPVsDmOI-j9vjcr%20Bq-Y2vvsloE0Bl-QhwVjAlZ6cJAhjPrnddgQkHgHl0LcPDhACJWiZCQ1l7QuJ5aHbJGmXpeV-y5B5fALBUBAABCTOA8NjuOeT0vu-1-P0aTou467%207P42ysKXYub8kCs3kB-J%205dY4wM7lHOBr9mbgPtgA6Bbc-4IIwYgoBIDMEgDpk9IAA). Its magic!
```
declare const INVALID_COMPOSABLE_CHAIN: unique symbol;
type Comp = (arg: any) => any;
type IsValidChain<T extends ((arg: never) => any)[]> =
T extends [infer $First extends Comp, infer $Second extends Comp, ...infer $Rest extends Comp[]]
? [ReturnType<$First>] extends [Parameters<$Second>[0]]
? IsValidChain<[$Second, ...$Rest]>
: (T extends [any, ...infer $Rest] ? $Rest["length"] : never)
: true;
type ReplaceFromBack<T extends unknown[], Offset extends number, Item, $Draft extends unknown[] = []> =
$Draft["length"] extends Offset
? $Draft extends [any, ...infer $After]
? [...T, Item, ...$After]
: never
: T extends [...infer $Before, infer $Item]
? ReplaceFromBack<$Before, Offset, Item, [$Item, ...$Draft]>
: never;
type asdf = ReplaceFromBack<[1, 2, 3, 4, 5, 6, 7, 8, 9], 3, "hey">;
function compose<Composables extends [Comp, ...Comp[]]>(
  ...composables:
    IsValidChain<Composables> extends (infer $Offset extends number)
      ? ReplaceFromBack<Composables, $Offset, typeof INVALID_COMPOSABLE_CHAIN>
      : Composables
) {
return (
firstData: Parameters<Composables[0]>[0]
): Composables extends [...any[], infer $Last extends (arg: never) => any]
? ReturnType<$Last>
: never => {
let data: any = firstData;
for (const composable of composables) {
data = (composable as any)(data);
}
return data;
};
}
const addOne = (a: number): number => a + 1;
const numToString = (a: number): string => a.toString();
const stringToNum = (a: string): number => parseFloat(a);
namespace CorrectlyPassing {
const v0 = compose(addOne, numToString, stringToNum);
// ^?
const v1 = compose(addOne, addOne, addOne, addOne, addOne, numToString);
// ^?
const v2 = compose(numToString, stringToNum, addOne);
// ^?
const v3 = compose(addOne, addOne, addOne);
// ^?
}
namespace CorrectlyFailing {
// :o they actually show the error next to the incorrect one!
compose(addOne, stringToNum);
compose(numToString, addOne);
compose(stringToNum, stringToNum);
compose(addOne, addOne, addOne, addOne, stringToNum);
compose(addOne, addOne, addOne, addOne, stringToNum, addOne);
}
```
Upvotes: 0
|
2018/03/15
| 806 | 2,611 |
<issue_start>username_0: What I am trying to do is select rows which do not have a specific class name and push them into a new array. I know that there are the `:not()` selector and the `.not()` method to help me with this.
But the big problem is that I can't use the `:not()` selector with `$(this)`, and I tried using the `.not()` method but couldn't get anywhere.
Here is my code:
```js
$(document).ready(function(){
$('#getRows').on('click', function() {
var temp = new Array();
$('#tbl tr').each(function(){
var clsFree = $(this).not(document.getElementsByClassName("testCls"));
temp.push(clsFree);
});
console.log(temp.length);
});
});
```
```html
Get rows without class
| |
| --- |
| Test1 |
| Test2 |
| Test3 |
| Test4 |
| Test5 |
| Test6 |
| Test7 |
| Test8 |
| Test9 |
```
>
> Please note that the main goal here is to find rows that don't have a class name with `testCls` and push them into a new array. Any other method is also appreciated.
>
>
><issue_comment>username_1: two things:
* this is the syntax for the .not: `$(this).not('.testCls');`
* clsFree is going to be a jQuery object, and jQuery objects still exist even if there are no elements in them. You have to check the length to see if there are any elements.
also as an aside, you might end up being happier with something like this:
`$('#tbl tr:not(.testCls)').each...`
```js
$(document).ready(function() {
$('#getRows').on('click', function() {
var temp = new Array();
$('#tbl tr').each(function() {
clsFree = $(this).not('.testCls');
if (clsFree.length > 0)
temp.push(clsFree);
});
console.log(temp.length);
});
console.log('other method', $('#tbl tr:not(.testCls)').length);
});
```
```html
Get rows without class
| |
| --- |
| Test1 |
| Test2 |
| Test3 |
| Test4 |
| Test5 |
| Test6 |
| Test7 |
| Test8 |
|Test9
```
Upvotes: 1 <issue_comment>username_2: Try `:not()` as part of the selector in `.each` iterator to iterate over only with the selected rows in the selector:
`$('#tbl tr:not(.testCls)').each(function(){`
**Working Code Snippet:**
```js
$(document).ready(function(){
$('#getRows').on('click', function() {
var temp = new Array();
$('#tbl tr:not(.testCls)').each(function(){
var clsFree = this;
temp.push(clsFree);
});
console.log(temp.length);
});
});
```
```html
Get rows without class
| |
| --- |
| Test1 |
| Test2 |
| Test3 |
| Test4 |
| Test5 |
| Test6 |
| Test7 |
| Test8 |
| Test9 |
```
Upvotes: 3 [selected_answer]
|
2018/03/15
| 880 | 2,761 |
<issue_start>username_0: I've never cloned a private GitHub repository. So I followed GitHub's guide but it's still rejecting me. I have a Red Hat linux server on AWS. I did the following:
1. Ran `ssh-keygen -t rsa -b 4096 -C "< my github email address >"`.
2. Ran `eval "$(ssh-agent -s)"`.
3. Ran `ssh-add ~/.ssh/id_rsa`.
4. Ran `cat ~/.ssh/id_rsa.pub` (to get the value of the key).
5. Added output of step 4 into here: <https://github.com/settings/keys>.
6. Ran `ssh -T git@github.com` and it outputted this:
>
> Hi AskYous! You've successfully authenticated, but GitHub does not
> provide shell access.
>
>
>
7. Ran `sudo git clone git@github.com:AskYous/google-code-challange.git` (a private repository I own). This is when I got the following error:
>
> Cloning into 'google-code-challange'... Permission denied (publickey).
> fatal: Could not read from remote repository.
>
>
> Please make sure you have the correct access rights and the repository
> exists.
>
>
>
8. I checked GitHub and it acknowledges that the key has been used: [](https://i.stack.imgur.com/BimT8.png)
|
2018/03/15
| 609 | 2,228 |
<issue_start>username_0: **Question**: I'm looking for a way to configure Java to create new files with a particular permission set by default.
**Problem**: I have a Spring Boot app which uses the following:
* Log4J2 for logging
* H2 for flat file databases
* Ehcache for cached entities
All of these libraries create new files on the local file system, and when they do, they produce world-writeable files (666 for files and 777 for directories). I have seen this on macOS 10.13 (user has "umask 0022") and on Amazon Linux (user has "umask 0002").
If I was directly managing the creation of the files, I can do what I need with [PosixFilePermission](https://docs.oracle.com/javase/7/docs/api/java/nio/file/attribute/PosixFilePermission.html), but since file creation is delegated to the libraries, I don't have that opportunity. I could potentially set a timer to discover new files and set the permissions directly, but I'm not wild about that approach.
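For reference, the direct route would look roughly like this (a sketch; the file name is made up):

```
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.nio.file.attribute.PosixFilePermissions;

public class CreateWithPerms {
    public static void main(String[] args) throws IOException {
        Path p = Files.createFile(Paths.get("app.log")); // stand-in for a library-created file
        // chmod 644 after the fact; setPosixFilePermissions is not masked by umask
        Files.setPosixFilePermissions(p, PosixFilePermissions.fromString("rw-r--r--"));
    }
}
```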
Log4J2 v2.9 added a filePermissions field to [RollingFileAppender](https://logging.apache.org/log4j/2.x/manual/appenders.html#RollingFileAppender), so I have hope for one of my problems, but I'm not able to find something similar for H2 or Ehcache. Ideally, I'd like to do this at the JVM/Boot level for simplicity and future-proofing.<issue_comment>username_1: Here's [a topic](https://stackoverflow.com/questions/41975808/set-umask-for-tomcat8-via-tomcat-service) of tomcat and umask. Seems tomcat has it's own behavior dealing with umask.
So maybe there is a way to config the 'umask behavior' of tomcat embedded in Spring Boot? Like properties or something.
I can't pretend this is an answer, but sadly I don't have enough reputation to comment on your question. Hope this helps you a little.
Upvotes: 1 <issue_comment>username_2: Turns out this is a red herring. The issue is not with java, it's with the YAJSW service wrapper that launches the java process. YAJSW [has several parameters](http://yajsw.sourceforge.net/YAJSW%20Configuration%20Parameters.html) for setting umask, including on the child process, but they are not implemented yet. Launching the app outside of YAJSW produces files that obey the user's umask.
Upvotes: 1 [selected_answer]
|
2018/03/15
| 398 | 1,518 |
<issue_start>username_0: How would I provide pundit authorization for a dashboard controller which provides data from various models?
My DashboardsController looks like this:
```
class DashboardsController < ApplicationController
before_action :authenticate_user!
before_action :set_user
before_action :set_business
after_action :verify_authorized
def index
end
private
def set_user
@user = current_user
end
def set_business
@business = current_user.business
end
end
```
How would I authorize for both @user and @business within my DashboardsPolicy?
|
2018/03/15
| 1,071 | 3,704 |
<issue_start>username_0: Currently I am attempting to create a dictionary which maps a selected item form a list view with a corresponding string output in a rich text box. I would like to bold specific text in the string, which will always be the same (constant) and also adding dynamic text to the string that would change.
Something like this:
**ID:** 8494903282
Where ID is the constant text I need bolded and the numbers would be a dynamic ID that changes. I will need to have multiple lines with different data in this format which will be changing:
**ID:** 8494903282
**Name:** <NAME>
**Date:** 3/15/2018
Currently I have a rich text box to output to and I am trying to use some string formatting to do what I want but this is not working correctly. Essentially I need a string value I can store in a dictionary so when an item gets selected I can just set the rtf property of the text box to the value of that dictionary item.
Below I have my current format string I am attempting to set the rtf property to:
```
string s1 = string.Format(@"{{\rtf1\ansi \b Commit ID: \b0 {0}\line}}", entry.ID);
string s2 = string.Format(@"{{\b Author: \b0 {0}\line}}", entry.Author);
string s3 = string.Format(@"{{\b Date: \b0 {0}\line}}", entry.Date.ToString("d"));
string s4 = Environment.NewLine + Environment.NewLine + entry.Message;
contents = (s1 + s2 + s3 + s4);
```
Then setting the rtf property of my rich text box:
```
LogContentsTB.Rtf = Logs[LogNamesLV.SelectedItems[0].Name];
```
Where logs is a dictionary of the form < string, string > that holds the format string for the specific item.
However, I get the following output rather than my expected output:
[](https://i.stack.imgur.com/iip5p.png)
This is the correct form of output for the first item but nothing else appears. If there are any other ways to do this I am open to suggestion.<issue_comment>username_1: I think your RTF formatting is wrong. You could try:
```
string s1 = string.Format(@"{{\rtf1\ansi\r\b Commit ID:\b0 {0}\line\r", entry.ID);
string s2 = string.Format(@"\b Author: \b0 {0}\line\r", entry.Author);
string s3 = string.Format(@"\b Date: \b0 {0}\line\r", entry.Date.ToString("d"));
string s4 = Environment.NewLine + Environment.NewLine + entry.Message + "}}";
contents = (s1 + s2 + s3 + s4);
```
Upvotes: 0 <issue_comment>username_2: After doing some light reading on the rtf syntax I noticed that I was trying to close off each string with curly braces. Curly braces are used for RTF groups. For some reason the rich text box in windows forms did not play well with that.
Another thing to notice is that the string.Format method was probably the main culprit behind the issues with this type of formatting. In my answer I do not use it but rather just add the string directly into the rtf formatted string i.e. < format >< variable string >< ending format >
If you look at username_1's response, you will notice he only puts an opening brace on the very first string, s1. This is to group the whole string. But we need to add a closing brace on the final string, s4, to finish the grouping. Below is the final code and screenshot that worked for my application.
```
string s1 = @"{\rtf1\ansi\b ID: \b0 " + entry.ID + @" \line\line";
string s2 = @"\b Author: \b0 " + entry.Author + @" \line\line";
string s3 = @"\b Date: \b0 " + entry.Date.ToString("d") + @" \line\line ";
string s4 = entry.Message + @"}";
contents = s1 + s2 + s3 + s4;
```
[](https://i.stack.imgur.com/XkPeb.png)
Thanks for pointing me in the right direction!
Upvotes: 2 [selected_answer]
|
2018/03/15
| 1,051 | 4,214 |
<issue_start>username_0: I am developing a simple react-native app and am encountering an issue on TouchableHighlight:
When pressing the `TouchableHighlight`, a new screen is displayed (using StackNavigator from `react-navigation`). After pressing the back-button and returning to the original screen, the `TouchableHighlight` still has a black background-color - meaning, that it is still highlighted.
My questions are:
* Is there a way to manually deactivate the highlighting of a `TouchableHighlight`-component? That way I could disable the highlighting after `onPress` has run.
* What could be possible reasons to why the `TouchableHighlight` stays highlighted? I am using it on other parts of my app without navigation, and I could imagine that it has to do with that.
The `TouchableHighlight` exists within a FlatList. The renderItems-method looks like the following:
```
let handlePress = () => {
this.props.navigation.navigate('DetailsScreen');
};
return <TouchableHighlight onPress={handlePress}>
    <Text>Some Text</Text>
</TouchableHighlight>;
```
If you need/want any further information, please let me know. I've tested the code on android, using the Genymotion-emulator with Marshmallow.
Versions are:
* node -v: 8.9.4
* npm -v: 5.6.0
* react-native-cli: 2.0.1
* react-native: 0.54.2
* react-navigation: 1.5.2
* Build environment: Windows 10 64-bit
At this point, I'm quite certain that the error is somewhere in my code, as `TouchableHighlight` works correctly on other parts of my app, and it probably has to do with the navigation call, but I was unable to pinpoint why exactly. I've made sure that there are no exceptions or anything like that in my app, and that the onPress-method therefore finishes successfully.<issue_comment>username_1: You can replace TouchableHighlight with TouchableOpacity. It won't highlight the press.
```
return <TouchableOpacity onPress={handlePress}>
    <Text>Some Text</Text>
</TouchableOpacity>;
```
Upvotes: 0 <issue_comment>username_2: After using the tip from @username_1 and exchanging `TouchableHighlight` with `TouchableOpacity` and back to `TouchableHighlight`, it now works as expected:
Now, after `onPress` has been executed, the button (be it a `TouchableOpacity` or a `TouchableHighlight`) loses its effect.
I am not sure why it works now. The obvious reason would be that a recompilation of the source code fixed errors - but I recompiled it multiple times while writing the original question, so that cannot be the explanation. To other users I would suggest clearing every cache possible, and especially doing the following steps:
* Close and reopen the android emulator / restart your testing device
* Restart the build PC
* Recompile all source code
* Check in your console for errors and/or exceptions (obviously)
* Replace `TouchableHighlight` with `TouchableOpacity`, recompile, check if the error still exists - and if not, reexchange `TouchableOpacity` to `TouchableHighlight`
Upvotes: 3 [selected_answer]<issue_comment>username_3: For me, I needed to disable the highlight effect after onLongPress has been fired. You can simply change the key of the touchable via a re-render when you want to remove it.
Here's an example:
```
<TouchableHighlight
    onLongPress={() => this.setState({ onLongPressed: true })}
    onPressOut={() => this.setState({ onLongPressed: false })}
    key={`long-pressed-${this.state.onLongPressed ? 'yes' : 'no'}`}>
    {rowText}
    {rowIcon}
</TouchableHighlight>
```
Upvotes: 0 <issue_comment>username_4: You can replace TouchableHighlight with TouchableOpacity and simply set the activeOpacity prop to 1. It will not highlight the press.
```
<TouchableOpacity activeOpacity={1}>
    ....
</TouchableOpacity>
```
Upvotes: 2 <issue_comment>username_5: Following username_3's answer, there is one thing you should also add:
```
useEffect(() => {
if(isLongPressed){
setIsLongPressed(false)
}
}, [isLongPressed])
```
This step is necessary because
when the first onLongPress event is fired it sets isLongPressed to true; since that changes the key, the component is re-rendered and identified as a new component, so the previous event listeners are discarded and onPressOut will not be fired. So
when isLongPressed is set to true the component re-renders, and we then immediately set its value back to false, which gives the expected behaviour. Otherwise we would get one unexpected behaviour followed by the expected one.
Upvotes: 0
|
2018/03/15
| 822 | 3,153 |
<issue_start>username_0: I am building a software that uses WebSocket with the package NodeJS ws. I organised my networking around a module that is charged of receiving and sending the messages. Because i am using TypeScript, I want to use the type checking which can be pretty handy sometimes. So I constructed two types:
```
// networking.ts
export class Message {
constructor(
public request: string,
public params: Params) {}
}
export interface Params {
messageId?: number;
}
```
This works fine as long as I am only required to use `messageId` as a parameter. But what if I need to also send a nickname as a parameter ? I could add it to the `Params` definition but I don't want the networking engine to bother to know all the different parameters that can be sent (you can imagine that I have actually more than one parameter to send)...
Is there a way to do something like:
```
// profile-params.ts
export interface Params {
nickname:string;
}
// admin-params.ts
export interface Params {
authorizations: string[];
}
```
Such that in the end, the declaration of `Params` can be merged? I looked at the [official documentation](https://www.typescriptlang.org/docs/handbook/declaration-merging.html) but I can't make it work across different files.
Thanks<issue_comment>username_1: Use generics:
```
export class Message<P extends Params> {
constructor(
public request: string,
public params: P
) { }
}
export interface Params {
messageId?: number
}
// profile-params.ts
export interface ProfileParams extends Params {
nickname: string
}
const message = new Message('', { nickname: 'a' })
```
Upvotes: 0 <issue_comment>username_2: I don't fully know the reasons why right now (other than the fact that the moment something is exported it is turned into a module), but the only way I know how to accomplish this is to have them in their own individual files where nothing is exported and import the files (without the `{ Params } from` syntax).
```
// ./message/params.model.ts
interface Params {
messageId?: number;
}
// ./user/params.model.ts
interface Params {
nickname: string;
}
// ./authorization/params.model.ts
interface Params {
authorizations: string[];
}
// ./some.component.ts
import './message/params.model.ts';
import './user/params.model.ts';
import './authorization/params.model.ts';
const params: Params = {
authorizations: ['<PASSWORD>', '<PASSWORD>'],
nickname: 'ninja',
messageId: 1
};
```
Upvotes: 0 <issue_comment>username_3: @username_2 is ok, if your interface does not come from a module and you are ok with putting it in the global scope.
If you want the interface to be part of a module you could use [module augmentation](https://www.typescriptlang.org/docs/handbook/declaration-merging.html) to extend the interface:
```
//pModule.ts
export interface Params {
messageId: number;
}
//pAug.ts
import { Params } from "./pModule";
declare module './pModule' {
interface Params {
nickname: string;
}
}
//usage.ts
import {Params} from './pModule'
import './pAug'
let p : Params = {
nickname: '',
messageId: 0
}
```
Upvotes: 2
|
2018/03/15
| 523 | 1,959 |
<issue_start>username_0: Let's consider the following interface implementation:
```
Comparator<String> stringComparator = (o1, o2) -> 0;
```
Does it violate the Liskov Substitution Principle?<issue_comment>username_1: In this specific case, no. Your example `Comparator` is simply saying that all strings are equal. This satisfies the contract for a `Comparator` (see the [javadoc](https://docs.oracle.com/javase/8/docs/api/java/util/Comparator.html)), and more broadly it satisfies the Liskov Substitutability Principle (LSP).
(This `Comparator` is not useful1, and may well be a mistake / bug, but LSP is not violated.)
In the more general case, a `Comparator` implementation that doesn't satisfy the `Comparator` contract technically violates the LSP. But you could also just say that it is broken. You fix it *primarily* because it is broken / won't work properly ... not because of some design principle.
More generally still, not all examples of LSP violations are broken code. One example is `IdentityHashMap` which (deliberately and usefully) violates the contract for `Map` by not using `equals()` when testing if keys are the same. (It uses `==` instead. Correctly, given the purpose of the class.)
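A quick sketch of that difference:

```
import java.util.HashMap;
import java.util.IdentityHashMap;
import java.util.Map;

public class IdentityDemo {
    public static void main(String[] args) {
        Map<String, Integer> byIdentity = new IdentityHashMap<>();
        byIdentity.put(new String("key"), 1);
        byIdentity.put(new String("key"), 2); // distinct objects, so == sees two keys
        System.out.println(byIdentity.size()); // 2

        Map<String, Integer> byEquals = new HashMap<>();
        byEquals.put(new String("key"), 1);
        byEquals.put(new String("key"), 2);   // equals() sees the same key
        System.out.println(byEquals.size());  // 1
    }
}
```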
LSP violations can lead to unexpected behavior, but not all surprises are bugs.
---
1 - I can't think of a case where it would be sensible to use this comparator. Not a `TreeMap`. Not sorting ...
Upvotes: 2 [selected_answer]<issue_comment>username_2: This comparator implements a completely valid total order.
Just check the axioms:
* For all `a`, `b`: if `a <= b` and `b <= a` then `a = b` is trivial (antecedents do not matter, consequence always true)
* Transitivity is trivial, because `a <= b` for all `a`, `b`
* Totality is trivial: for all `a`, `b` it holds both `a <= b` AND `b <= a`.
So, even if it had anything to do with Liskov's substitution principle, it's still just a perfectly valid implementation of a `Comparator`.
Upvotes: 0
|
2018/03/15
| 1,389 | 4,583 |
<issue_start>username_0: I am trying to visualize Russian regions. I got data from [here](http://global.mapit.mysociety.org), validated it [here](http://geojson.io/) and all was well - [picture](https://i.stack.imgur.com/9ev9f.jpg).
But when I try to draw it, I receive only one big black rectangle.
```
var width = 700, height = 400;
var svg = d3.select(".graph").append("svg")
.attr("viewBox", "0 0 " + (width) + " " + (height))
.style("max-width", "700px")
.style("margin", "10px auto");
d3.json("83.json", function (error, mapData) {
var features = mapData.features;
var path = d3.geoPath().projection(d3.geoMercator());
svg.append("g")
.attr("class", "region")
.selectAll("path")
.data(features)
.enter()
.append("path")
.attr("d", path)
});
```
Example - <http://ustnv.ru/d3/index.html>
Geojson file - <http://ustnv.ru/d3/83.json><issue_comment>username_1: The issue is the winding order of the coordinates (see [this block](https://bl.ocks.org/mbostock/a7bdfeb041e850799a8d3dce4d8c50c8)). Most tools/utilities/libraries/validators don't really care about winding order because they treat geoJSON as containing Cartesian coordinates. Not so with D3 - D3 uses ellipsoidal math - benefits of this is include being able to cross the antimeridian easily and being able to select an inverted polygon.
The consequence of using ellipsoidal coordinates is the wrong winding order will create a feature of everything on the planet that is not your target (inverted polygon). Your polygons actually contain a combination of both winding orders. You can see this by inspecting the svg paths:
[](https://i.stack.imgur.com/Ctd2k.png)
Here one path appears to be accurately drawn, while another path on top of it covers the entire planet - except for the portion it is supposed to (the space it is supposed to occupy covered by other paths that cover the whole world).
This can be simple to fix - you just need to reorder the coordinates - but as you have features that contain both windings in the same collection, it'll be easier to use a library such as [turf.js](http://turfjs.org/docs#rewind) to create a new array of properly wound features:
```
var fixed = features.map(function(feature) {
return turf.rewind(feature,{reverse:true});
})
```
Note the reverse winding order - through an odd quirk, D3, which is probably the most widespread platform where winding order matters actually doesn't follow the geoJSON spec (RFC 7946) on winding order, it uses the opposite winding order, see this comment by <NAME>:
>
> I’m disappointed that RFC 7946 standardizes the opposite winding order
> to D3, Shapefiles and PostGIS. And I don’t see an easy way for D3 to
> change its behavior, since it would break all existing (spherical)
> GeoJSON used by D3. ([source](https://github.com/d3/d3-geo/pull/79#issuecomment-280935242))
>
>
>
By rewinding each polygon we get a slightly more useful map:
[](https://i.stack.imgur.com/VgIMl.png)
An improvement, but the features are a bit small with these projection settings.
By adding a fitSize method to scale and translate we get a much better looking map (see [block here](https://bl.ocks.org/Andrew-Reid/7c1137fd4d3d6d6c6ff60d8558a0b6c5)):
[](https://i.stack.imgur.com/IZ3m1.png)
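The fitSize step in the linked block is roughly the following (a sketch; `fixed` is the rewound feature array from above, and `width`/`height` are the dimensions from the question):

```
var projection = d3.geoMercator()
    .fitSize([width, height], { type: "FeatureCollection", features: fixed });

var path = d3.geoPath().projection(projection);
```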
Upvotes: 4 [selected_answer]<issue_comment>username_2: Here's a quick fix to your problem: the projection needs a little tuning. Also, path has `fill:#000` by default, and `stroke:#FFF` could make it more legible.
```
var width = 700, height = 400;
var svg = d3.select(".graph").append("svg")
.attr("viewBox", "0 0 " + (width) + " " + (height))
.style("max-width", "700px")
.style("margin", "10px auto");
d3.json("mercator_files/83.json", function (error, mapData) {
var features = mapData.features;
var center = d3.geoCentroid(mapData);
//arbitrary
var scale = 7000;
var offset = [width/2, height/2];
var projection = d3.geoMercator().scale(scale).center(center)
.translate(offset);
var path = d3.geoPath().projection(projection);
svg.append("g")
.attr("class", "region")
.selectAll("path")
.data(features)
.enter()
.append("path")
.attr("d", path)
});
```
Upvotes: -1
|
2018/03/15
| 958 | 3,134 |
<issue_start>username_0: I'm running the below block of code on my computer.
```
/* Example code for Think OS.
Copyright 2014 <NAME>
License: GNU GPLv3
*/
#include
#include <stdio.h>
#include <stdlib.h>
int main ()
{
int var2 = 5;
void \*p = malloc(128);
void \*p2 = malloc(128);
char \*s = "Hello, World";
printf ("Address of main is %p\n", main);
printf ("Address of var1 is %p\n", &var1);
printf ("Address of var2 is %p\n", &var2);
printf ("p points to %p\n", p);
printf ("p2 points to %p\n", p2);
printf ("s points to %p\n", s);
return 0;
}
```
Here is a table of the resulting addresses:
```
| Stack | var2 | 0x7fff58cf28d8 |
| Heap | p2 | 0x7fb4c2d032d0 |
| Heap | p | 0x7fb4c2d03250 |
| Globals | var1 | 0x106f0e020 |
| Constants | "Hello, World" | 0x106f0df34 |
| Code | main | 0x106f0de30 |
```
My mental model is that the heap should start right after the Globals section and grow up in address space. It grows up, but I don't understand the gap.
**Why is there such a big gap between `var1` and `p`?**
gdb doesn't have a great tool to visualize the entire address space (so if you have tools that could help I'm open to suggestions).<issue_comment>username_1: I messed up my copy and paste.
Running the code again I get something like this:
```
Address of main is 0x4005d6
Address of var1 is 0x60104c
Address of var2 is 0x7fffd328785c
p points to 0x1d1f010
p2 points to 0x1d1f0a0
s points to 0x400744
```
which immediately made me realize my mistake. In the table displayed in my question I had copied the address of the *variable* `p`, not what `p` *points* to.
Edit: still open to tools for viewing memory. Something similar to [this](https://stackoverflow.com/questions/19748866/how-to-print-memory-in-0xb0987654-using-lldb) Xcode tool, but preferably a CLI or something that can attach to C processes.
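One low-tech option on Linux (an assumption — the platform wasn't stated) is to read the kernel's own view of the address space, either with `cat /proc/<pid>/maps` from a shell or from inside the process, as in this sketch:

```
/* Dump this process's memory map (Linux-specific). */
#include <stdio.h>

int main(void)
{
    FILE *maps = fopen("/proc/self/maps", "r");
    int c;

    if (maps == NULL)
        return 1;
    while ((c = fgetc(maps)) != EOF)
        putchar(c);
    fclose(maps);
    return 0;
}
```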
Upvotes: 0 <issue_comment>username_2: You do not specify your operating system. However, there are multiple forces at work here.
* Linker
* Loader
* Operating System
The linker determines the layout of the data defined by the program. That includes, all static data, code, and global variables.
The loader positions the segments defined by the linker in memory.
The combination of linker and loader then give the address of your main, var1, and data referenced by s. Traditionally, the loader tries to place the application as as low in the address space as possible. However, it is becoming common for loads to place code at random locations.
The loader usually allocates the initial stack for the process as well. That gives the location of var2 and s. Stacks generally grow downwards so the loader tries to place them as far away from the program data as possible to keep them from colliding.
The operating system allocates dynamic memory to process. Heap is just memory. You can have multiple heaps and heaps split over different locations. It just so happens when your heap manager allocates pages from the operating system, it is getting them at high locations in memory.
Upvotes: 2 [selected_answer]
|
2018/03/15
| 335 | 1,337 |
<issue_start>username_0: I am trying to build a fitness app, and I would like to build the user's profile within the app while making it as quick and easy as possible for the user. I am able to extract the user's stats such as height, body mass, birth date etc. from HealthKit, but is there a way for me to extract the user's full name, since the iOS Health app has this on its profile page?
I have sifted through the documentation, but so far I haven't found anything.<issue_comment>username_1: No, there is no API for this, AFAIK.
You can ask for address book access and read the 'me' contact.
Upvotes: 2 [selected_answer]<issue_comment>username_2: Well you could go through all the contacts in the AddressBook and see if any of them are marked with the owner flag.
Just be aware that doing this will pop up the "this app wants access to the address book" message. Also, Apple isn't very keen on these kinds of things: the app review guide specifies that an app cannot use personal information without the user's permission.
You will get the username like this:
```
let username = NSUserName()
```
and you can use same user name for your app.
Upvotes: 0 <issue_comment>username_3: Yeah, I don’t want to meddle with the address book and freak the user out. I’ll probably just ask them to enter their name instead.
Upvotes: 0
|
2018/03/15
| 739 | 2,897 |
<issue_start>username_0: I am trying to provide memory wrappers on CentOS and using clang compiler/linker. I wrote wrappers for the allocation functions (malloc et al) and rerouted the calls using -Wl,-wrap,malloc.
This all works fine and I can see it in action.
`void* mem = malloc(10); // routes to __wrap_malloc
free(mem);// routes to __wrap_free`
However, the problem I am seeing is that any memory allocated within libc is not being routed to my wrapper but the application is making the free call that gets intercepted (and crash as a result). For example,
`char* newStr = strdup("foo"); // The internal malloc in libc does not come to the wrapper
free(newStr); // The free call makes it to the wrapper`
My program is in C++. I created a mallocimpl.cpp and did something like
`extern "C"{
void* __wrap_malloc(size_t size)
{
// Route memory via custom memory allocator
}
//Similarly, __wrap_calloc, __wrap_realloc, __wrap_memalign and __wrap_free`
Any ideas what I am doing wrong? Do I need any special compiler/linker flags?
Thanks in advance.<issue_comment>username_1: The only way to guarantee matching `malloc` / `free` would be to recompile everything, including libc. This is probably not practical for you, so your best option is to somehow track the memory allocated by your `malloc` wrapper and then appropriately either deallocate it yourself or invoke `__real_free` in your `free` wrapper based on how the memory was allocated.
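A sketch of one way to do that tracking (my illustration, not from the answer): tag your own allocations with a magic header so the free wrapper can recognize them. Note the check is heuristic — reading before a foreign pointer can misfire — but it shows the idea:

```
#include <stddef.h>
#include <stdint.h>

void *__real_malloc(size_t size);
void  __real_free(void *ptr);

#define WRAP_MAGIC 0xC0FFEE42u
#define HDR 16 /* keep 16-byte alignment for the caller */

void *__wrap_malloc(size_t size)
{
    char *p = __real_malloc(size + HDR);
    if (p == NULL)
        return NULL;
    *(uint32_t *)p = WRAP_MAGIC;
    return p + HDR;
}

void __wrap_free(void *ptr)
{
    if (ptr == NULL)
        return;
    char *p = (char *)ptr - HDR;
    if (*(uint32_t *)p == WRAP_MAGIC)
        __real_free(p);   /* one of ours: undo the header offset */
    else
        __real_free(ptr); /* allocated behind our back, e.g. inside libc */
}
```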
Upvotes: 0 <issue_comment>username_2: There were some special "hooks" in glibc (`__malloc_hook`, `__realloc_hook`, `__free_hook`, `__memalign_hook`) to catch all mallocs of glibc: <https://www.gnu.org/software/libc/manual/html_node/Hooks-for-Malloc.html>
>
> The GNU C Library lets you modify the behavior of malloc, realloc, and free by specifying appropriate hook functions. You can use these hooks to help you debug programs that use dynamic memory allocation, for example.
>
>
>
Hooks are unsafe and marked as deprecated in the man pages. Some variants are listed at [An alternative for the deprecated `__malloc_hook` functionality of glibc](https://stackoverflow.com/questions/17803456/)
Also, check how alternative mallocs like `jemalloc`, `tcmalloc` and other implements their "preloading/linking of special library" to replace glibc malloc.
Upvotes: 1 <issue_comment>username_3: The recommended way to replace the glibc `malloc` implementation is ELF symbol interposition:
* [Replacing `malloc`](https://www.gnu.org/software/libc/manual/html_node/Replacing-malloc.html)
This way, you do not have to recompile everything, including glibc, and your `malloc` replacement will still be called once glibc removes the `malloc` hooks.
The `__wrap` approach does not work without recompiling (or at least rewriting) everything because all other libraries (including glibc) will use non-wrapped symbols.
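For illustration, a minimal interposing library might look like this (my sketch, not code from the glibc manual; a production version must also cope with `dlsym` itself allocating):

```
#define _GNU_SOURCE
#include <dlfcn.h>
#include <stddef.h>

static void *(*real_malloc)(size_t);
static void (*real_free)(void *);

void *malloc(size_t size)
{
    if (real_malloc == NULL)
        real_malloc = (void *(*)(size_t))dlsym(RTLD_NEXT, "malloc");
    /* custom bookkeeping would go here */
    return real_malloc(size);
}

void free(void *ptr)
{
    if (real_free == NULL)
        real_free = (void (*)(void *))dlsym(RTLD_NEXT, "free");
    real_free(ptr);
}
```

Built with something like `gcc -shared -fPIC -o mymalloc.so mymalloc.c -ldl` and run via `LD_PRELOAD=./mymalloc.so ./app`, this should also catch the allocation inside `strdup`, because glibc's own calls resolve through the same public symbol.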
Upvotes: 4 [selected_answer]
|
2018/03/15
| 607 | 2,295 |
<issue_start>username_0: If you have an `IEnumerable<IGrouping<TKey, TElement>>`, say from
```
var ans = src.GroupBy(s => s.Field);
```
Is there a better way (rarely used LINQ method/variation) to convert to an `IEnumerable<IEnumerable<TElement>>` than nested `Select`s like so:
```
var ans2 = ans.Select(g => g.Select(s => s));
```
For those that aren't understanding,
```
var anst = ans2.Select(g => g.GetType().ReflectedType).All(t => t == typeof(Enumerable));
```
should return `anst == true`.
|
2018/03/15
| 629 | 1,804 |
<issue_start>username_0: For a camera movement in three.js I need to calculate point `C`, in order to move the camera from point `A` to a certain distance `dist` from point `B`.
<issue_comment>username_1: So you need to calculate the coordinates of point `C`, given that it lies on the line between `B` and `A` at the given distance from `B`? This is pretty straightforward using the following steps:
* Calculate the vector from `B` to `A` (this will just be `A - B`).
* Normalize that vector (make it's length 1).
* Multiply that by the distance you want.
* Add that vector to the point `B`.
So a short javascript example:
```js
const A = [2, 1, 4];
const B = [9, 4, 2];
const dist = 3;
function addVectors(v1, v2) {
return v1.map((x, i) => x + v2[i]);
}
function scalarMultiply(v, c) {
return v.map(x => x*c);
}
function getLength(v) {
return Math.hypot(...v);
}
const vecB2A = addVectors(A, scalarMultiply(B, -1)); // Step 1
const normB2A = scalarMultiply(vecB2A, 1/getLength(vecB2A)); // Step 2
const distB2A = scalarMultiply(normB2A, dist); // Step 3
const C = addVectors(B, distB2A); // Final step
console.log(C);
```
Upvotes: 1 <issue_comment>username_2: Point C is equal to point B minus 'dist' times a unit vector whose direction is AB. So it is quite easy:
Vector v from A to B is equal to (xB-xA, yB-yA, zB-zA) / distance(AB).
Then C = B - d*v, where d is the distance from B you want C to be.
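A quick numeric sanity check of that (a sketch with made-up points):

```
// A = (0,0,0), B = (10,0,0), d = 3 -> C should be (7,0,0)
const A = [0, 0, 0], B = [10, 0, 0], d = 3;
const len = Math.hypot(B[0] - A[0], B[1] - A[1], B[2] - A[2]);
const C = B.map((b, i) => b - d * (b - A[i]) / len); // B - d * unit(AB)
console.log(C); // [7, 0, 0]
```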
Upvotes: 0 <issue_comment>username_3: three.js has methods to do that very easily.
Assuming `a`, `b`, and `c` are instances of `THREE.Vector3()`,
```
a.set( 2, 1, 4 );
b.set( 9, 4, 2 );
c.subVectors( a, b ).setLength( dist ).add( b );
```
three.js r.91
Upvotes: 4 [selected_answer]
|
2018/03/15
| 1,018 | 2,865 |
<issue_start>username_0: I am trying to do update on xml file based on condition using AWK Script. Could anyone assist me on this?
**students.xml**
```
1
A
75
2
B
35
1
C
94
```
**Code I tried so far**
I am able to extract tag values using below code
```
BEGIN { RS="<[^>]+>" }
{ print RT, $0 }
```
This prints all the tag and values as expected.
I want to update the `result` tag to **pass** if **mark > 40**, else **fail**.
**Output**
```
1
A
75
pass
2
B
35
fail
1
C
94
pass
```
Could anyone assist me on this?<issue_comment>username_1: Don't try to parse XML with [awk](/questions/tagged/awk "show questions tagged 'awk'"), instead use a real parser :
[](https://i.stack.imgur.com/TZEGp.png) the XML file is edited *on the fly*!
With [perl](/questions/tagged/perl "show questions tagged 'perl'") :
```
#!/usr/bin/env perl
# edit file.xml file in place
use strict; use warnings;
use XML::LibXML;
my $xl = XML::LibXML->new();
my $xml = $xl->load_xml(location => '/tmp/file.xml') ;
for my $node ($xml->findnodes('//student/result')) {
my $mark = $node->findnodes('../mark/text()')->string_value;
$node->removeChildNodes();
if ($mark > 40) {
$node->appendText('pass');
}
else {
$node->appendText('fail');
}
}
$xml->toFile('/tmp/file.xml');
```
Modified file :
---------------
```
xml version="1.0"?
1
A
75
pass
2
B
35
fail
1
C
94
pass
```
Upvotes: 3 [selected_answer]<issue_comment>username_2: I would also recommend avoiding a non-`XML parser/processor` approach here:
If you don't like `perl` you can use a full XML technology approach by using `XSLT`:
**INPUT:**
```
$ more students.xml
::::::::::::::
students.xml
::::::::::::::
1
A
75
2
B
35
1
C
94
```
**XSLT stylesheet:**
```
pass
fail
```
**COMMAND:**
```
$ xsltproc --output students_grade.xml students.xsl students.xml
```
**OUTPUT:**
```
more students_grade.xml
xml version="1.0" encoding="UTF-8"?
1
A
75
pass
2
B
35
fail
1
C
94
pass
```
Upvotes: 2 <issue_comment>username_3: Another option is to use the [`ed` (edit) command](http://xmlstar.sourceforge.net/doc/UG/xmlstarlet-ug.html#idm47077139594320) of [xmlstarlet](http://xmlstar.sourceforge.net/doc/UG/xmlstarlet-ug.html)...
```
xmlstarlet ed -L -u "//student[mark >= 40]/result" -v "pass" -u "//student[40 > mark]/result" -v "fail" students.xml
```
***CAUTION:*** The `-L` in the command line will edit the file inplace. Be sure to remove it if that is not the behavior you want.
You can also use XSLT 1.0 with xmlstartlet (the [`tr` (transform) command](http://xmlstar.sourceforge.net/doc/UG/xmlstarlet-ug.html#idm47077139602800))...
**update.xsl**
```
pass
fail
```
**command line**
```
xmlstarlet tr update.xsl students.xml
```
Upvotes: 3
|
2018/03/15
| 799 | 2,094 |
<issue_start>username_0: I am trying to use `mutate` to create a new column with values based on a specific column.
Example final data frame (I am trying to create `new_col`):
```
x = tibble(colA = c(11, 12, 13),
colB = c(91, 92, 93),
col_to_use = c("colA", "colA", "colB"),
new_col = c(11, 12, 93))
```
I would like to do something like:
```
x %>% mutate(new_col = col_to_use)
```
Except instead of column contents, I would like to transform them to a variable. I started with:
```
col_name = "colA"
x %>% mutate(new_col = !!as.name(col_name))
```
That works with a static variable. However, I have been unable to change the variable to represent the column. How do I take a column name based on contents of a different column?
This question is basically the opposite of this: [dplyr - mutate: use dynamic variable names](https://stackoverflow.com/questions/26003574/dplyr-mutate-use-dynamic-variable-names). I wasn't able to adapt the solution to my problem.<issue_comment>username_1: I'm not sure how to do it with `tidyverse` idioms alone (though I assume there's a way). But here's a method using `apply`:
```
x$new_col = apply(x, 1, function(d) {
d[match(d["col_to_use"], names(x))]
})
```
>
>
> ```
> colA colB col_to_use new_col
> 1 11 91 colA 11
> 2 12 92 colA 12
> 3 13 93 colB 93
>
> ```
>
>
Or, putting the `apply` inside `mutate`:
```
x = x %>%
mutate(new_col = apply(x, 1, function(d) {
d[match(d["col_to_use"], names(x))]
}))
```
Upvotes: 2 <issue_comment>username_2: We can use `imap_dbl` and `pluck` from the [purrr](/questions/tagged/purrr "show questions tagged 'purrr'") package to achieve this task.
```
library(tidyverse)
x <- tibble(colA = c(11, 12, 13),
colB = c(91, 92, 93),
col_to_use = c("colA", "colA", "colB"))
x2 <- x %>%
mutate(new_col = imap_dbl(col_to_use, ~pluck(x, .x, .y)))
x2
# # A tibble: 3 x 4
# colA colB col_to_use new_col
#
# 1 11. 91. colA 11.
# 2 12. 92. colA 12.
# 3 13. 93. colB 93.
```
Upvotes: 3
|
2018/03/15
| 880 | 2,546 |
<issue_start>username_0: My current query:
```
select users.id as user_id, opportunities.id as op_id, opportunities.title, certificates.id as cert_id from opportunities
join opportunity_certificates on opportunities.id=opportunity_certificates.opportunity_id
join certificates on opportunity_certificates.certificate_id=certificates.id
join user_certificates on certificates.id=user_certificates.certificate_id
join users on user_certificates.user_id=users.id
where opportunity_certificates.is_required = 1 and
opportunities.id = 1
```
This produces the table on the picture below.
The cert_id column can have values from 1 to 7, depending on the `opportunities.id`. In the table below, I want the query to return only the rows which have the same user_id but different cert_id, 1 and 2.
If the table had 3 different cert_id values, I would want it to return only the rows which have the same user_id but different cert_id, 1, 2 and 3.
When cert_id has only one value, the query should return all the records with that one value in cert_id. Basically, it should show all users who have all required certificates.
The query has to be in the current format. I experimented with
```
group by users.id
having count(*) >
```
but I don't know how to make that comparison dynamic, relative to the count of distinct values in the `cert_id` column.
[](https://i.stack.imgur.com/EMHC0.png)
|
2018/03/15
| 1,582 | 4,191 |
<issue_start>username_0: I am trying to work with a data set that requires significant cleaning. I have one subject name that I cannot seem to remove the leading white space from.
Example data:
```
Data <- dput(Data)
structure(list(Teacher = structure(c(1L, 1L, 1L, 1L, 1L, 1L,
1L, 1L, 1L, 1L, 1L, 2L, 2L, 2L, 2L, 2L, 2L, 2L, 2L, 2L, 2L, 2L
), .Label = c("Please.rate.teacher:.JOHN.DOE .Overall.rating.for.teacher",
"Please.rate.teacher: Jane.Doe.Overall.rating.for.teacher"), class = "factor"),
Overall_Rating = c(5L, 4L, 5L, 4L, 4L, 5L, 4L, 4L, 4L, 4L,
3L, 5L, 4L, 4L, 3L, 4L, 4L, 4L, 4L, 4L, 4L, 3L)), .Names = c("Teacher",
"Overall_Rating"), class = "data.frame", row.names = c(NA, -22L
))
```
My attempt at cleaning:
```
Data_clean <- Data %>%
mutate(Teacher = as.character(Teacher),
Teacher = gsub("Please.rate.teacher|.Overall.rating.for.teacher|[:]", "", Teacher),
Teacher = gsub("[.]", " ", Teacher),
Teacher = trimws(Teacher),
Teacher = tolower(Teacher), Teacher = tools::toTitleCase(Teacher))
```
Results in remaining leading and trailing white space, which also breaks the title case for the second name:
```
unique(Data_clean$Teacher)
[1] "<NAME> " " <NAME>"
```
The first name still has trailing white space and the second has leading white space.
How can I remove that?<issue_comment>username_1: What about this?
```
Data_clean <- Data %>%
mutate(Teacher = gsub("Please.rate.teacher|\\s*\\.Overall.rating.for.teacher|:", "", Teacher),
Teacher = gsub("\\.", " ", Teacher),
Teacher = trimws(Teacher),
Teacher = tolower(Teacher), Teacher = tools::toTitleCase(Teacher))
unique(Data_clean$Teacher);
#[1] "<NAME>" "<NAME>"
```
Explanation: Replace optional (`>=0`) whitespaces occurring before `".Overall.rating..."` in `Teacher`.
---
Sample data
-----------
```
Data <- structure(list(Teacher = structure(c(1L, 1L, 1L, 1L, 1L, 1L,
1L, 1L, 1L, 1L, 1L, 2L, 2L, 2L, 2L, 2L, 2L, 2L, 2L, 2L, 2L, 2L
), .Label = c("Please.rate.teacher:.JOHN.DOE .Overall.rating.for.teacher",
"Please.rate.teacher: Jane.Doe.Overall.rating.for.teacher"), class = "factor"),
Overall_Rating = c(5L, 4L, 5L, 4L, 4L, 5L, 4L, 4L, 4L, 4L,
3L, 5L, 4L, 4L, 3L, 4L, 4L, 4L, 4L, 4L, 4L, 3L)), .Names = c("Teacher",
"Overall_Rating"), class = "data.frame", row.names = c(NA, -22L
))
```
Upvotes: 0 <issue_comment>username_2: Here is a completely reproducible example with `stringr` and `str_trim` in particular since I don't know why `trimws` isn't working for you. Your posted code gave me the same output, correctly changing the case to title and removing the spaces.
```r
data <- structure(list(Teacher = structure(c(1L, 1L, 1L, 1L, 1L, 1L,
1L, 1L, 1L, 1L, 1L, 2L, 2L, 2L, 2L, 2L, 2L, 2L, 2L, 2L, 2L, 2L
), .Label = c("Please.rate.teacher:.JOHN.DOE .Overall.rating.for.teacher",
"Please.rate.teacher: Jane.Doe.Overall.rating.for.teacher"), class = "factor"),
Overall_Rating = c(5L, 4L, 5L, 4L, 4L, 5L, 4L, 4L, 4L, 4L,
3L, 5L, 4L, 4L, 3L, 4L, 4L, 4L, 4L, 4L, 4L, 3L)), .Names = c("Teacher",
"Overall_Rating"), class = "data.frame", row.names = c(NA, -22L
))
library(tidyverse)
data %>%
mutate(
Teacher = Teacher %>%
str_remove_all("Please.rate.teacher:|.Overall.rating.for.teacher") %>%
str_replace_all("\\.", " ") %>%
str_trim() %>%
str_to_title()
) %>%
`[[`(1) %>%
unique()
#> [1] "<NAME>" "<NAME>"
```
Created on 2018-03-15 by the [reprex package](http://reprex.tidyverse.org) (v0.2.0).
Upvotes: 2 [selected_answer]<issue_comment>username_3: I suspect your data contains a non-ASCII space like `"\u00A0"`. The `trimws` function will only remove ASCII space characters.
Try running `utf8::utf8_print(unique(Data_clean$Teacher), utf8 = FALSE)` to see if this is the case.
To handle non-ASCII spaces, replace `trimws(x)` in your code with
```
gsub("(^[[:space:]]*)|([[:space:]]*$)", "", x)
```
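A quick illustration of the difference, using a hypothetical string padded with U+00A0 no-break spaces:

```
x <- "\u00A0John Doe\u00A0"
trimws(x)                                        # the no-break spaces survive
gsub("(^[[:space:]]*)|([[:space:]]*$)", "", x)   # removes them (in a UTF-8 locale)
```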
Upvotes: 2
|
2018/03/15
| 909 | 3,350 |
<issue_start>username_0: I have tried to implement spring external configurations using Config Server. It is working fine for the very first time when the application is started but any changes to the properties file are not being reflected. I tried to use /refresh endpoint to refresh my properties on the fly but it doesn't seem to be working. Any help on this would be greatly helpful.
I tried POSTing to localhost:8080/refresh, but I am getting a 404 error response.
Below is the code of my application class
```
@SpringBootApplication
public class Config1Application {
public static void main(String[] args) {
SpringApplication.run(Config1Application.class, args);
}
}
@RestController
@RefreshScope
class MessageRestController {
@Value("${message:Hello default}")
private String message;
@RequestMapping("/message")
String getMessage() {
return this.message;
}
}
```
and POM file is
```
<parent>
  <groupId>org.springframework.boot</groupId>
  <artifactId>spring-boot-starter-parent</artifactId>
  <version>2.0.0.RELEASE</version>
</parent>
<properties>
  <project.build.sourceEncoding>UTF-8</project.build.sourceEncoding>
  <project.reporting.outputEncoding>UTF-8</project.reporting.outputEncoding>
  <java.version>1.8</java.version>
  <spring-cloud.version>Finchley.M8</spring-cloud.version>
</properties>
<dependencies>
  <dependency>
    <groupId>org.springframework.cloud</groupId>
    <artifactId>spring-cloud-starter-config</artifactId>
  </dependency>
  <dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-actuator</artifactId>
    <scope>compile</scope>
  </dependency>
  <dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-web</artifactId>
  </dependency>
  <dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-test</artifactId>
    <scope>test</scope>
  </dependency>
</dependencies>
<dependencyManagement>
  <dependencies>
    <dependency>
      <groupId>org.springframework.cloud</groupId>
      <artifactId>spring-cloud-dependencies</artifactId>
      <version>${spring-cloud.version}</version>
      <type>pom</type>
      <scope>import</scope>
    </dependency>
  </dependencies>
</dependencyManagement>
<build>
  <plugins>
    <plugin>
      <groupId>org.springframework.boot</groupId>
      <artifactId>spring-boot-maven-plugin</artifactId>
    </plugin>
  </plugins>
</build>
<repositories>
  <repository>
    <id>spring-milestones</id>
    <name>Spring Milestones</name>
    <url>https://repo.spring.io/milestone</url>
    <snapshots>
      <enabled>false</enabled>
    </snapshots>
  </repository>
</repositories>
```
and bootstrap.properties
```
spring.application.name=xxx
spring.cloud.config.uri=https://xxxxxx.com
management.security.enabled=false
endpoints.actuator.enabled=true
```<issue_comment>username_1: The endpoint is now `/actuator/refresh` for Spring 2 and greater
**From the comments:**
* You do need to have the `management.endpoints.web.exposure.include=refresh` set in the `bootstrap.properties/bootstrap.yml`
**Note:** If you're new to `Spring-Cloud` and not quite sure of what all keywords can go in `web.exposure` you can set it to `*` (`management.endpoints.web.exposure.include=*`) to have all exposed and you can get to know the endpoints and their restrictions later.
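Putting the pieces together, a minimal `bootstrap.properties` for Spring Boot 2.x could look like this (app name and config-server URL taken from the question):

```
spring.application.name=xxx
spring.cloud.config.uri=https://xxxxxx.com
management.endpoints.web.exposure.include=refresh
```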
Upvotes: 6 [selected_answer]<issue_comment>username_2: It worked for me after adding the property
```
management.endpoints.web.exposure.include=*
```
in `bootstrap.properties`, and changing the URL to `/actuator/refresh` for Spring versions above 2.0.0.
For Spring version 1.0.5, the URL is `/refresh`.
Upvotes: 2 <issue_comment>username_3: If you have issues with accepting formurlencoded in SPRING 2.0>, use :
curl -H "Content-Type: application/json" -d {} <http://localhost:port/actuator/refresh>
instead of:
curl -d {} <http://localhost:port/refresh>
which was accepted in SPRING 1.\*
Upvotes: -1 <issue_comment>username_4: For **YAML** files, the property's value needs to be wrapped inside double quotes :
```
# Spring Boot Actuator
management:
endpoints:
web:
exposure:
include: "*"
```
Note: Ensure you use the right *endpoints* keyword (with the 's') for this property, since there is also a different property without the 's': "management.endpoint.health.... " .
Upvotes: 2 <issue_comment>username_5: Note: - It's a POST request (not GET)
I've posted the solution here
[https://stackoverflow.com/a/74465108/2171938][1]
Upvotes: 0
|
2018/03/15
| 571 | 1,953 |
<issue_start>username_0: Is there any .NET Core version of the DocuSign.eSign NuGet package, or even source code I can fork?
[DocuSign.eSign NuGet package](https://www.nuget.org/packages/DocuSign.eSign.dll/)
[Source code for C# client](https://github.com/docusign/docusign-csharp-client)
|
2018/03/15
| 2,359 | 7,993 |
<issue_start>username_0: I have a file called simple\_example.py, which consists of 2 functions:
```
# import the necessary packages
import argparse
class simple:
@staticmethod
def func1():
# construct the argument parse and parse the arguments
ap = argparse.ArgumentParser()
ap.add_argument("-n", "--name", help="name of the user", default='host')
ap.add_argument('-num', '--number', required=True, help='choose a number')
args = vars(ap.parse_args())
# display a friendly message to the user
print("Hi there {}, it's nice to meet you! you chose {}".format(args['name'], args['age']))
@staticmethod
def func2():
# construct the argument parse and parse the arguments
ap = argparse.ArgumentParser()
ap.add_argument("-n", "--name", help="name of the user", default='host')
ap.add_argument('-num', '--number', required=True, help='choose a number')
ap.add_argument("-g", "--greet", help="say greetings", default='hello')
args = vars(ap.parse_args())
# display a friendly message to the user
print("{} there {}, it's nice to meet you! you chose {}".format(args['greet'], args['name'], args['age']))
```
I'd like to be able to call either func1() or func2() from the command line, so, I created another file called pyrun.py from this [link](https://stackoverflow.com/questions/29130994/python-run-function-from-the-command-line/29130994#29130994)
```
# !/usr/bin/env python
# make executable in bash chmod +x PyRun
import sys
import inspect
import importlib
import os
if __name__ == "__main__":
cmd_folder = os.path.realpath(os.path.abspath(os.path.split(inspect.getfile(inspect.currentframe()))[0]))
if cmd_folder not in sys.path:
sys.path.insert(0, cmd_folder)
# get the second argument from the command line
methodname = sys.argv[1]
# split this into module, class and function name
modulename, classname, funcname = methodname.split(".")
# get pointers to the objects based on the string names
themodule = importlib.import_module(modulename)
theclass = getattr(themodule, classname)
thefunc = getattr(theclass, funcname)
# pass all the parameters from the third until the end of what the function needs & ignore the rest
args = inspect.getargspec(thefunc)
print(args)
```
However, `args` comes back as `ArgSpec(args=[], varargs=None, keywords=None, defaults=None)`, i.e. an empty parameter list.
1. How can I extract the parameters from either func1 or func2?
2. Is there a better way to run either func1 or func2 from the command line?<issue_comment>username_1: If I put your first block of code in a file, I can import it into a `ipython` session and run your 2 functions:
```
In [2]: import stack49311085 as app
In [3]: app.simple
Out[3]: stack49311085.simple
```
`ipython` tab expansion (which uses some form of `inspect`) shows me that the module has a `simple` class, and the class itself has two static functions.
I can call `func1`, and get an `argparse` error message:
```
In [4]: app.simple.func1()
usage: ipython3 [-h] [-n NAME] -num NUMBER
ipython3: error: the following arguments are required: -num/--number
An exception has occurred, use %tb to see the full traceback.
SystemExit: 2
```
Similarly for `func2`:
```
In [7]: app.simple.func2()
usage: ipython3 [-h] [-n NAME] -num NUMBER [-g GREET]
ipython3: error: the following arguments are required: -num/--number
An exception has occurred, use %tb to see the full traceback.
SystemExit: 2
```
`parse_args` by default parses the `sys.argv[1:]` list, which obviously is not tailored to this parser's requirements.
```
def foo(argv=None):
parser = ....
....
    args = parser.parse_args(argv)
return args
```
is a more useful wrapper. With this I can pass a test `argv` list, and get back the parsed `Namespace`. If I don't give it such a list, it will use the `sys.argv` default. When testing a parser I like to return and/or display the whole `Namespace`.
I haven't used `inspect` enough to try to figure out what you are trying to do with it, or how to correct it. You don't need `inspect` to run code in an imported module like this.
---
I can test your imported parser by modifying the `sys.argv`
```
In [8]: import sys
In [9]: sys.argv
Out[9]:
['/usr/local/bin/ipython3',
'--pylab',
'--nosep',
'--term-title',
'--InteractiveShellApp.pylab_import_all=False']
In [10]: sys.argv[1:] = ['-h']
In [11]: app.simple.func2()
usage: ipython3 [-h] [-n NAME] -num NUMBER [-g GREET]
optional arguments:
-h, --help show this help message and exit
-n NAME, --name NAME name of the user
-num NUMBER, --number NUMBER
choose a number
-g GREET, --greet GREET
say greetings
An exception has occurred, use %tb to see the full traceback.
SystemExit: 0
```
Or following the `help`:
```
In [12]: sys.argv[1:] = ['-num=42', '-nPaul', '-gHI']
In [13]: app.simple.func2()
...
---> 30 print("{} there {}, it's nice to meet you! you chose {}".format(args['greet'], args['name'], args['age']))
KeyError: 'age'
```
Oops, there's an error in your code. You ask for `args['age']`, but didn't define a parser argument with that name. That's part of why I like to print the `args` `Namespace` - to make sure it is setting all the attributes that I expect.
---
Normally we don't use different parsers for different inputs. It's possible to do that based on your own test of `sys.argv[1]`, but keep in mind that that string will still be on the `sys.argv[1:]` list that your parser(s) read. Instead, write one parser that can handle the various styles of input. The `subparser` mentioned in the other answer is one option. Another is to base your action on the value of the `args.greet` attribute. If not used, it will be the default value.
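A sketch of that single-parser idea, using the argument names from the question (`args.greet` keeps its default when `-g` is not supplied):

```
args = parser.parse_args()
if args.greet != 'hello':   # -g/--greet was supplied with a non-default value
    print("{} there {}, you chose {}".format(args.greet, args.name, args.number))
else:
    print("Hi there {}, you chose {}".format(args.name, args.number))
```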
Upvotes: 0 <issue_comment>username_2: You probably want to use [sub-commands](https://docs.python.org/3/library/argparse.html#sub-commands). Here is an implementation of your example using sub-commands.
```
import argparse
def func1(args):
print("Hi there {}, it is nice to meet you! You chose {}.".format(args.name, args.number))
def func2(args):
print("{} there {}, it is nice to meet you! You chose {}.".format(args.greet, args.name, args.number))
#
# The top-level parser
#
parser = argparse.ArgumentParser('top.py', description='An example sub-command implementation')
#
# General sub-command parser object
#
subparsers = parser.add_subparsers(help='sub-command help')
#
# Specific sub-command parsers
#
cmd1_parser = subparsers.add_parser('cmd1', help='The first sub-command')
cmd2_parser = subparsers.add_parser('cmd2', help='The second sub-command')
#
# Assign the execution functions
#
cmd1_parser.set_defaults(func=func1)
cmd2_parser.set_defaults(func=func2)
#
# Add the common options
#
for cmd_parser in [cmd1_parser, cmd2_parser]:
cmd_parser.add_argument('-n', '--name', default='host', help='Name of the user')
cmd_parser.add_argument('-num', '--number', required=True, help='Number to report')
#
# Add command-specific options
#
cmd2_parser.add_argument('-g', '--greet', default='hello', help='Greeting to use')
#
# Parse the arguments
#
args = parser.parse_args()
#
# Invoke the function
#
args.func(args)
```
Example output:
```
$ python ./top.py cmd1 -n Mark -num 3
Hi there Mark, it is nice to meet you! You chose 3.
$ python ./top.py cmd2 -n Bob -num 7 -g Hello
Hello there Bob, it is nice to meet you! You chose 7.
```
And, of course, the help functions work for each of the sub-commands.
```
$ python ./top.py cmd2 -h
usage: top.py cmd2 [-h] [-n NAME] -num NUMBER [-g GREET]
optional arguments:
-h, --help show this help message and exit
-n NAME, --name NAME Name of the user
-num NUMBER, --number NUMBER
Number to report
-g GREET, --greet GREET
Greeting to use
```
Upvotes: 1
|
2018/03/15
| 2,341 | 7,627 |
<issue_start>username_0: I have been trying to test out submitting Pig jobs on AWS EMR following Amazon's [guide](https://docs.aws.amazon.com/emr/latest/ReleaseGuide/emr-pig-launch.html). I made the change to the Pig script to ensure that it can find piggybank.jar, as instructed by Amazon. When I run the script I get an ERROR 1070 indicating that one of the functions available in piggybank cannot be resolved. Any ideas on what is going wrong?
**Key part of error**
```
2018-03-15 21:47:08,258 ERROR org.apache.pig.PigServer (main): exception
during parsing: Error during parsing. Could not resolve
org.apache.pig.piggybank.evaluation.string.EXTRACT using imports: [,
java.lang., org.apache.pig.builtin., org.apache.pig.impl.builtin.]
Failed to parse: Pig script failed to parse: Failed to generate logical plan. Nested exception: org.apache.pig.backend.executionengine.ExecException: ERROR 1070: Could not resolve org.apache.pig.piggybank.evaluation.string.EXTRACT using imports: [, java.lang., org.apache.pig.builtin., org.apache.pig.impl.builtin.]
```
**The first part of the script is as follows:**
Line 26, referred to in the error, contains "EXTRACT("
```
register file:/usr/lib/pig/lib/piggybank.jar;
DEFINE EXTRACT org.apache.pig.piggybank.evaluation.string.EXTRACT;
DEFINE FORMAT org.apache.pig.piggybank.evaluation.string.FORMAT;
DEFINE REPLACE org.apache.pig.piggybank.evaluation.string.REPLACE;
DEFINE DATE_TIME org.apache.pig.piggybank.evaluation.datetime.DATE_TIME;
DEFINE FORMAT_DT org.apache.pig.piggybank.evaluation.datetime.FORMAT_DT;
--
-- import logs and break into tuples
--
raw_logs =
-- load the weblogs into a sequence of one element tuples
LOAD '$INPUT' USING TextLoader AS (line:chararray);
logs_base =
-- for each weblog string convert the weblong string into a
-- structure with named fields
FOREACH
raw_logs
GENERATE
FLATTEN (
EXTRACT(
line,
'^(\\S+) (\\S+) (\\S+) \\[([\\w:/]+\\s[+\\-]\\d{4})\\] "(.+?)" (\\S+) (\\S+) "([^"]*)" "([^"]*)"'
)
)
AS (
remoteAddr: chararray, remoteLogname: chararray, user: chararray, time: chararray,
request: chararray, status: int, bytes_string: chararray, referrer: chararray,
browser: chararray
)
;
```
|
2018/03/15
| 2,058 | 6,783 |
<issue_start>username_0: I am writing a function that builds and populates a tree-like data structure, using the `treelib` library. You create a tree as follows:
```
foo = Tree()
```
...and go from there, which is standard enough. Here is what I have, simplified:
```
def make_special_tree(tree, arg1, arg2):
tree = Tree()
other_stuff = things(arg1)
modify_tree(tree, other_stuff, arg2)
return tree
```
Here's the thing: let's say that I ultimately want a tree object called `blah`. If I do the following, the command runs without an error:
```
make_special_tree('blah', foo, bar)
```
...but when I type `blah` afterward, I get back `NameError: name 'blah' is not defined`. If I do the following, the command also runs without an error:
```
blah = make_special_tree('yoink', foo, bar)
```
...and when I type `blah` afterward, I get back the `Tree` object, which is what I want to see. `yoink` meanwhile remains undefined, like `blah` in the previous version.
Hence my question--and I can tell this is basic, but I cannot untangle this, in part because I am not sure how precisely to ask the question. As you can see, right now I have to create an instance of class Tree() and I *think* I have to feed my function an argument to do so. I think that `blah = make_special_tree(args)` is the correct way to format this, but how can I pass the variable `blah` as the name of the tree structure I'd like returned?
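For reference, the usual pattern here is to drop the name parameter entirely and bind the returned object at the call site; a variable name is not something you can pass into a function (a sketch based on the code above):

```
def make_special_tree(arg1, arg2):
    tree = Tree()                    # the tree is created inside the function
    other_stuff = things(arg1)
    modify_tree(tree, other_stuff, arg2)
    return tree

blah = make_special_tree(foo, bar)   # the assignment is what gives the result its name
```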
|
2018/03/15
| 1,814 | 5,809 |
<issue_start>username_0: I know that the first letters printed are b c d e f. Why is 'g' not printed? When i is 5, 'f' is printed. Then i is decremented to 4, so it should enter the for loop again. Instead it doesn't, despite i being less than 6 (strlen(arr)-1 = 6).
```
char* arr = "abcdefg"; //String
int i;
for (i = 1; i < strlen(arr)-1; i+=2) //i is incremented by 2.
{
printf("%c ", arr[i--]); //Here i is decremented
}
return 0;
```
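Tracing the loop makes the behaviour visible (a sketch assuming `strlen(arr) - 1 == 6`):

```
/* i = 1 -> prints 'b'; i-- gives 0; i += 2 gives 2  (2 < 6, continue)
   i = 2 -> prints 'c'; i-- gives 1; i += 2 gives 3  (net effect: i grows by 1)
   ...
   i = 5 -> prints 'f'; i-- gives 4; i += 2 gives 6  (6 < 6 is false, loop ends)
   arr[6] == 'g' is never printed because the i += 2 increment runs
   before the loop condition is checked again. */
```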
|
2018/03/15
| 988 | 3,915 |
<issue_start>username_0: I configured Firebase Remote Config A/B testing for Android, and we rolled it out on at least 10K devices.
For some reason, I see "0 users" in my A/B test after more than 24 hours.
Firebase GMS version is: 11.8.0
Should it show A/B participants in real-time or it's ok to see 0 users after 24 hours?
P.S: We are able to get AB test variants on test devices through Firebase Instance Id, it works well.
The simplest experiment which is running has only app package as a target, with no additional filters. And it shows 0 users as well.<issue_comment>username_1: **Finally, we found an answer!**
Maybe somebody will find it helpful:
For now, it happens (no data in the Firebase Remote Config A/B test experiment) if you have an activation event configured for the A/B test experiment.
If you have 2 different experiments, both will fail to get results even if you have an "activation event" configured in only 1 of them.
Additionally, Remote Config will not work either; you'll only be able to get default values.
We already reported to Google about, so they'll fix it at some point I hope.
**Another useful info which is really hard to get:**
* How long is it ok to see "0 Total Users" in experiment I've just
started?
It takes many hours before you can see any data in your experiment. We were able to see results only about 21 hours after the experiment started, so if you configured everything well, don't worry and wait for at least 24 hours. It will show 0 "Total Users" for many hours after the start.
* Should I use app versionName or versionCode in "Version" field of
experiment setup?
You should use versionName.
**Some useful info from support:**
* Firebase SDK
Make sure your users have the version of your app with the latest SDK.
* Since your experiment is with Remote Config
When activateFetched() is called, all events from that point on will be tagged with the experiment. If you have a goal or activation event that happens before activateFetched(), such as automatic events like first\_open, session\_start, etc., the experiment setup might be wrong.
* Are you using an Activation Event?
Make sure to call fetch() and activateFetched() before the activation event occurs.
* Experiment ID of the experiments (if support asks you about)
It's the number at the end of the URL while viewing experiment results.
**[This debugging log](https://firebase.google.com/docs/analytics/android/events#view_events_in_the_android_studio_debug_log)** could be useful to get what is going on
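For reference, that verbose Analytics logging is typically enabled with:

```
adb shell setprop log.tag.FA VERBOSE
adb shell setprop log.tag.FA-SVC VERBOSE
adb logcat -v time -s FA FA-SVC
```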
**Also:**
A good way to check whether your experiment is working is to target a specific version you haven't published yet and check the Remote Config logs with a fresh app install (or erase all app data & restart).
It should show a different variant every time you reinstall the app, since your Firebase Instance ID changes after an app reinstall/app data erase.
If you see the variants change, then the A/B test is running well.
*In your "build.gradle": don't forget to set the same versionName which you set in the experiment setup.*
Upvotes: 5 <issue_comment>username_2: In my case, I was receiving results of A/B testing, but suddenly they stopped appearing. That lasted for 7 days, and then the results appeared. The Firebase Support manager said:
>
> what I suspected here is just a delay in showing the result in the
> experiments
>
>
>
Additionally, she said that
>
> With that, I would suggest always using the latest SDK version and
> enabling Google Analytics data sharing.
>
>
>
In my case, I wasn't using the latest SDK version, but Google Analytics was enabled for "Benchmarking", "Technical Support", and "Account Specialists", except for "Google products & services". I believe these settings were enabled by default (the screenshot from Google Analytics):
[](https://i.stack.imgur.com/jfgN9.png)
Upvotes: 0
|
2018/03/15
| 781 | 2,349 |
<issue_start>username_0: I have to write pseudocode for an algorithm which sorts n 2D points that are randomly distributed within the unit circle. I need to sort them by their distance from the origin (0,0) (by increasing Euclidean distance to the circle's origin).
So I think they should be as a 2D array or a Map:
A = { (0.3 , 0.4) , (0.5 , 0.7) , (0.2 , 0.1) , (0.32 , 0.9) }
So here n = 4, we have 4 2D points.
First I thought to calculate the distance from origin (0,0) so (d = sqrt( (A[i].x)^2 + (A[i].y)^2) ) for each point, creating a 1-D array which can be easily sorted using any sorting algorithm, but then I found out that it can be sorted but I cannot have the 2D array sorted in the end, since in the end I just know the d(distance).
Can someone give me a hint on how I can start, or explain the basic points I need to go through, in order to sort such a system?<issue_comment>username_1: This question basically boils down to a normal sorting problem. You just need to use the appropriate comparison function.
In c++ for example you could achieve this by storing the x and y values inside a struct and then overloading the `<` operator.
```
struct coor
{
float x;
float y;
    bool operator < (const coor &rhs) const  // rhs: the point being compared against
    {
        return (x * x + y * y < rhs.x * rhs.x + rhs.y * rhs.y);
}
};
```
and then `std::sort` will work on a vector of coor structs.
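For example, a minimal usage sketch (C++11):

```
std::vector<coor> points = {{0.3f, 0.4f}, {0.5f, 0.7f}, {0.2f, 0.1f}};
std::sort(points.begin(), points.end());  // picks up coor::operator<
```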
Upvotes: 1 <issue_comment>username_2: In C++ you can store points as a `vector` of `pairs`. And use build-in `sort` function:
```
#include <iostream>
#include <vector>
#include <algorithm>
#include <utility>

using namespace std;

bool my_comp(const pair<double, double>& point1, const pair<double, double>& point2)
{
    return point1.first*point1.first + point1.second*point1.second < point2.first*point2.first + point2.second*point2.second;
}

int main()
{
    vector< pair<double, double> > points;
    points.push_back( make_pair(3.1, 4.1) );
    points.push_back( make_pair(0.9, 0.8) );
    points.push_back( make_pair(1.0, 1.0) );
    sort(points.begin(), points.end(), my_comp);
    for (int i = 0; i < 3; i++) {
        cout << points[i].first << " " << points[i].second << endl;
    }
    return 0;
}
```
Or, if you want to program in python, same goes for here. Use array of arrays and use build-in sort function:
```
def my_comp(point1):
return point1[0]*point1[0] + point1[1]*point1[1]
points = [ [3.1, 4.1], [0.9, 0.8], [1.0, 1.0] ]
points.sort(key=my_comp)
print(points)
```
Upvotes: 0
|
2018/03/15
| 728 | 2,132 |
<issue_start>username_0: I have 4 tables in SQL Server 2012. This is my diagram:
[](https://i.stack.imgur.com/Q7tG4.png)
I have this query:
```
SELECT
pc.Product_ID, d.Dept_ID, c.Category_ID, sc.SubCategory_ID
FROM
dbo.ProductConfiguration pc
INNER JOIN
dbo.SubCategory sc ON sc.SubCategory_ID = pc.SubCategory_ID
INNER JOIN
dbo.Category c ON c.Category_ID = sc.Category_ID
INNER JOIN
dbo.Department d ON d.Dept_ID = c.Dept_ID
WHERE
pc.Product_ID = 459218
```
What is the best join type (INNER, LEFT, RIGHT) to get these column values? I need to be careful with performance.
Thanks a lot
|
2018/03/15
| 2,160 | 4,098 |
<issue_start>username_0: I need help with plotting the dataframe below in a bar graph, which I'll add as well.
```
Month Base Advanced
2008-01-01 20.676043 20.358472
2008-02-01 -57.908706 -62.368464
2008-03-01 -3.130082 -5.876791
2008-04-01 20.844747 14.162446
2008-05-01 39.882740 42.315828
2008-06-01 -12.802920 -13.333419
2008-07-01 -49.299693 -39.843041
2008-08-01 -4.563942 10.995445
2008-09-01 -100.018700 -77.054218
2008-10-01 -42.056913 -30.485998
```
My current code, which isn't working great:
```
ggplot(ResidualsDataFrame,aes(x=Base,y=Advanced,fill=factor(Month)))+
geom_bar(stat="identity",position="dodge")+
scale_fill_discrete(name="Forecast",breaks=c(1, 2),
labels=c("Base", "Advanced"))+
xlab("Months")+ylab("Forecast Error")
```
This is what I'm trying to make.
Any help is kindly appreciated.
<issue_comment>username_1: One trick that helps is to change the data from "wide" to "long". Continuing with the `tidyverse` (since you're using `ggplot2`):
```
library(dplyr)
library(tidyr)
library(ggplot2)
x %>%
gather(ty, val, -Month)
# Month ty val
# 1 2008-01-01 Base 20.676043
# 2 2008-02-01 Base -57.908706
# 3 2008-03-01 Base -3.130082
# 4 2008-04-01 Base 20.844747
# 5 2008-05-01 Base 39.882740
# 6 2008-06-01 Base -12.802920
# 7 2008-07-01 Base -49.299693
# 8 2008-08-01 Base -4.563942
# 9 2008-09-01 Base -100.018700
# 10 2008-10-01 Base -42.056913
# 11 2008-01-01 Advanced 20.358472
# 12 2008-02-01 Advanced -62.368464
# 13 2008-03-01 Advanced -5.876791
# 14 2008-04-01 Advanced 14.162446
# 15 2008-05-01 Advanced 42.315828
# 16 2008-06-01 Advanced -13.333419
# 17 2008-07-01 Advanced -39.843041
# 18 2008-08-01 Advanced 10.995445
# 19 2008-09-01 Advanced -77.054218
# 20 2008-10-01 Advanced -30.485998
```
So plotting it is a little simpler:
```
x %>%
gather(ty, val, -Month) %>%
ggplot(aes(x=Month, weight=val, fill=ty)) +
geom_bar(position = "dodge") +
theme(legend.position = "top", legend.title = element_blank())
```
[](https://i.stack.imgur.com/2Fl12.png)
The data used:
```
x <- read.table(text=' Month Base Advanced
2008-01-01 20.676043 20.358472
2008-02-01 -57.908706 -62.368464
2008-03-01 -3.130082 -5.876791
2008-04-01 20.844747 14.162446
2008-05-01 39.882740 42.315828
2008-06-01 -12.802920 -13.333419
2008-07-01 -49.299693 -39.843041
2008-08-01 -4.563942 10.995445
2008-09-01 -100.018700 -77.054218
2008-10-01 -42.056913 -30.485998', header=TRUE, stringsAsFactors=FALSE)
x$Month <- as.Date(x$Month, format='%Y-%m-%d')
```
Upvotes: 3 [selected_answer]<issue_comment>username_2: Without easy access to your data to reproduce this, all I can do is provide some examples from one of the datasets I work with, so hopefully this will be useful. Method 1: ts.plot; Method 2: Plotly; Method 3: ggplot.
Method 1: I want to plot V17 & V18 together:
```
ts.plot(c(data1t$V17), gpars=list(col=c("black"), ylab="msec")) # first series
lines(data1t$V18,col="red") # second
```
[](https://i.stack.imgur.com/7iFNz.jpg)
Method 2: Plotly; V29 contains my x-coordinates for both V17 and V18
```
library(plotly)
plot_ly(x=~data1t$V29, mode='lines') %>%
  add_lines(y=~data1t$V17,
            line=list(color='rgb(205,12,24)')) %>%
  add_lines(y=~data1t$V18,
            line=list(color='rgb(12,24,205)'))
```
[](https://i.stack.imgur.com/cdgFE.jpg)
Method 3: ggplot; V29 contains my x-coordinates for both V17 and V18
```
data1t %>% arrange(V29) %>%
ggplot(aes(x=V29,y=value,color=variable)) +
geom_line(aes(y=V17,col='spkts')) +
geom_line(aes(y=V18,col='dpkts',
alpha=0.5))
```
[](https://i.stack.imgur.com/62ymY.jpg)
Upvotes: 0
|
2018/03/15
| 1,880 | 3,992 |
<issue_start>username_0: My goal is to have a slider refill itself when someone tries to lower it, but for some reason none of the code in my if statement will run.
```js
//Keeps the slider full
var slider = document.getElementById("myRange"); //Creates variable for slider position
var output = document.getElementById("demo"); //Creates variable for outputting Value
output.innerHTML= ("Value:") + slider.value;
slider.oninput = function() {
output.innerHTML = ("Value:") + slider.value;
};
//Raises the Value of slider if it isnt full
if(slider.value != 100){
console.log("Slider isnt full");
slider.value++;
};
```
```html
<h1>Loading Title...</h1>
<h2>My Awesomness:</h2>
<input type="range" min="0" max="100" value="100" id="myRange">
<p id="demo"></p>
```
PS: I have also tried a while loop, but that didn't work, so for now I'm just trying to get an if statement to work.
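For what it's worth, the `if` above runs only once, when the script first loads, not every time the slider moves; moving the check inside the `oninput` handler makes it run on every change (a sketch of that idea):

```js
slider.oninput = function() {
  output.innerHTML = "Value:" + slider.value;
  if (slider.value < 100) {   // slider.value is a string; < coerces it to a number
    slider.value = 100;       // refill the slider
  }
};
```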
|
2018/03/15
| 1,472 | 5,070 |
<issue_start>username_0: I'm using scalamock to mock this class:
```
class HttpService {
def post[In, Out]
(url: String, payload: In)
(implicit encoder: Encoder[In], decoder: Decoder[Out])
: Future[Out] = ...
...
}
```
...so my test class has a mock used like this:
```
val httpService = mock[HttpService]
(httpService.post[FormattedMessage, Unit](_ : String, _ : FormattedMessage) (_ : Encoder[FormattedMessage], _: Decoder[Unit]))
.expects("http://example.com/whatever",*, *, *)
.returning(Future.successful(()))
```
Apparently I have to write the whole mock function signature. If I only put the underscores in the signature, without the corresponding types, I get errors like this one:
```
[error] missing parameter type for expanded function ((x$1: , x$2, x$3, x$4) => httpService.post[FormattedMessage, Unit](x$1, x$2)(x$3, x$4))
[error] (httpService.post[FormattedMessage, Unit](\_, \_) (\_, \_))
^
```
What I don't like about this code is that the mock expectation is used in several places in the tests and this ugly signature is repeated all over the place but with different In/Out type parameters and expectations.
So I thought I would write a class
```
class HttpServiceMock extends MockFactory {
val instance = mock[HttpService]
def post[In, Out] = instance.post[In, Out](_ : String, _ : In) (_ : Encoder[In], _: Decoder[Out])
}
```
...and use it like this:
```
val httpService = new HttpServiceMock()
...
httpService.post[FormattedMessage, Unit]
.expects("http://example.com/whatever",*, *, *)
.returning(Future.successful(()))
```
...which compiles fine but when I run the tests I get the following error:
```
java.lang.NoSuchMethodException: com.myapp.test.tools.HttpServiceMock.mock$post$0()
at java.lang.Class.getMethod(Class.java:1786)
at com.myapp.controllers.SlackControllerSpec.$anonfun$new$3(SlackControllerSpec.scala:160)
at org.scalatest.OutcomeOf.outcomeOf(OutcomeOf.scala:85)
at org.scalatest.OutcomeOf.outcomeOf$(OutcomeOf.scala:83)
at org.scalatest.OutcomeOf$.outcomeOf(OutcomeOf.scala:104)
at org.scalatest.Transformer.apply(Transformer.scala:22)
at org.scalatest.Transformer.apply(Transformer.scala:20)
at org.scalatest.WordSpecLike$$anon$1.apply(WordSpecLike.scala:1078)
at org.scalatest.TestSuite.withFixture(TestSuite.scala:196)
at org.scalatest.TestSuite.withFixture$(TestSuite.scala:195)
```
How can I fix this error? Are there other ways to avoid the re-writing of the mocked function signature over and over again?
UPDATE:
In the end the mock looks like this:
```
trait HttpServiceMock extends MockFactory {
object httpService {
val instance = mock[HttpService]
def post[In, Out] = toMockFunction4(instance.post[In, Out](_: String, _: In)(_: Encoder[In], _: Decoder[Out]))
}
}
```<issue_comment>username_1: Don't use Scalamock, make `HttpService` a trait and implement the trait directly to mock whatever you need. E.g. (you can paste this in the Scala REPL, but remember to press `Enter` and `Ctrl`+`D` at the end):
```
:rese
:pa
import scala.concurrent.Future
trait Encoder[A]
trait Decoder[A]
// HttpService.scala
trait HttpService {
def post[In: Encoder, Out: Decoder](
url: String, payload: In): Future[Out]
}
object HttpService extends HttpService {
override def post[In: Encoder, Out: Decoder](
url: String,
payload: In):
Future[Out] = ???
}
// HttpServiceSpec.scala
class Mock[Out](result: Future[Out]) extends HttpService {
override def post[In: Encoder, Out: Decoder](
url: String,
payload: In):
Future[Out] =
// This is fine because it's a mock.
result.asInstanceOf[Future[Out]]
}
```
Upvotes: 0 <issue_comment>username_2: You can use the below code:
```
trait HttpMockSupport {
this: MockFactory =>
val httpService = mock[HttpService]
def prettyPost[In, Out]: MockFunction4[String, In, Encoder[In], Decoder[Out], Future[Out]] = {
toMockFunction4(httpService.post[In, Out](_: String, _: In)(_: Encoder[In], _: Decoder[Out]))
}
}
class AClassThatNeedsHttpServiceMocking extends FreeSpec with Matchers with MockFactory with HttpMockSupport {
"HttpService should post" in {
val url = "http://localhost/1"
val input = "input"
implicit val encoder: Encoder[String] = new Encoder[String] {}
implicit val decoder: Decoder[String] = new Decoder[String] {}
prettyPost[String, String]
.expects(url, input, encoder, decoder)
.returns(Future.successful("result"))
httpService.post(url, input)
}
}
```
It puts the common mocking in a trait that can be extended in all the places that need to mock HttpService, so they can just call the non-ugly method :)
**Update 1:**
Updated it to accept the expected parameters.
**Update 2:**
Updated the prettyPost method to be generic so that we can set any kind of expectations.
Scalamock expects a MockFunctionX. So, in your case, all you have to do is to convert the ugly function to a pretty function and then convert it to a MockFunctionX.
Upvotes: 2 [selected_answer]
|
2018/03/15
| 1,696 | 6,004 |
<issue_start>username_0: It seems the [COCO PythonAPI](https://github.com/cocodataset/cocoapi) only supports Python 2, but people do use it in Python 3 environments.
I tried possible methods to install it, like
```
python3 setup.py build_ext --inplace
python3 setup.py install
```
But `python3 setup.py install` will fail because `coco.py` and `cocoeval.py` contain Python 2 print statements.
Update: solved by updating the [COCO PythonAPI](https://github.com/cocodataset/cocoapi) project. Leave this question for people facing the same issue.<issue_comment>username_1: Try the following steps:
1. Use git clone to clone the folder into your drive. In this case, it should be `git clone https://github.com/cocodataset/cocoapi.git`
2. Use terminal to enter the directory, or open a terminal inside the directory
3. Type in `2to3 . -w`. Note that you might have to install a package to get 2to3. It is an elegant tool to convert code from Python2 to Python3; this code converts all .py files from Python2-compatible to Python3-compatible
4. Use terminal to navigate to the setup folder
5. Type in `python3 setup.py install`
This should help you install COCO or any package intended for Python2, and run the package using Python3. Cheers!
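Put together, the steps above look roughly like this (a sketch; it assumes `git` is available and `2to3` is on PATH, as it ships with most Python 3 installs):
```
git clone https://github.com/cocodataset/cocoapi.git
cd cocoapi
2to3 . -w              # converts all .py files to Python3-compatible code in place
cd PythonAPI
python3 setup.py install
```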
Upvotes: 5 [selected_answer]<issue_comment>username_2: There are alternative versions of the cocoapi that you can download and use too (I'm using python 3.5). Here's a solution that you might want to try out:
[How to download and use object detection datasets (e.g. coco or pascal)](https://stackoverflow.com/questions/50807254/how-to-download-and-use-object-detection-datasets-e-g-coco-or-pascal/52325290#52325290)
Upvotes: 0 <issue_comment>username_3: ### Install
1. Instead of the official version (which has issues with python 3) use an [alternative one](https://github.com/philferriere/cocoapi). Install it on your local machine, *globally* (i.e., outside any virtual environment). You can do this by:
`pip install git+https://github.com/philferriere/cocoapi.git#subdirectory=PythonAPI`
2. Check if it is installed globally:
`pip freeze | grep "pycocotools"`
You should see something like `pycocotools==2.0.0` in your output.
3. Now, inside your virtual-env (conda or whatever), first install `numpy` and `cython` (and maybe `setuptools` if it's not installed) using *pip*, and then:
`pip install pycocotools`
### Verify
Inside your project, import (for example) `from pycocotools import mask as mask` and then `print(mask.__author__)`. This should print out the author's name, which is *tsungyi*.
### Where Is It?
The installed package, like any other packages that are locally installed inside a virtual-env using *pip*, will go to *External Libraries* of your project, under *site-packages*. That means it is now part of your virtual-env and not part of your project. So, other users who may want to use your code, must repeat this installation on their virtual-env as well.
---
**Troubleshooting:**
The main source of confusion is that either **you did not install the required packages** before installing cocoapi, or **you did install the required packages but for a different python version**. And when you want to check if something is installed, you may check with, for instance, python**3.6** and see that it exists, but you are actually running all your commands with python**3.7**. So suppose you are using **python3.7**. You need to make sure that:
1. `python -V` gives you *python3.7* and NOT other version, and `pip -V` gives you `pip 19.2.3 from /home//.local/lib/python3.7/site-packages/pip (python3.7)`, that actually matches with your default python version. If this is not the case, you can change your default python using `sudo update-alternatives --config python`, and following the one-step instruction.
2. All the required packages are installed using the right *python* or *pip* version. You can check this using `pip` and `pip3` to spot any differences that may cause an issue:
`pip freeze | grep "<package-name>"` or `pip show <package-name>` for more recent versions of *pip*.
3. To install the required packages, after you made sure about (1), you need to run:
`sudo apt install python-setuptools python3.7-dev python3-wheel build-essential` and `pip install numpy cython matplotlib`
---
**Environment:**
The above steps were tested on *Ubuntu 18.04*, *python 3.6.8*, *pip 19.0.3*.
Upvotes: 3 <issue_comment>username_4: I have completed it with a simple step
```
pip install "git+https://github.com/philferriere/cocoapi.git#egg=pycocotools&subdirectory=PythonAPI"
```
**Note:** before that, you need to install the Visual C++ 2015 build tools and have them on your PATH.
[](https://i.stack.imgur.com/WjhCO.png)
Upvotes: 3 <issue_comment>username_5: If you are struggling building pycocotools on Ubuntu 20.04 and python3.7
try this:
```
sudo apt-get install -y python3.7-dev
python3.7 -m pip install pycocotools>=2.0.1
```
Upvotes: 1 <issue_comment>username_6: Here's how I did it successfully (the root cause was a missing gcc):
1. Install the dependencies: cython (`pip install cython`) and opencv (`pip install opencv-python`)
2. Check the gcc version with this command: `gcc --version`
3. If your output is `Command 'gcc' not found, but can be installed with: sudo apt install gcc`, then gcc is not installed yet
4. Type the commands below to install gcc:
`sudo apt update`
`sudo apt install build-essential`
`sudo apt-get install manpages-dev`
5. Now check the gcc version again (step 2); you should get output like
`gcc (Ubuntu 9.3.0-17ubuntu1~20.04) 9.3.0`
followed by the usual copyright/no-warranty notice
6. Now run the pycocotools installation:
`pip install "git+https://github.com/philferriere/cocoapi.git#egg=pycocotools&subdirectory=PythonAPI"`
7. Finally, check that the installation succeeded: `Successfully installed pycocotools-2.0`
Upvotes: 0
|
2018/03/16
| 317 | 1,161 |
<issue_start>username_0: I am trying to change divs in outside HTML with jQuery's `each` function, but my code doesn't work.
It should change each span's HTML to ExampleDesc1, ExampleDesc2, ExampleDesc3...
```
var desc;
$('.example1 a').each(function () {
var desc = $(this).attr('description');
$('.example2 span').each(function () {
$(this).html(desc);
});
});
```
Where is the mistake?<issue_comment>username_1: Along with adding an `=` between the `description` attribute and value. Rather than an inner loop, just use the index you are on by using `$('.example2 span').eq(i)` when you iterate over `.example1`. Assuming they have the same number of children:
```js
$('.example1 a').each(function (i) {
var desc = $(this).attr('description');
$('.example2 span').eq(i).html(desc);
});
```
Upvotes: 3 [selected_answer]<issue_comment>username_2: The inner loop (`$('.example2 span').each`) executes once for every iteration of the outer loop (`$('.example1 a').each`), so the content of the spans will always end up being the description of the last anchor.
In order to do what you are trying to do, you should assign a different ID to each span, for example:
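A sketch of that idea (the `desc-` ids are hypothetical markup, not from the question):
```js
$('.example1 a').each(function (i) {
  // write each anchor's description into the span whose id matches its index
  $('#desc-' + i).html($(this).attr('description'));
});
```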
Upvotes: 0
|
2018/03/16
| 1,981 | 6,259 |
<issue_start>username_0: ```
public class ArrayTest{
public static void main(String[] args) {
int array[] = {32,3,3,4,5,6,88,98,9,9,9,9,9,9,1,2,3,4,5,6,4,3,7,7,8,8,88,88};
for (int i = 0; i < array.length; i++) {
    for (int j = i + 1; j < array.length; j++) {
        if (array[i] == array[j]) {
            System.out.println("element occuring twice are:" + array[i]);
        }
    }
}
}
}
```
This program compiles and runs, but it prints the duplicate values again and again. I want each duplicate printed once: for example, if 9 is present 5 times in the array it should print 9 once, and if 5 is present 6 or more times it should simply print 5, and so on. The program doesn't behave like that, so what am I missing here?
Your help would be highly appreciated.
Regards!<issue_comment>username_1: Sort the array. Look at the one ahead to see if it is a duplicate. Also look at the one behind to see if this was already counted as a duplicate (except when i == 0, do not look back).
```
import java.util.Arrays;
public class ArrayTest{
public static void main(String[] args) {
int array[] = {32,32,3,3,4,5,6,88,98,9,9,9,9,9,9,1,2,3,4,5,6,4,3,7,7,8,8,88,88};
Arrays.sort(array);
for (int i = 0; i < array.length - 1; i++) {
    // look ahead for a duplicate, and look back so each value is only reported once
    if (array[i] == array[i + 1] && (i == 0 || array[i] != array[i - 1])) {
        System.out.println("element occuring twice are:" + array[i]);
    }
}
}
}
```
prints:
```
element occuring twice are:3
element occuring twice are:4
element occuring twice are:5
element occuring twice are:6
element occuring twice are:7
element occuring twice are:8
element occuring twice are:9
element occuring twice are:32
element occuring twice are:88
```
Upvotes: -1 <issue_comment>username_2: Sort the array so you can get all the like values together.
```
import java.util.Arrays;

public class ArrayTest{
public static void main(String[] args) {
int array[] = {32,3,3,4,5,6,88,98,9,9,9,9,9,9,1,2,3,4,5,6,4,3,7,7,8,8,88,88};
Arrays.sort(array);
for (int a = 0; a < array.length-1; a++) {
boolean duplicate = false;
while (a + 1 < array.length && array[a+1] == array[a]) { // guard so we don't read past the end
a++;
duplicate = true;
}
if (duplicate) System.out.println("Duplicate is " + array[a]);
}
}
}
```
Upvotes: 0 <issue_comment>username_3: I recommend using a `Map` to determine whether a value has been duplicated.
Values that have occurred more than once would be considered as duplicates.
P.S. For duplicates, using a `set` abstract data type would be ideal (`HashSet` would be the implementation of the ADT), since lookup times are O(1) since it uses a hashing algorithm to map values to array indexes. I am using a map here, since we already have a solution using a `set`. In essence, apart from the data structure used, the logic is almost identical.
For more information on the map data structure, click [here](https://www.quora.com/What-is-a-map-data-structure-How-does-it-store-data).
Instead of writing nested loops, you can just write two for loops, resulting in a solution with linear time complexity.
```
public void printDuplicates(int[] array) {
Map<Integer, Integer> numberMap = new HashMap<>();
// Loop through array and mark occurring items
for (int i : array) {
// If key exists, it is a duplicate
if (numberMap.containsKey(i)) {
numberMap.put(i, numberMap.get(i) + 1);
} else {
numberMap.put(i, 1);
}
}
for (Integer key : numberMap.keySet()) {
// anything with more than one occurrence is a duplicate
if (numberMap.get(key) > 1) {
System.out.println(key + " is a reoccurring number that occurs " + numberMap.get(key) + " times");
}
}
}
```
Assuming that the code is added to the `ArrayTest` class, you could call it like this.
```
public class ArrayTest {
public static void main(String[] args) {
int array[] = {32,3,3,4,5,6,88,98,9,9,9,9,9,9,1,2,3,4,5,6,4,3,7,7,8,8,88,88};
ArrayTest test = new ArrayTest();
test.printDuplicates(array);
}
}
```
If you want to change the code above to look for numbers that reoccur exactly twice (not more than once), you can change the following code
`if (numberMap.get(key) > 1)` to `if (numberMap.get(key) == 2)`
Note: this solution takes O(n) memory, so if memory is an issue, Ian's solution above would be the right approach (using a nested loop).
Upvotes: 0 <issue_comment>username_4: The problem statement is not clear, but let's assume you can't sort (otherwise the problem greatly simplifies). Let's also assume the space complexity is constrained, and you can't keep a Map, etc., for counting the frequency.
You could use lookbehind, but this unnecessarily increases the time complexity.
I think a reasonable approach is to reserve the value -1 to indicate that an array position has been processed. As you process the array, you update each active value with -1. For example, if the first element is 32, then you scan the array for any value 32, and replace with -1. The time complexity does not exceed O(n^2).
This leaves the awkward case where -1 is an actual value. It would be required to do an O(n) scan for -1 prior to the main code.
If the array must be preserved, then clone it prior to processing. The O(n^2) loop is:
```
for (int i = 0; i < array.length - 1; i++) {
boolean multiple = false;
for (int j = i + 1; j < array.length && array[i] != -1; j++) {
if (array[i] == array[j]) {
multiple = true;
array[j] = -1;
}
}
if (multiple)
System.out.println("element occuring multiple times is:" + array[i]);
}
```
Upvotes: 0 <issue_comment>username_5: What you can do, is use a data structure that only contains unique values, Set. In this case we use a HashSet to store all the duplicates. Then you check if the Set contains your value at index i, if it does not then we loop through the array to try and find a duplicate. If the Set contains that number already, we know it's been found before and we skip the second for loop.
```
int array[] = {32,3,3,4,5,6,88,98,9,9,9,9,9,9,1,2,3,4,5,6,4,3,7,7,8,8,88,88};
HashSet<Integer> duplicates = new HashSet<>();
for (int i = 0; i < array.length; i++) {
    if (!duplicates.contains(array[i])) {
        for (int j = i + 1; j < array.length; j++) {
            if (array[i] == array[j]) {
                duplicates.add(array[i]);
                break;
            }
        }
    }
}
System.out.println(duplicates);
```
Outputs
```
[3, 4, 5, 6, 7, 88, 8, 9]
```
Upvotes: 0 <issue_comment>username_6: ```
// print duplicates
StringBuilder sb = new StringBuilder();
int[] arr = {1, 2, 3, 4, 5, 6, 7, 2, 3, 4};
int l = arr.length;
for (int i = 0; i < l; i++)
{
for (int j = i + 1; j < l; j++)
{
if (arr[i] == arr[j])
{
sb.append(arr[i] + " ");
}
}
}
System.out.println(sb);
```
Upvotes: 0
|
2018/03/16
| 737 | 2,241 |
<issue_start>username_0: According to the [rowversion docs](https://learn.microsoft.com/en-us/sql/t-sql/data-types/rowversion-transact-sql)
>
> Each database has a counter that is incremented for each insert or update operation that is performed on a table that contains a rowversion column within the database.
>
>
>
however this 'increment' skips an integer when looping back from FF to 01. e.g.
```
0x00000000000007FF
0x0000000000000801
```
To reproduce, create a table
```
CREATE TABLE [dbo].[TestTable](
[Id] [bigint] IDENTITY(1,1) NOT NULL,
[SomeData] [varchar](200) NOT NULL,
[RowVersion] [rowversion] not null
) ON [PRIMARY]
```
Now add some inserts:
```
DECLARE @i INT = 0
WHILE @i < 256
BEGIN
SET @i = @i + 1
INSERT INTO [TestTable] ([SomeData]) VALUES (CONVERT(VARCHAR(255), NEWID()))
END
```
view the data:
```
select * from [TestTable] order by [RowVersion] asc
```
Your data will vary depending on whether you have used `rowversion` before.
We see in this case 2047 (0x00000000000007FF) jumps to 2049 (0x0000000000000801)
Why is this?
[](https://i.stack.imgur.com/tLM5I.png)<issue_comment>username_1: The referenced documentation mentions the rowversion value is unique, binary, and incrementing. It doesn't say there won't be gaps. The value is intended to be used for optimistic concurrency validation so gaps should not matter for that purpose.
Upvotes: 2 <issue_comment>username_2: That is the internal implementation of rowversion (timestamp), and that implementation allows gaps by design.
For example, let's see how DB backup/restore affects it.
The value of the largest rowversion is stored in the database boot page. That value is not updated every time a new rowversion is generated. Instead, the SQL engine reserves a few thousand rowversions at a time and updates it, e.g. from 20,000 to 24,000. If you restore the database, or the DB goes offline, the unused part of the range 20,000..24,000 is lost, and the next rowversion will start from 24,000.
So your code should never rely on the assumption that there are no gaps.
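A quick way to watch the counter advance is the documented `@@DBTS` function (shown here against the `TestTable` from the question; this is separate from the DBCC approach below):
```
SELECT @@DBTS;  -- last rowversion value used in the current database
INSERT INTO [TestTable] ([SomeData]) VALUES ('x');
SELECT @@DBTS;  -- higher than before, and not necessarily by exactly one
```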
You can check the last saved value of rowversion:
```
DBCC TRACEON (3604);
DBCC DBINFO ('db-name')
```
and look for 'dbi\_maxDbTimestamp'
Upvotes: 0
|
2018/03/16
| 749 | 2,375 |
<issue_start>username_0: My SQL query isn't correctly counting Legal Description Parcel Numbers (LDPARC) and it is duplicating accounts (ACCTNO). As you likely know, ACCTNOs can have more than one LDPARC, so the LDPARC column in tbl_Loan_Legal_Descriptions is my issue. When an account has multiple LDPARCs, I need to count the number of Parcel Numbers and simply get the count, while grabbing a few other fields with my query and avoiding duplicates when I join it to my "primary" table tbl_loan_master. Only the first line of my Where clause is relevant to the count. I know you can't use DISTINCT with COUNT when also using PARTITION, so that may be part of the solution. I would greatly appreciate any help. Here is a slimmed down version of my query:
```
With CTE as (Select LDACCT, LDDSC1, LDDSC5, LDTA, LDTYPT, LDPARC,
COUNT(LDPARC) OVER (PARTITION BY LDPARC)
as Liens
FROM tbl_Loan_Legal_Descriptions)
Select A.ACCTNO, E.LDDSC1, E.LDDSC5, E.LDTA, E.LDTYPT, E.LDDSC5, E.LDDSC1,
E.Liens, E.LDPARC, A.CALREP, A.SNAME
From tbl_loan_master A
left outer join CTE E
On A.ACCTNO = E.LDACCT
Where
E.Liens >= 1 and
A.STATUS <> 2 and
A.QRYBAL > 0
Group By
A.ACCTNO, E.LDDSC5, E.LDTA, E.LDTYPT, E.LDPARC, A.SNAME, A.CALREP, E.LDTYPT,
E.Liens, E.LDDSC1, E.LDDSC5
Order By
A.ACCTNO
```
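For reference, a common way to emulate `COUNT(DISTINCT LDPARC) OVER (PARTITION BY ...)` (which SQL Server does not allow directly) is the DENSE_RANK trick. This is only a sketch using the question's column names; it assumes LDPARC is never null and that you want distinct parcels per account:
```
Select LDACCT, LDPARC,
       DENSE_RANK() OVER (PARTITION BY LDACCT ORDER BY LDPARC)
     + DENSE_RANK() OVER (PARTITION BY LDACCT ORDER BY LDPARC DESC) - 1 as Liens
FROM tbl_Loan_Legal_Descriptions
```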
|
2018/03/16
| 825 | 2,657 |
<issue_start>username_0: I simply want to know if I can make what's in the parentheses of this if statement simpler.
```
if (augend[i]=='0' && partialSum[i]=='0' && carry=='0')
```
Such as `(augend[i] & partialsum[i] & carry == '0')`. I'm still learning so I'm not quite sure if something like this is possible.<issue_comment>username_1: Nope, that's as short as it can be
Upvotes: 0 <issue_comment>username_2: if these were ints rather than characters (ie carry == 0 is what you want) then you could do
```
if (!augend[i] && !partialSum[i] && !carry)
```
Since we don't know what you are doing, it's hard to say. But `if (carry)` and `if (!carry)` feel quite natural if you are doing some kind of arithmetic.
Upvotes: 0 <issue_comment>username_3: 1. No, the numerical value for '0' is not `0`, but usually `48` (but that is not portable), so your bitwise trick will not work.
2. Even if bitwise tricks would be possible, **don't** do that. Leave the code as is, for it is clear and easy to understand.
What you're attempting to do is *premature micro-optimization* and ultimately a bad idea.
Upvotes: 2 <issue_comment>username_4: you can do this:
```
if(augend[i]=='0' & partialSum[i]=='0' & carry=='0'){}
```
as == has greater precedence and will return bool on which bitwise operator can be used.
Upvotes: 0 <issue_comment>username_5: You could do
```
if(!(augend[i]^'0'|partialSum[i]^'0'|carry^'0'))
```
This is an excellent way to make readers of your code, including probably future you, wonder what the \*$#@ is going on.
In case the sarcasm wasn't clear, please DON'T actually do this. (Unless this is for a "code golf" competition, in which case there would be a number of other things you could probably improve.) Your original code is much better.
Upvotes: 0 <issue_comment>username_6: ```
if (augend[i]==partialSum[i]==carry=='0')
```
or
```
(augend[i]==partialSum[i]==carry=='0') ? (if_true) : (if_false)
```
Upvotes: -1 <issue_comment>username_7: Encapsulation:
```
bool zeros() {
return true;
}
template
bool zeros(char x, chars... lst) {
return x == '0' && zeros(lst...);
}
if (zeros(augend[i], partialSum[i], carry)){
//impl//
}
//lined up just for size comparison
if (augend[i]=='0' && partialSum[i]=='0' && carry=='0')
if (zeros(augend[i], partialSum[i], carry))
```
This is more brief and becomes much much shorter the more parameters you have to compare (as you don't need to write `=='0'` each time).
---
If you need it for any char....
```
bool equal(char match) {
return true;
}
template
bool equal(char match, char front, chars... lst) {
return match == front && equal(match, lst...);
}
```
Upvotes: 1
|
2018/03/16
| 1,099 | 4,399 |
<issue_start>username_0: I have several materialized views in Oracle which I can query to get information.
Now I want to create several tables with foreign keys referencing those MVs and to do so, I have already "added" the corresponding primary keys to the MVs (as stated in [adding primary key to sql view](https://stackoverflow.com/questions/2041308/adding-primary-key-to-sql-view)).
Then, when I execute my SQL *create table* query, I get an *Oracle (ORA-02270) error: no matching unique or primary key for this column-list error* at position 0, right at the beginning...
Am I doing something wrong? Is it possible what I am trying to do?
If not, how is it usually done?<issue_comment>username_1: The [documentation](https://docs.oracle.com/database/121/SQLRF/clauses002.htm#i1002565) states that:
>
> **View Constraints**
>
>
> Oracle **does not enforce view constraints**. However, operations on
> views are subject to the integrity constraints defined on the
> underlying base tables. This means that you can enforce constraints on
> views through constraints on base tables.
>
>
>
and also:
>
> View constraints are a subset of table constraints and are subject to
> the following restrictions:
>
>
> * ...
> * **View constraints are supported only in DISABLE NOVALIDATE mode**. You cannot specify any other mode. You must specify the keyword DISABLE
> when you declare the view constraint. You need not specify NOVALIDATE
> explicitly, as it is the default.
> * ...
>
>
>
---
In practice, the above means that although constrains on views can be created, **they are blocked and do not work**. So as if they were not at all.
---
Apart from this, think for a moment what sense it would have foreign key constrainst created on tables, that would refer to a materialized view:
* tables are always "online" and have always "fresh" data
* materialized views can contain stale data
Imagine this case: You insert a record X into some table. This record is not visible in the materialized view yet, because the view is not refreshed at this moment. Then you try to insert record X into another table that has a foreign key constraint pointing to that materialized view. What the database should do ? Should the database reject the insert statement (since for now X is not visible yet in the view and the foreign key exists) ? If yes, then what about data integrity ? Mayby should it block amd wait until the view is refreshed ? Should it force the view to start refreshing or not in such a case?
As you can see, such a case involves many questions and difficult problems in the implementation, so Oracle simply does not allow for constrains on views.
Upvotes: 0 <issue_comment>username_2: When there are materialized views referenced by other tables' foreign keys, you have to take note on your views refresh method and how it affects your foreign keys.
Two things may prevent you from refreshing your materialized views:
1) The data in the tables referencing your views may reference lines that need to be updated or deleted. In that case you have to fix your data.
2) Your views' refresh method is complete. In complete refresh Oracle deletes all data in your mviews tables and repopulates them by rerunning their queries as you can see in [Oracle site documentation - Refresh Types](https://docs.oracle.com/database/121/REPLN/repmview.htm#REPLN345), while in fast refresh only the differences are applied to your mviews tables. Fast refresh is an incremental refresh and it won't work only if your foreign keys aren't respected by your data.
Now if there are mviews that can't be created with fast refresh (what Oracle calls them "Complex queries") then you can alter constraints to these mviews to deferrable as you can see [here](https://docs.oracle.com/cd/B19306_01/server.102/b14200/clauses002.htm).
That way even complete refresh will work because Oracle only validates deferrable constraints by the end of current transaction. Therefore, as long as your refresh method is atomic, Oracle will issue an DELETE and than INSERT all rows back, all in one transaction.
In other words, in the next command to refresh your mview keep parameter atomic\_refresh as true:
```
dbms_mview.refresh(LIST=>'MVIEW', METHOD =>'C', ATOMIC_REFRESH => TRUE);
```
By the way, this parameter's default value is TRUE, so just don't mention it and it will work.
Upvotes: 1
|
2018/03/16
| 1,662 | 7,173 |
<issue_start>username_0: I am getting intermittent deadlocks when using `HttpClient` to send http requests and sometimes they are never returning back to `await SendAsync` in my code. I was able to figure out the thread handling the request internally in `HttpClient`/`HttpClientHandler` for some reason has a `SynchronizationContext` during the times it is deadlocking. I would like to figure out how the thread getting used ends up with a `SynchronizationContext`, when normally they don't have one. I would assume that whatever object is causing this `SynchronizationContext` to be set is also blocking on the `Thread`, which is causing the deadlock.
Would I be able to see anything relevant in the TPL ETW events?
How can I troubleshoot this?
**Edit 2:**
The place that I have been noticing these deadlocks is in a wcf `ServiceContract`(see code below) inside of a windows service. The `SynchronizationContext` that is causing an issue is actually a `WindowsFormsSynchronizationContext`, which I assume is caused by some control getting created and not cleaned up properly (or something similar). I realize there almost certainly shouldn't be any windows forms stuff going on inside of a windows service, and I'm not saying I agree with how it's being used. However, I didn't write any of the code using it, and I can't just trivially go change all of the references.
**Edit**: here is an example of the general idea of the wcf service I was having a problem with. It's a simplified version, not the exact code:
```
[ServiceContract]
[ServiceBehavior(InstanceContextMode = InstanceContextMode.Single, ConcurrencyMode = ConcurrencyMode.Multiple)]
internal class SampleWcfService
{
private readonly HttpMessageInvoker _invoker;
public SampleWcfService(HttpMessageInvoker invoker)
{
_invoker = invoker;
}
[WebGet(UriTemplate = "*")]
[OperationContract(AsyncPattern = true)]
public async Task<Message> GetAsync()
{
var context = WebOperationContext.Current;
using (var request = CreateNewRequestFromContext(context))
{
var response = await _invoker.SendAsync(request, CancellationToken.None).ConfigureAwait(false);
var stream = response.Content != null ? await response.Content.ReadAsStreamAsync().ConfigureAwait(false) : null;
return StreamMessageHelper.CreateMessage(MessageVersion.None, "GETRESPONSE", stream ?? new MemoryStream());
}
}
}
```
Adding `ConfigureAwait(false)` to the 2 places above didn't completely fix my problem because a threadpool thread used to service a wcf request coming into here may already have a `SynchronizationContext`. **In that case the request makes it all the way through this whole `GetAsync` method and returns**. However, it still ends up deadlocked in `System.ServiceModel.Dispatcher.TaskMethodInvoker`, because in that microsoft code, it doesn't use `ConfigureAwait(false)` and I want to assume there is a good reason for that ([for reference](https://github.com/Microsoft/referencesource/blob/master/System.ServiceModel/System/ServiceModel/Dispatcher/TaskMethodInvoker.cs#L223-L229)):
```
var returnValueTask = returnValue as Task;
if (returnValueTask != null)
{
// Only return once the task has completed
await returnValueTask;
}
```
It feels really wrong, but would converting this to using APM (Begin/End) instead of using Tasks fix this? Or, is the only fix to just correct the code that is not cleaning up its `SynchronizationContext` properly?<issue_comment>username_1: Try below; I have found success in similar cases getting into the async rabbit hole.
```
var responsebytes = await response.Content.ReadAsByteArrayAsync();
MemoryStream stream = new MemoryStream(responsebytes);
```
Return the `stream` variable.
Hope it helps.
Upvotes: 2 <issue_comment>username_2: **Update:** we now know we're dealing with a `WindowsFormsSynchronizationContext` (see comments), for whatever reason in a WCF application. It's no surprise then to see deadlocks since the point of that SyncContext is to run all continuations on the same thread.
You could try to to set [WindowsFormsSynchronizationContext.AutoInstall](https://learn.microsoft.com/en-ie/dotnet/api/system.windows.forms.windowsformssynchronizationcontext.autoinstall) to `false`. According to its docs, what it does is:
>
> Gets or sets a value indicating whether the WindowsFormsSynchronizationContext is installed when a control is created
>
>
>
Assuming someone creates a WindowsForms control somewhere in your app, then that might be your issue and would potentially be solved by disabling this setting.
**An alternative** to get rid of an existing `SynchronizationContext` is to just overwrite it with null and restore it later (if you're nice). This [article](https://blogs.msdn.microsoft.com/benwilli/2017/02/09/an-alternative-to-configureawaitfalse-everywhere/) describes this approach and provides a convenient [`SynchronizationContextRemover`](https://github.com/negativeeddy/blog-examples/blob/master/ConfigureAwaitBehavior/ExtremeConfigAwaitLibrary/SynchronizationContextRemover.cs) implementation you could use.
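Usage of that remover looks roughly like this (a sketch; `SynchronizationContextRemover` is the awaitable struct from the linked article, and the class and method names here are illustrative):
```
using System.Net.Http;
using System.Threading;
using System.Threading.Tasks;

public static class NoContextSender
{
    public static async Task<HttpResponseMessage> SendWithoutContextAsync(
        HttpMessageInvoker invoker, HttpRequestMessage request)
    {
        // After this await the ambient SynchronizationContext is cleared, so the
        // continuation below runs on the thread pool instead of posting back to
        // (and possibly deadlocking on) a WindowsFormsSynchronizationContext.
        await new SynchronizationContextRemover();

        return await invoker.SendAsync(request, CancellationToken.None);
    }
}
```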
However, this probably won't work if the SyncContext is created by some library methods you use. I'm not aware of a way to prevent a SyncContext from being overwritten, so setting a dummy context won't help either.
---
Are you sure the `SynchronizationContext` is actually at fault here?
From this [MSDN magazine article](https://msdn.microsoft.com/magazine/gg598924.aspx):
>
> **Default (ThreadPool) SynchronizationContext (mscorlib.dll: System.Threading)**
>
> The default SynchronizationContext is a default-constructed SynchronizationContext object. By convention, **if a thread’s current SynchronizationContext is null, then it implicitly has a default SynchronizationContext**.
>
>
> The default SynchronizationContext queues its asynchronous delegates to the ThreadPool but executes its synchronous delegates directly on the calling thread. Therefore, its context covers all ThreadPool threads as well as any thread that calls Send. The context “borrows” threads that call Send, bringing them into its context until the delegate completes. In this sense, the default context may include any thread in the process.
>
>
> **The default SynchronizationContext is applied to ThreadPool threads unless the code is hosted by ASP.NET**. The default SynchronizationContext is also implicitly applied to explicit child threads (instances of the Thread class) unless the child thread sets its own SynchronizationContext.
>
>
>
If the `SynchronizationContext` you are seeing is the default one, it should be fine (or rather, you will have a very hard time to avoid it being used).
Can't you provide more details / code about what's involved?
One thing that looks immediately suspicious to me in your code (though it may be completely fine) is that you have a `using` block that captures a static `WebOperationContext.Current` in `request`, which will both be captured by the generated async state machine. Again, might be fine, but there's a lot of potential for deadlocks here if something waits on `WebOperationContext`
Upvotes: 2
|
2018/03/16
| 1,592 | 4,698 |
<issue_start>username_0: I am making API calls and getting back nested JSON response for every ID.
If I run the API call for one ID the JSON looks like this.
```
u'{"id":26509,"name":"ORD.00001","order_type":"sales","consumer_id":415372,"order_source":"in_store","is_submitted":0,"fulfillment_method":"in_store","order_total":150,"balance_due":150,"tax_total":0,"coupon_total":0,"order_status":"cancelled","payment_complete":null,"created_at":"2017-12-02 19:49:15","updated_at":"2017-12-02 20:07:25","products":[{"id":48479,"item_master_id":239687,"name":"QA_FacewreckHaze","quantity":1,"pricing_weight_id":null,"category_id":1,"subcategory_id":8,"unit_price":"150.00","original_unit_price":"150.00","discount_total":"0.00","created_at":"2017-12-02 19:49:45","sold_weight":10,"sold_weight_uom":"GR"}],"payments":[],"coupons":[],"taxes":[],"order_subtotal":150}'
```
I can successfully parse this one JSON string into a dataframe using this line of code:
```
order_detail = json.loads(r.text)
order_detail = json_normalize(order_detail)
```
I can iterate all my IDs through the API using this code:
```
lists = []
for id in df.id:
r = requests.get("URL/v1/orders/{id}".format(id=id), headers = headers_order)
lists.append(r.text)
```
Now all my JSON responses are stored in the list. How do I write all the elements of the list into a dataframe?
The code I have been trying is this:
```
for x in lists:
order_detail = json.loads(x)
order_detail = json_normalize(x)
print(order_detail)
```
I get error:
```
AttributeError: 'unicode' object has no attribute 'itervalues'
```
I know this is happening at line:
```
order_detail = json_normalize(x)
```
Why does this line work for a single JSON string but not for the list? What can I do get the list of nested JSON into a dataframe?
Thank you in advance for the help.
edit:
```
Traceback (most recent call last):
  File "<stdin>", line 3, in <module>
    for id in df.id
  File "/Users/bob/anaconda/lib/python2.7/site-packages/requests/models.py", line 802, in json
    return json.loads(self.text, **kwargs)
  File "/Users/bob/anaconda/lib/python2.7/json/__init__.py", line 339, in loads
    return _default_decoder.decode(s)
  File "/Users/bob/anaconda/lib/python2.7/json/decoder.py", line 364, in decode
    obj, end = self.raw_decode(s, idx=_w(s, 0).end())
  File "/Users/bob/anaconda/lib/python2.7/json/decoder.py", line 382, in raw_decode
    raise ValueError("No JSON object could be decoded")
ValueError: No JSON object could be decoded
Traceback (most recent call last):
  File "<stdin>", line 3, in <module>
    for id in df.id
  File "/Users/bob/anaconda/lib/python2.7/site-packages/requests/models.py", line 802, in json
    return json.loads(self.text, **kwargs)
  File "/Users/bob/anaconda/lib/python2.7/json/__init__.py", line 339, in loads
    return _default_decoder.decode(s)
  File "/Users/bob/anaconda/lib/python2.7/json/decoder.py", line 364, in decode
    obj, end = self.raw_decode(s, idx=_w(s, 0).end())
  File "/Users/bob/anaconda/lib/python2.7/json/decoder.py", line 382, in raw_decode
    raise ValueError("No JSON object could be decoded")
```<issue_comment>username_1: Try this:
```
In [28]: lst = list(set(order_detail) - set(['products','coupons','payments','taxes']))
In [29]: pd.io.json.json_normalize(order_detail, ['products'], lst, meta_prefix='p_')
Out[29]:
category_id created_at discount_total id item_master_id name original_unit_price pricing_weight_id \
0 1 2017-12-02 19:49:45 0.00 48479 239687 QA_FacewreckHaze 150.00 None
quantity sold_weight ... p_tax_total p_order_source p_consumer_id p_payment_complete p_coupon_total \
0 1 10 ... 0 in_store 415372 None 0
p_fulfillment_method p_order_type p_is_submitted p_balance_due p_updated_at
0 in_store sales 0 150 2017-12-02 20:07:25
[1 rows x 29 columns]
```
Upvotes: 0 <issue_comment>username_2: * use response .json() method
* feed it directly to `json_normalize`
Example:
```
df = json_normalize([
requests.get("URL/v1/orders/{id}".format(id=id), headers = headers_order).json()
for id in df.id
])
```
### UPD:
failsafe version to handle incorrect responses:
```
def gen():
for id in df.id:
try:
yield requests.get("URL/v1/orders/{id}".format(id=id), headers = headers_order).json()
except ValueError: # incorrect API response
pass
df = json_normalize(list(gen()))
```
Upvotes: 2 [selected_answer]
|
2018/03/16
| 1,689 | 5,846 |
<issue_start>username_0: I am trying to install istio. I can easily package the helm chart if I clone the repo from github but I am just wondering if there is a helm chart repo that I can use?<issue_comment>username_1: Yes there is. A quick google search turned this up: <https://github.com/istio/istio/tree/master/install/kubernetes/helm/istio>
Upvotes: 0 <issue_comment>username_2: `helm repo add istio https://istio.io/charts` works. I found it in [this](https://github.com/istio/istio.github.io/pull/2616) PR.
Upvotes: -1 <issue_comment>username_3: It's a pain to find, and they don't really reference it properly in the documentation, but according to [these](https://github.com/istio/istio.io/pull/2842#issue-228469747) [two](https://github.com/istio/istio.io/pull/2842#issuecomment-436137840) comments, the charts can be found in the following locations:
* master: <https://gcsweb.istio.io/gcs/istio-prerelease/daily-build/master-latest-daily/charts/>
* v1.1.x: <https://gcsweb.istio.io/gcs/istio-prerelease/daily-build/release-1.1-latest-daily/charts/>
Upvotes: 0 <issue_comment>username_4: For a more recent answer, you can now add helm repository for istio for a specific version with `helm repo add istio.io https://storage.googleapis.com/istio-release/releases/{{< istio_full_version >}}/charts/` according to documentation [here](https://istio.io/docs/setup/kubernetes/install/helm/#helm-chart-release-repositories).
It seems that `helm repo add istio.io https://storage.googleapis.com/istio-release/releases/charts` works too, but only for older versions (up to 1.1.2). It is not yet documented but follows a more idiomatic versioning. An issue is open on istio: <https://github.com/istio/istio/issues/15498>
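For example, with a concrete release the command becomes (a sketch; substitute the version you actually need):
```
helm repo add istio.io https://storage.googleapis.com/istio-release/releases/1.1.2/charts/
helm repo update
```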
Upvotes: 0 <issue_comment>username_5: If you're looking for a way to install istio version higher than 1.8.0 then there is a good news.
According to [documentation](https://istio.io/latest/news/releases/1.8.x/announcing-1.8/) helm support is back, currently in alpha.
>
> We’ve added support for installing Istio with Helm 3. This includes both in-place upgrades and canary deployment of new control planes, after installing 1.8 or later. Helm 3 support is currently Alpha, so please try it out and give your feedback.
>
>
>
---
There is istio [documentation](https://istio.io/latest/docs/setup/install/helm/) about installing Istio with Helm 3, Helm 2 is not supported for installing Istio.
There are the Prerequisites:
* [Download the Istio release](https://istio.io/latest/docs/setup/getting-started/#download)
* [Perform any necessary platform-specific setup](https://istio.io/latest/docs/setup/platform-setup/)
* [Check the Requirements for Pods and Services](https://istio.io/latest/docs/ops/deployment/requirements/)
* [Install a Helm client with a version higher than 3.1.1](https://helm.sh/docs/intro/install/)
There are the installation steps for istio 1.8.1:
>
> Note that the default chart configuration uses the secure third party tokens for the service account token projections used by Istio proxies to authenticate with the Istio control plane. Before proceeding to install any of the charts below, you should **verify if third party tokens are enabled in your cluster** by following the steps describe [here](https://istio.io/latest/docs/ops/best-practices/security/#configure-third-party-service-account-tokens). **If third party tokens are not enabled, you should add the option --set global.jwtPolicy=first-party-jwt** to the Helm install commands. If the jwtPolicy is not set correctly, pods associated with istiod, gateways or workloads with injected Envoy proxies will not get deployed due to the missing istio-token volume.
>
>
>
1.Download the Istio release and change directory to the root of the release package and then follow the instructions below.
```
curl -L https://istio.io/downloadIstio | ISTIO_VERSION=1.8.1 sh -
cd istio-1.8.1
```
2.Create a namespace istio-system for Istio components:
```
kubectl create namespace istio-system
```
3.Install the Istio base chart which contains cluster-wide resources used by the Istio control plane:
```
helm install -n istio-system istio-base manifests/charts/base
```
4.Install the Istio discovery chart which deploys the istiod service:
```
helm install --namespace istio-system istiod manifests/charts/istio-control/istio-discovery \
--set global.hub="docker.io/istio" --set global.tag="1.8.1"
```
5.Install the Istio ingress gateway chart which contains the ingress gateway components:
```
helm install --namespace istio-system istio-ingress manifests/charts/gateways/istio-ingress \
--set global.hub="docker.io/istio" --set global.tag="1.8.1"
```
6.(Optional) Install the Istio egress gateway chart which contains the egress gateway components:
```
helm install --namespace istio-system istio-egress manifests/charts/gateways/istio-egress \
--set global.hub="docker.io/istio" --set global.tag="1.8.1"
```
7.Verify that all Kubernetes pods in istio-system namespace are deployed and have a STATUS of Running:
```
kubectl get pods -n istio-system
```
Upvotes: 1 <issue_comment>username_6: The official helm chart is coming now!
<https://artifacthub.io/packages/helm/istio-official/gateway>
Need to be careful [the comment in issue #31275](https://github.com/istio/istio/issues/31275#issuecomment-917136870)
>
> Note: this is a 1.12 prerelease, so you need to pass --devel to all helm commands and should not run it in prod yet.
>
>
>
---
Because the chart is still in the `alpha` version, we need to pass `--devel` flag or specify a chart version to allow development versions.
Install steps:
```
helm repo add istio https://istio-release.storage.googleapis.com/charts
helm repo update
helm install --devel istio-ingressgateway istio/gateway
# or --version 1.12.0-alpha.1
```
Upvotes: 0
|
2018/03/16
| 1,980 | 6,536 |
<issue_start>username_0: Using PHP, how do I convert this string:
```
$str = "Lorem ipsum dolor sit amet 00541000004UDewAAG consectetur adipiscing elit 00541000003WKoEAAW eiusmod tempor incididunt 00541000003WKmiAA";
```
Into an array like this:
```
$messageSegments = [
["type" => "Text", "text" => "Lorem ipsum dolor sit amet "],
["type" => "Mention", "id" => "00541000004UDewAAG"],
["type" => "Text", "text" => "consectetur adipiscing elit"],
["type" => "Mention", "id" => "00541000003WKoEAAW"],
["type" => "Text", "text" => "eiusmod tempor incididunt"],
["type" => "Mention", "id" => "00541000003WKmiAA"],
];
```
The type "Mention" always has this format: "00541000003WKoEAAW" which is a [Salesforce ID](https://salesforce.stackexchange.com/questions/1653/what-are-salesforce-ids-composed-of) while everything else is regular text..
Any help is appreciated.
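For reference, one way to approach this is `preg_split` with `PREG_SPLIT_DELIM_CAPTURE`, sketched here under the assumption that a mention is a 15-18 character alphanumeric ID starting with the `005` key prefix (as in the examples above):
```
$pattern = '/\b(005[A-Za-z0-9]{12,15})\b/';
// split the string, but keep the IDs themselves as array entries
$parts = preg_split($pattern, $str, -1, PREG_SPLIT_DELIM_CAPTURE | PREG_SPLIT_NO_EMPTY);

$messageSegments = [];
foreach ($parts as $part) {
    if (preg_match($pattern, $part)) {
        $messageSegments[] = ["type" => "Mention", "id" => $part];
    } elseif (($text = trim($part)) !== '') {
        $messageSegments[] = ["type" => "Text", "text" => $text];
    }
}
```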
|
2018/03/16
| 1,881 | 6,324 |
<issue_start>username_0: I don't have idea how can i deleted all completed tasks from my json file and my app view:
```
json:
{
"list": [
{
"name": "Cleaning",
"desc": "by me",
"date": "11-3-2018 13:38",
"active": "false",
"id": 1
},
{
"name": "Wash the dishes",
"desc": "by me",
"date": "11-3-2018 23:11",
"active": "true",
"id": 2
},
{
"name": "Training",
"desc": "go with bro",
"date": "15-1-2016 23:41",
"active": "false",
"id": 3
}
]
}
```
I would like to delete all tasks with active: "false" with one button click.
I have to use XMLHttpRequest.
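For reference, a sketch of the idea with XMLHttpRequest (the `clear-completed` button id, the `/tasks.json` endpoint and the PUT method are all assumptions; adjust them to your backend):
```js
document.getElementById('clear-completed').addEventListener('click', function () {
  var xhr = new XMLHttpRequest();
  xhr.open('GET', '/tasks.json');
  xhr.onload = function () {
    var data = JSON.parse(xhr.responseText);
    // keep only the tasks that are not completed ("active": "false" means completed here)
    data.list = data.list.filter(function (task) {
      return task.active !== 'false';
    });
    var save = new XMLHttpRequest();
    save.open('PUT', '/tasks.json');
    save.setRequestHeader('Content-Type', 'application/json');
    save.send(JSON.stringify(data));
    // then re-render the app view from data.list
  };
  xhr.send();
});
```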
|
2018/03/16
| 923 | 3,472 |
<issue_start>username_0: I have a React Native / React hybrid app. For React Native I am using react-native-elements.
My app runs using Expo and was built out with react-native init. I am getting the Material Icons (missing) RSD:
[](https://i.stack.imgur.com/DcXLX.png)
Through much searching, I have found @expo/vector-icons, but it doesn't seem to work. My App.js looks like this:
```
import React from 'react'
import { Font, AppLoading } from 'expo'
import { MaterialIcons } from '@expo/vector-icons'
import HybridApp from './src/App'
export default class NativeApp extends React.Component {
constructor() {
super()
this.state = {
fontsAreLoaded: false
}
}
async componentWillMount() {
await Font.loadAsync(MaterialIcons.font)
this.setState({ fontsAreLoaded: true })
}
render() {
const { fontsAreLoaded } = this.state
return !fontsAreLoaded ? <AppLoading /> : <HybridApp />
}
}
```
As you can see, i am waiting for the font to load... all to no avail.<issue_comment>username_1: After hours wracking my brain on this, the answer was there in front of me the whole time.
Presumably, React Native Elements refers to Material icons as `Material Icons`, *not* `MaterialIcons`.
This means that the default import from `@expo/vector-icons` does not work as their reference to Material icons is different.
The solution is to manually select Material icons from expo, replacing this line;
```
await Font.loadAsync(MaterialIcons.font)
```
with
```
await Font.loadAsync({
'Material Icons': require('@expo/vector-icons/fonts/MaterialIcons.ttf')
})
```
I hope this saves someone some time in the future.
Upvotes: 5 [selected_answer]<issue_comment>username_2: The icons are actually fonts and must first be loaded. It seems they are autoloaded sometimes and other times are not.
So to ensure they are loaded, do this:
```
import FontAwesome from './node_modules/@expo/vector-icons/fonts/FontAwesome.ttf';
import MaterialIcons from './node_modules/@expo/vector-icons/fonts/MaterialIcons.ttf';
...
async componentWillMount() {
try {
await Font.loadAsync({
FontAwesome,
MaterialIcons
});
this.setState({ fontLoaded: true });
} catch (error) {
console.log('error loading icon fonts', error);
}
}
...
render() {
if (!this.state.fontLoaded) {
return ;
}
```
Then when you reference the type, it must be the same type that the component you are using is expecting.
For example, react native elements expects these types: material-community, font-awesome, octicon, ionicon, foundation, evilicon, simple-line-icon, zocial, or entypo
See complete answer here:
<http://javascriptrambling.blogspot.com/2018/03/expo-icon-fonts-with-react-native-and.html>
Upvotes: 0 <issue_comment>username_3: This question is old, but what worked for me and quite straightforward is
```
import { Ionicons } from "@expo/vector-icons";
await Font.loadAsync({...Ionicons.font, ...other imports })
```
Upvotes: 0 <issue_comment>username_4: Check if you have any dependency warnings when you run the app. I had an expo-font version warning, when I fixed it this error went away.
```
Some dependencies are incompatible with the installed expo package version:
- expo-font - expected version range: ~8.4.0 - actual version installed: ^9.1.0
```
Upvotes: 0
|
2018/03/16
| 894 | 3,296 |
<issue_start>username_0: I am in school and got an assignment to write a C program that takes an input from a user, then scans a file and returns how many times that word shows up in the file. I feel like I got it 90% done, but for some reason I can't get the while loop to work. When I run the program it crashes at the while loop. Any help or guidance would be greatly appreciated.
```
#include
#include
#include
#include
int main() {
char input[50], file[50], word[50];
int wordcount;
printf("Enter a string to search for\n");
scanf("%s", input);
printf("Enter a file location to open\n");
scanf("%s", file);
FILE \* fp;
fp = fopen("%s", "r", file);
while (fscanf(fp, "%s", word) != EOF) {
if (strcmp(word, input)) {
printf("found the word %s\n", input);
wordcount++;
}
}
printf("The world %s shows up %d times\n", input, wordcount);
system("pause");
}
```<issue_comment>username_1: After hours wracking my brain on this, the answer was there in front of me the whole time.
Presumably, React Native Elements refers to Material icons as `Material Icons`, *not* `MaterialIcons`.
This means that the default import from `@expo/vector-icons` does not work as their reference to Material icons is different.
The solution is to manually select Material icons from expo, replacing this line;
```
await Font.loadAsync(MaterialIcons.font)
```
with
```
await Font.loadAsync({
'Material Icons': require('@expo/vector-icons/fonts/MaterialIcons.ttf')
})
```
I hope this saves someone some time in the future.
Upvotes: 5 [selected_answer]<issue_comment>username_2: The icons are actually fonts and must first be loaded. It seems they are autoloaded sometimes and others times are not.
So to ensure they are loaded, do this:
```
import FontAwesome from './node_modules/@expo/vector-icons/fonts/FontAwesome.ttf';
import MaterialIcons from './node_modules/@expo/vector-icons/fonts/MaterialIcons.ttf';
...
async componentWillMount() {
try {
await Font.loadAsync({
FontAwesome,
MaterialIcons
});
this.setState({ fontLoaded: true });
} catch (error) {
console.log('error loading icon fonts', error);
}
}
...
render() {
if (!this.state.fontLoaded) {
return ;
}
```
Then when you reference the type, it must be the same type that the component you are using is expecting.
For example, react native elements expects these types: material-community, font-awesome, octicon, ionicon, foundation, evilicon, simple-line-icon, zocial, or entypo
See complete answer here:
<http://javascriptrambling.blogspot.com/2018/03/expo-icon-fonts-with-react-native-and.html>
Upvotes: 0 <issue_comment>username_3: This question is old, but what worked for me and quite straightforward is
```
import { Ionicons } from "@expo/vector-icons";
await Font.loadAsync({...Ionicons.font, ...other imports })
```
Upvotes: 0 <issue_comment>username_4: Check if you have any dependency warnings when you run the app. I had an expo-font version warning, when I fixed it this error went away.
```
Some dependencies are incompatible with the installed expo package version:
- expo-font - expected version range: ~8.4.0 - actual version installed: ^9.1.0
```
Upvotes: 0
|
2018/03/16
| 1,627 | 5,262 |
<issue_start>username_0: We are trying to use Google OAuth in our product. The flow would be to get Client get the auth from the users and send the token to server. On server side I need to verify the token is valid. For now, I am using OAuth2Sample provided by Google as client. when I verify the sent token on server side, I am getting the following exception:
>
> com.google.api.client.auth.oauth2.TokenResponseException: 400 Bad Request
>
>
> {
>
> "error" : "invalid\_grant",
>
> "error\_description" : "Malformed auth code."
>
> }
>
>
> at com.google.api.client.auth.oauth2.TokenResponseException.from(TokenResponseException.java:105)
>
>
> at com.google.api.client.auth.oauth2.TokenRequest.executeUnparsed(TokenRequest.java:287)
> at com.google.api.client.googleapis.auth.oauth2.GoogleAuthorizationCodeTokenRequest.execute(GoogleAuthorizationCodeTokenRequest.java:158)
>
>
>
Here is the code on the server side:
```
GoogleTokenResponse tokenResponse =
new GoogleAuthorizationCodeTokenRequest(
new NetHttpTransport(),
JacksonFactory.getDefaultInstance(),
"https://www.googleapis.com/oauth2/v4/token",
CLIENT_ID,
CLIENT_SECRET,
authToken, //Sent from the client
"") // specify an empty string if you do not have redirect URL
.execute();
```
Here is how I get the accesstoken on the client side:
```
private static final List SCOPES = Arrays.asList(
"https://www.googleapis.com/auth/userinfo.profile",
"https://www.googleapis.com/auth/userinfo.email");
//...
GoogleAuthorizationCodeFlow flow = new GoogleAuthorizationCodeFlow.Builder(
httpTransport, JSON\_FACTORY,
clientSecrets, //Client ID and Client Secret
SCOPES).setDataStoreFactory(
dataStoreFactory).build();
LocalServerReceiver lsr = new LocalServerReceiver();
Credential cr = new AuthorizationCodeInstalledApp(flow, lsr).authorize("user");
return cr.getAccessToken(); //send this to server for verification
```
The token is not corrupted on the way to server and it is:
>
> ya29.Glx\_BUjV\_zIiDzq0oYMqo<KEY>
>
>
>
If I try to access profile and email from the client side, it works fine. Same token does not work on the server side gets malformed token exception.<issue_comment>username_1: I am using `Node.js` googleapis client library, Here is my case:
The authorization code in the url hash fragment is encoded by `encodeURIComponent` api, so if you pass this code to request access token. It will throw an error:
`{ "error": "invalid_grant", "error_description": "Malformed auth code." }`
So I use `decodeURIComponent` to decode the authorization code.
```
decodeURIComponent('4%2F_QCXwy-PG5Ub_JTiL7ULaCVb6K-Jsv45c7TPqPsG2-sCPYMTseEtqHWcU_ynqWQJB3Vuw5Ad1etoWqNPBaGvGHY')
```
After decode, the authorization code is:
```
"4/_QCXwy-PG5Ub_JTiL7ULaCVb6K-Jsv45c7TPqPsG2-sCPYMTseEtqHWcU_ynqWQJB3Vuw5Ad1etoWqNPBaGvGHY"
```
In Java, maybe you can use `URLDecoder.decode` handle the code.
Upvotes: 5 <issue_comment>username_2: Thanks to slideshowp2's answer, and <NAME>'s comment.
I am using server-side web app with HTTP/REST ,also facing the same problem
and yes, the reason is that authorization code return from URL is encoded.
After decode, everything work fine to get access token.
p.s. here is some info about [encodedURL](https://en.wikipedia.org/wiki/Percent-encoding)
since our Content-Type: application/x-www-form-urlencoded
Upvotes: 0 <issue_comment>username_3: For anyone who might face this in the future. I faced this issue and `decodeURIComponent` **did** **not** work with me. The previous answers work with for different issue.
From the question itself, you can see that the token starts with `ya29.`
```
<KEY>
```
That indicates that the token is an online token. In case of the online login, you can see that the response looks like this [](https://i.stack.imgur.com/GjLau.png)
But that will not work with server side login. So, when using some Google client library, please note that there are two variables that you need to check:
* access\_type: `offline`
* responseType: `code`
Once you configure Google client library with those fields, you can see that the response of the login will change to something like this
```
{
"code": "4/0AX4XfWgaJJc3bsUYZugm5-Y5lPu3muSfUqCrpY5KZoGEGAHuw0jrkg_xkD_RX-6bNUq-vA"
}
```
Then you can send that code to the backend and it will work.
Upvotes: 2 <issue_comment>username_4: we do get this error when for some reason the call back url of the auth is called a second time. The first time it works but the second time it errors out.
Not sure how the users are able to do that. Maybe by pressing the back button in the browser.
The error is indeed this:
```
{"error": "invalid_grant","error_description": "Malformed auth code."}
```
Encoding the code in the call back URL was not the problem.
Upvotes: 0
|
2018/03/16
| 282 | 928 |
<issue_start>username_0: I am having issues with weird gaps between divs.
Divs are collapsable and it appears it is causing an extra couple of pixels between them. It's not margin, just a gap.
Here is a sample of my code. The left side "GameName" menu item is clickable.
I am creating the collapse effect by just adding and removing display style on click:
```
if (panel.style.display === "block") {
panel.style.display = "none";
} else {
panel.style.display = "block";
}
```
<https://codepen.io/anon/pen/eMzyMp><issue_comment>username_1: Your gap is actually a margin of element, added by the browser. Try this:
```
ul {
margin: 0;
}
```
Upvotes: 1 <issue_comment>username_2: The weird gap comes from a bottom margin on a child (), added by default browser styles on s.
```
.locations ul {margin-bottom: 0}
```
[fixes it](https://codepen.io/andrei-gheorghiu/pen/PRzExx).
Upvotes: 1 [selected_answer]
|
2018/03/16
| 1,174 | 3,855 |
<issue_start>username_0: I'm using also `ReduxLazyScroll` for infinite scroll effect and it worked but before it was just `ul -> li` list. Now I want my app to generate 3 cards one beside another in bootstrap-grid style, like so:
```
.col-3
.col-3
.col-3
```
And I have some logic problem, which I can't wrap my head around.
So there is the code of parent component ( `BeerListingScroll` ) :
```
import React, { Component } from 'react';
import { Container, Row, Col } from 'reactstrap';
import ReduxLazyScroll from 'redux-lazy-scroll';
import { connect } from 'react-redux';
import { bindActionCreators } from 'redux';
import { fetchBeers } from '../../actions/';
import BeersListItem from '../../components/BeersListItem';
import ProgressIndicator from '../../components/ProgressIndicator';
class BeerListingScroll extends Component {
constructor(props) {
super(props);
this.loadBeers = this.loadBeers.bind(this);
}
loadBeers() {
const { skip, limit } = this.props.beers;
this.props.fetchBeers(skip, limit);
}
render() {
const { beersArray, isFetching, errorMessage, hasMore } = this.props.beers;
return (
{beersArray.map(beer => (
))}
);
}
}
function mapStateToProps(state) {
return {
beers: state.beers,
};
}
function mapDispatchToProps(dispatch) {
return bindActionCreators({ fetchBeers }, dispatch);
}
export default connect(mapStateToProps, mapDispatchToProps)(BeerListingScroll);
```
So I'm using beersArray.map helper and I know that that logic **doesn't make sense** inside `map`:
```
```
because I'm loading each `beer` element 3 times and then it looks like that ->
img
[](https://i.stack.imgur.com/3W97D.png)
---
The question is: **How should I refactor that code to get first, second and third array element in the first row, 4th, 5th, 6th in the second row etc?**
---
Desired outcome:
[](https://i.stack.imgur.com/l0ptb.png)
And here you have child component ( `BeerListItem` )
```
import React from 'react';
import {
Card,
CardImg,
CardBody,
CardTitle,
CardSubtitle,
Button,
} from 'reactstrap';
const BeerListItem = ({ beer }) => (
{beer.name}
{beer.tagline}
More details
);
export default BeerListItem;
```
And full project on github - > [Link to github](https://github.com/elminsterrr/beer-app-q)<issue_comment>username_1: If you want to do this you could group them together in 3's in a nested array, so instead of `[beer1,beer2,beer3,beer4,...]` you have `[[beer1,beer2,beer3],[beer4,..,..],...]` then you can iterate them in groups, with a row per beer group and a column per beer in that group:
```
beerGroupsArray.map(beerGroup => (
{
beerGroup.map(beer => {
})
}
))}
```
That's just one example of how to do it though, based on your data and use case there might be a cleaner, more robust method.
Upvotes: 3 [selected_answer]<issue_comment>username_2: You can slice beers array and then iterate through slices.
By that, you will have new Row for each slice (consist 3 different beer item)
```
const numberOfRows = ceil(beersArray.length / 3)
Array(numberOfRows).fill().map((_, rowIndex) => (
{
beersArray.slice(rowIndex \* 3, (rowIndex \*3) + 3).map(beer => (
))}
))
```
Upvotes: 1 <issue_comment>username_3: This is the optimized answer based on @muhammet-enginar 's answer!
```
{Array(parseInt(filteredFarms.length / 1))
.fill()
.map((_, colIndex) => {
return (
<>
{filteredFarms
?.slice(colIndex \* 4, colIndex \* 4 + 4)
?.map((filter, key) => {
return (
<>
- {filter.farm}
);
})}
);
});}
```
It contains return because it's reactjs/nextjs code, I also used parseInt because of the decimal which may cause issues if items are even.
Upvotes: 0
|
2018/03/16
| 251 | 998 |
<issue_start>username_0: I am using the angular4, my code below is trying to display the information basesd on the state variable.
```
something
```
I am using '||' to represent the 'or' statement.
now the state variable = 'canceled', usually it should not display 'something', however in the test 'something' text is still there.
if i used the statement below, it works that the 'something' text disappear.
```
something
```
if i used the statement, it still doesn't work so it doesn't seem to be the ordering issue.
```
something
```
anything wrong?<issue_comment>username_1: **ngIf** must be included inside the div tag like any regular HTML attribute.
Upvotes: 0 <issue_comment>username_2: I think the only issue here is the closing of the opening tag before your ngIf logic.
Upvotes: 1 <issue_comment>username_3: You have a logic error. if the variable is equal to cancel also is different to open, so the result of the conditional will be true.
Upvotes: 2 [selected_answer]
|
2018/03/16
| 359 | 1,459 |
<issue_start>username_0: I have a requirement in Logic Apps where I need to do HTTP GET from a website URL which gives a file which I need to download to Azure File Storage.
I am able to call the downloadable URL but not sure how to go about downloading the file to Azure File storage directory.
Please let me know your inputs.Do I need to write an Azure function or can I get the HTTP action to do the trick to download the file?
Thanks,
SP<issue_comment>username_1: 1) You need to **create one web api function or azure funtion which return file content** like i tried for zip file
2) You need to **call that method using HTTP connector**
3) You can use **"azure File storage"** connector **"create file"** action
in that you need to pass file and **file content which return from your GET API Url**
if you need more help feel free to ask
Upvotes: 0 <issue_comment>username_2: I suppose Logic apps has moved on a little since you first asked this question.
The short answer is yes you can do this solely within Logic Apps.
I'm assuming you're making a HTTP Request at some point and the downloadable file is being returned as a content type of application/octet
Use a 'Blob Storage'->Create Blob action, the only thing I needed to do was to use the binary function as the content in this action
e.g. binary(body('HTTP'))
This caused my zip file to be created in the Azure storage account as a blob.
Ping me if you need more info.
Upvotes: 2
|
2018/03/16
| 593 | 2,307 |
<issue_start>username_0: My situation is that I need to query from another database and show the result in my application. Both databases are on the same server. I came up with an idea on creating SQL-View in my database which would query the other database for values that I want. But I am not quite sure on how I can create or map SQL-View from the ABP framework?
I am using the full .Net framework with the Angular template.<issue_comment>username_1: Create your table view in the database. Then using EF FluentAPI to config the view mapping. Here is the sample code:
**1. Create POCO class to map:**
```
public class YourView
{
public int Id { get; set; }
public string Value { get; set; }
}
```
**2. EF FluentAPI mapping configuration:**
Create map class:
```
public class YourViewMap : IEntityTypeConfiguration
{
public void Configure(EntityTypeBuilder builder)
{
builder.ToTable("YourViewName");
}
}
```
Add mapping configuration to your DbContext (such as AbpCoreDbContext). Override OnModelCreating method:
```
protected override void OnModelCreating(ModelBuilder modelBuilder)
{
modelBuilder.ApplyConfiguration(new YourViewMap ());
base.OnModelCreating(modelBuilder);
}
```
**3. Get data**:
using `IRepository` to query data from the view.
P.S: related question [how to use views in code first entity framework](https://stackoverflow.com/questions/7461265/how-to-use-views-in-code-first-entity-framework)
Upvotes: 1 <issue_comment>username_2: Creating View is not directly supported in EF. So you can try the below approach.
* Create an empty Migration using `Add-Migration`.
* Write you Create View script in `Up` method of generated migration
and run the script using `context.Database.ExecuteSqlCommand` method.
* Declare your class and use `Table` as you do for your model class.
`[Table("YourViewName")]`
`public class YourClassName`
`{`
`}`
* Ignore your view class like this
`modelBuilder.Ignore();` in `OnModelCreating` method.
* Run `Update-Database` in Package Manager Console.
Upvotes: 3 [selected_answer]<issue_comment>username_3: Create a new `DbContext` for your legacy database. You can have multiple `DbContext` in an ABP application. Each `DbContext` have its own connection string. Creating view is kinda hackish.
Upvotes: 0
|
2018/03/16
| 642 | 2,383 |
<issue_start>username_0: For some reason in my program, the + sign adds two digits together, in my code:
```
numerator1 += wholenumber1 * denominator1;
```
If `wholenumber1` is `1` and `denominator1` is `4`, then the `numerator1` is `14`... I found this out by:
```
console.log(numerator1);
```
This is using inputs with `type="number"`, and the other parts of the equation are working just fine... But this part is essential in order for my program to run properly, and help is greatly appreciated!<issue_comment>username_1: Create your table view in the database. Then using EF FluentAPI to config the view mapping. Here is the sample code:
**1. Create POCO class to map:**
```
public class YourView
{
public int Id { get; set; }
public string Value { get; set; }
}
```
**2. EF FluentAPI mapping configuration:**
Create map class:
```
public class YourViewMap : IEntityTypeConfiguration
{
public void Configure(EntityTypeBuilder builder)
{
builder.ToTable("YourViewName");
}
}
```
Add mapping configuration to your DbContext (such as AbpCoreDbContext). Override OnModelCreating method:
```
protected override void OnModelCreating(ModelBuilder modelBuilder)
{
modelBuilder.ApplyConfiguration(new YourViewMap ());
base.OnModelCreating(modelBuilder);
}
```
**3. Get data**:
using `IRepository` to query data from the view.
P.S: related question [how to use views in code first entity framework](https://stackoverflow.com/questions/7461265/how-to-use-views-in-code-first-entity-framework)
Upvotes: 1 <issue_comment>username_2: Creating View is not directly supported in EF. So you can try the below approach.
* Create an empty Migration using `Add-Migration`.
* Write you Create View script in `Up` method of generated migration
and run the script using `context.Database.ExecuteSqlCommand` method.
* Declare your class and use `Table` as you do for your model class.
`[Table("YourViewName")]`
`public class YourClassName`
`{`
`}`
* Ignore your view class like this
`modelBuilder.Ignore();` in `OnModelCreating` method.
* Run `Update-Database` in Package Manager Console.
Upvotes: 3 [selected_answer]<issue_comment>username_3: Create a new `DbContext` for your legacy database. You can have multiple `DbContext` in an ABP application. Each `DbContext` have its own connection string. Creating view is kinda hackish.
Upvotes: 0
|
2018/03/16
| 2,096 | 7,499 |
<issue_start>username_0: I'd like to use Micrometer to record the execution time of an async method when it eventually happens. Is there a recommended way to do this?
Example: Kafka Replying Template. I want to record the time it takes to actually execute the sendAndReceive call (sends a message on a request topic and receives a response on a reply topic).
```
public Mono sendRequest(Mono request) {
return request
.map(r -> new ProducerRecord(requestsTopic, r))
.map(pr -> {
pr.headers()
.add(new RecordHeader(KafkaHeaders.REPLY\_TOPIC,
"reply-topic".getBytes()));
return pr;
})
.map(pr -> replyingKafkaTemplate.sendAndReceive(pr))
... // further maps, filters, etc.
```
Something like
```
responseGenerationTimer.record(() -> replyingKafkaTemplate.sendAndReceive(pr)))
```
won't work here; it just records the time that it takes to create the `Supplier`, not the actual execution time.<issue_comment>username_1: It looks like `recordCallable` as suggested by <NAME> is the answer. I wrote a quick test to verify this:
```
import io.micrometer.core.instrument.Timer;
import reactor.core.publisher.Mono;
public class Capitalizer {
private final Timer timer;
public Capitalizer(Timer timer) {
this.timer = timer;
}
public Mono capitalize(Mono val) {
return val.flatMap(v -> {
try {
return timer.recordCallable(() -> toUpperCase(v));
} catch (Exception e) {
e.printStackTrace();
return null;
}
}).filter(r -> r != null);
}
private Mono toUpperCase(String val) throws InterruptedException {
Thread.sleep(1000);
return Mono.just(val.toUpperCase());
}
}
```
and to test this:
```
import io.micrometer.core.instrument.Timer;
import io.micrometer.core.instrument.simple.SimpleMeterRegistry;
import org.junit.Before;
import org.junit.Test;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import reactor.core.publisher.Mono;
import java.util.concurrent.TimeUnit;
import static junit.framework.TestCase.assertTrue;
import static org.junit.Assert.assertEquals;
public class CapitalizerTest {
private static final Logger logger =
LoggerFactory.getLogger(CapitalizerTest.class);
private Capitalizer capitalizer;
private Timer timer;
@Before
public void setUp() {
timer = new SimpleMeterRegistry().timer("test");
capitalizer = new Capitalizer(timer);
}
@Test
public void testCapitalize() {
String val = "Foo";
Mono inputMono = Mono.just(val);
Mono mono = capitalizer.capitalize(inputMono);
mono.subscribe(v -> logger.info("Capitalized {} to {}", val, v));
assertEquals(1, timer.count());
logger.info("Timer executed in {} ms",
timer.totalTime(TimeUnit.MILLISECONDS));
assertTrue(timer.totalTime(TimeUnit.MILLISECONDS) > 1000);
}
}
```
The timer reports that the execution time is roughly 1004ms with the 1000ms delay, and 4ms without it.
Upvotes: -1 <issue_comment>username_2: You could do something like the following:
```
// Mono mono = ...
Timer.Sample sample = Timer.start(Clock.SYSTEM); // or use clock of registry
return mono.doOnNext(x -> sample.stop(timer));
```
See here for Sample documentation: <http://micrometer.io/docs/concepts#_storing_start_state_in_code_timer_sample_code>
For a nicer approach you could also have a look at resilience4j they decorate the mono via transform: <https://github.com/resilience4j/resilience4j/tree/master/resilience4j-reactor>
Upvotes: 1 <issue_comment>username_3: You could use `reactor.util.context.Context`
```
import io.micrometer.core.instrument.Clock;
import io.micrometer.core.instrument.Timer;
import io.micrometer.core.instrument.simple.SimpleMeterRegistry;
import org.awaitility.Awaitility;
import org.junit.Assert;
import org.junit.Test;
import org.reactivestreams.Publisher;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import reactor.core.publisher.Mono;
import reactor.core.scheduler.Schedulers;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicBoolean;
import java.util.function.Function;
import static org.hamcrest.Matchers.is;
public class TestMonoTimer {
private static final Logger LOG = LoggerFactory.getLogger(TestMonoTimer.class);
private static final String TIMER_SAMPLE = "TIMER_SAMPLE";
private static final Timer TIMER = new SimpleMeterRegistry().timer("test");
private static final AtomicBoolean EXECUTION_FLAG = new AtomicBoolean();
@Test
public void testMonoTimer() {
Mono.fromCallable(() -> {
Thread.sleep(1234);
return true;
}).transform(timerTransformer(TIMER))
.subscribeOn(Schedulers.parallel())
.subscribe(EXECUTION_FLAG::set);
Awaitility.await().atMost(2, TimeUnit.SECONDS).untilAtomic(EXECUTION_FLAG, is(true));
Assert.assertTrue(TIMER.totalTime(TimeUnit.SECONDS) > 1);
}
private static Function, Publisher> timerTransformer(Timer timer) {
return mono -> mono
.flatMap(t -> Mono.subscriberContext()
.flatMap(context -> Mono.just(context.get(TIMER\_SAMPLE).stop(timer))
.doOnNext(duration -> LOG.info("Execution time is [{}] seconds",
duration / 1000000000D))
.map(ignored -> t)))
.subscriberContext(context -> context.put(TIMER\_SAMPLE, Timer.start(Clock.SYSTEM)));
}
}
```
Upvotes: 3 [selected_answer]<issue_comment>username_4: I used the following:
```
private Publisher time(String metricName, Flux publisher) {
return Flux.defer(() -> {
long before = System.currentTimeMillis();
return publisher.doOnNext(next -> Metrics.timer(metricName)
.record(System.currentTimeMillis() - before, TimeUnit.MILLISECONDS));
});
}
```
So to use it in practice:
```
Flux.just(someValue)
.flatMap(val -> time("myMetricName", aTaskThatNeedsTimed(val))
.subscribe(val -> {})
```
Upvotes: 0 <issue_comment>username_5: You can just metrics() from Mono/Flux() (have a look at metrics() here: <https://projectreactor.io/docs/core/release/api/reactor/core/publisher/Flux.html>)
then you can do something like
```
public Mono sendRequest(Mono request) {
return request
.map(r -> new ProducerRecord(requestsTopic, r))
.map(pr -> {
pr.headers()
.add(new RecordHeader(KafkaHeaders.REPLY\_TOPIC,
"reply-topic".getBytes()));
return pr;
})
.map(pr -> replyingKafkaTemplate.sendAndReceive(pr)).name("my-metricsname").metrics()
```
And e.g. in graphite you will see latency for this call measured (You can see more here: [How to use Micrometer timer together with webflux endpoints](https://stackoverflow.com/questions/52932685/how-to-use-micrometer-timer-together-with-webflux-endpoints/56075535#56075535))
Upvotes: 3 <issue_comment>username_6: you can make use of `metrics()` ,method that calculates the time interval b/w `subscribe()` and `onComplete()`. you can do like,
```
.metrics().elapsed().doOnNext(tuple -> log.info("get response time: " + tuple.getT1() + "ms")).map(Tuple2::getT2);
```
Upvotes: 0 <issue_comment>username_7: If you consider use `metrics()`, please do understand it won't create a new Meter even if you invoke `Mono.name()`.
Dependning on your situtaion, you have three choice.
1. Using `metrics()`
* Well, If you consider use `metrics()`, please do understand it won't create a new Meter even if you invoke `Mono.name()`.
2. Record the time in `doOnNext` and do your time calculation.
3. Use subscriptionContext as imposed by [username_3](https://stackoverflow.com/a/51984984/2409400)
Personally, I'd like to use approach **3**.
Upvotes: 0
|
2018/03/16
| 595 | 1,849 |
<issue_start>username_0: Any idea how to obtain the source code for Kotlin's stdlib?
In the screencap below, I don't have any option to download the source code like other maven libraries.
I'm using the following Kotlin dependency:
```
org.jetbrains.kotlin
kotlin-stdlib-jdk8
1.2.30
```
[](https://i.stack.imgur.com/O8RmE.png)<issue_comment>username_1: For me it helps to change maven dependency on Kotlin in pom.xml from
```
org.jetbrains.kotlin
kotlin-stdlib-jdk8
1.2.30
```
to
```
org.jetbrains.kotlin
kotlin-stdlib
1.3.31
```
more info on adding Kotlin to project as maven dependency can be found here:
<https://kotlinlang.org/docs/reference/using-maven.html>
Upvotes: 2 <issue_comment>username_2: Solved this by manually importing source code in intelliJ-idea.
Get source file from [github-release](https://github.com/JetBrains/kotlin/releases)
I selected [1.6.21](https://github.com/JetBrains/kotlin/archive/refs/tags/v1.6.21.zip) and added at following macos path(or find .m2 maven folder at respective OS)
```
/Users/${USER}/.m2/repository/org/jetbrains/kotlin/kotlin-stdlib/
e.g.
/Users/${USER}/.m2/repository/org/jetbrains/kotlin/kotlin-stdlib/1.6.21
```
When IDE prompts to download a source or attach a source, similar to following screen

Verify by checking the implementation of take for String class i.e. `"abcd".take(3)`.
when you go to take method's implementation before IDE decompiles String class and points to `StringsKt.java` but after source is attached it points to `_Strings.kt`
And it looks as
[](https://i.stack.imgur.com/KBT8B.png)
Upvotes: 0
|
2018/03/16
| 1,371 | 4,885 |
<issue_start>username_0: My intersect method keeps returning true and setting my "score text" to intersect detected when my images do not collide, instead it changes the text as soon as the emulator starts. Please notify me if more code is needed and check back within 10 minutes. Thanks! -this is all my code and collision code starts at line 163, something is wrong with my collision code because collision isn't being detected, what should I do to fix my collision code.
here is my code:
```
public class MainActivity extends AppCompatActivity {
//Layout
private RelativeLayout myLayout = null;
//Screen Size
private int screenWidth;
private int screenHeight;
//Position
private float ballDownY;
private float ballDownX;
//Initialize Class
private Handler handler = new Handler();
private Timer timer = new Timer();
//Images
private ImageView net = null;
private ImageView ball = null;
//for net movement along x-axis
float x;
float y;
//points
private int points = 0;
@Override
protected void onCreate(Bundle savedInstanceState) {
super.onCreate(savedInstanceState);
setContentView(R.layout.activity_main);
myLayout = (RelativeLayout) findViewById(R.id.myLayout);
//score
TextView score = (TextView) findViewById(R.id.score);
//imageviews
net = (ImageView) findViewById(R.id.net);
ball = (ImageView) findViewById(R.id.ball);
//retrieving screen size
WindowManager wm = getWindowManager();
Display disp = wm.getDefaultDisplay();
Point size = new Point();
disp.getSize(size);
screenWidth = size.x;
screenHeight = size.y;
//move to out of screen
ball.setX(-80.0f);
ball.setY(screenHeight + 80.0f);
//start timer
timer.schedule(new TimerTask() {
@Override
public void run() {
handler.post(new Runnable() {
@Override
public void run() {
changePos();
}
});
}
}, 0, 20);
}
public void changePos() {
//down
ballDownY += 10;
if (ball.getY() > screenHeight) {
ballDownX = (float) Math.floor((Math.random() * (screenWidth -
ball.getWidth())));
ballDownY = -100.0f;
}
ball.setY(ballDownY);
ball.setX(ballDownX);
/*INTERSECT METHOD
Rect rc1 = new Rect();
net.getDrawingRect(rc1);
Rect rc2 = new Rect();
ball.getDrawingRect(rc2);
if (Rect.intersects(rc1, rc2)) {
TextView score = (TextView) findViewById(R.id.score);
score.setText("INTERSECT DETECTED");
}*/
//make net follow finger
myLayout.setOnTouchListener(new View.OnTouchListener() {
@Override
public boolean onTouch(View view, MotionEvent event) {
MainActivity.this.x = event.getX();
y = event.getY();
if (event.getAction() == MotionEvent.ACTION_MOVE) {
net.setX(MainActivity.this.x);
net.setY(y);
}
return true;
}
});
}
public boolean Collision(ImageView net, ImageView ball)
{
Rect AR = new Rect();
net.getHitRect(AR);
Rect BR = new Rect();
ball.getHitRect(BR);
Render();
return AR.intersect(BR) || AR.contains(BR) || BR.contains(AR);
}
public void Render()
{
if(Collision(net, ball))
{
points++;
TextView score = (TextView)findViewById(R.id.score);
score.setText("Score:" + points);
}
}
}
```<issue_comment>username_1: For me it helps to change maven dependency on Kotlin in pom.xml from
```
org.jetbrains.kotlin
kotlin-stdlib-jdk8
1.2.30
```
to
```
org.jetbrains.kotlin
kotlin-stdlib
1.3.31
```
more info on adding Kotlin to project as maven dependency can be found here:
<https://kotlinlang.org/docs/reference/using-maven.html>
Upvotes: 2 <issue_comment>username_2: Solved this by manually importing source code in intelliJ-idea.
Get source file from [github-release](https://github.com/JetBrains/kotlin/releases)
I selected [1.6.21](https://github.com/JetBrains/kotlin/archive/refs/tags/v1.6.21.zip) and added at following macos path(or find .m2 maven folder at respective OS)
```
/Users/${USER}/.m2/repository/org/jetbrains/kotlin/kotlin-stdlib/
e.g.
/Users/${USER}/.m2/repository/org/jetbrains/kotlin/kotlin-stdlib/1.6.21
```
When IDE prompts to download a source or attach a source, similar to following screen

Verify by checking the implementation of take for String class i.e. `"abcd".take(3)`.
when you go to take method's implementation before IDE decompiles String class and points to `StringsKt.java` but after source is attached it points to `_Strings.kt`
And it looks as
[](https://i.stack.imgur.com/KBT8B.png)
Upvotes: 0
|
2018/03/16
| 632 | 2,304 |
<issue_start>username_0: I have two applications.
From one applications we are coming to second applications.
from the first application
<http://localhost:8080/myfirst-portal/?account_number=RDQsssyMDE=&firstname=UssssGhpbA==&lastname=RGFsssuY2U=&role=csssG9ydGFsZmllbGR1c2Vy>
while coming from first application to second application it will send some data to second application.
second application
<http://localhost:8080/myapp/#/login>
Based on URL prams from the first application we have to redirect to specific component.
routes in angular app
{ path: '', pathMatch: 'full', redirectTo: '/login' },
{ path: '\*\*', pathMatch: 'full', redirectTo: '/login' },
what is best way to achieve this?<issue_comment>username_1: If you have your routes set in the second application, Angular Router will do the job itself.
If in your second app you have configuration similar to this for example:
```
RouterModule.forRoot([
{ path: 'login', component: LoginComponent },
])
```
And you redirect the user with the the link like this:
<http://secondapplink.com/login>
Then, Angular router will find and render Login component itself.
Of course, in production, where the application is deployed, you must set up the URL rewriting so you don't end up with errors. In development mode you should not worry about this.
Upvotes: 0 <issue_comment>username_2: subscribe to route events and then
```
ngOnInit() {
this.sub = this.route.params.subscribe(params => {
this.id = +params['id']; // (+) converts string 'id' to a number
// apply you condition here{
this.router.navigateByUrl(['show_alunos']);
}
});
}
```
and if you want to read detailed explanation you may read this topic
using [click here](https://scotch.io/tutorials/routing-angular-2-single-page-apps-with-the-component-router) having good topics
also, configure you route file
```
RouterModule.forRoot([
{ path: 'login', component: LoginComponent },
{ path: 'nomination', component: HomeComponent }
])
```
try with NavigationEnd
```
previousUrl: string;
constructor(router: Router) {
router.events
.filter(event => event instanceof NavigationEnd)
.subscribe(e => {
console.log('prev:', this.previousUrl);
this.previousUrl = e.url;
});
}
```
Upvotes: 2 [selected_answer]
|
2018/03/16
| 534 | 2,229 |
<issue_start>username_0: I'm new to PostScript and I've been tasked to create a postscript file which is supposed to show a small image. The thing is that I can't rely on the run operator, as one of the premisses is that the file is kinda "self-contained".
I've seen quite a few solutions online but they were either incomplete or failed for me. Google's highest listed one was made by "loser droog" ([found under following link](https://stackoverflow.com/questions/15167454/how-include-img-in-postscript)). To test I redid everything again but failed at the point where he called "head spacewar.asc" and my output and his resembled the first view characters but later on they were completely different.
Now I am wondering how I can embed a .jpg/.jpeg file in my PostScripts sourcecode and display it afterwards.
With kind regards.<issue_comment>username_1: Take Ghostscript; use the viewjpeg.ps program to read the original JPEG, and use the ps2write device to create a PostScript file from it. You can also (with some effort) read the PostScript program resulting from that, which might give you some insight into one possible approach to the problem.
The upcoming release (9.23) should preserve the JPEG data unchanged, though I haven't actually tried that. Earlier versions will decompress and recompress the data, you don't want to do that.
To be honest, as I said to someone else earlier this week, if you're going to work in PostScript you need to understand the language. Images and inline image data are moderately complicated. You should get whoever is tasking you with this to send you for some training.
Also, rather than Google for canned answers, try the printed reference works and write your own. The Blue Book (PostScript language tutorial and cookbook), Green Book (PostScript Language program design) and Red Book (PostScript Language Reference Manual) are all available online.
Upvotes: 1 <issue_comment>username_2: Thanky you for the answers. I found a workaround for my case as I had a simple black-white-grey picture to insert.
I had converted it to binary and used `image` to draw it which worked out quite nicely after using [this guide](http://paulbourke.net/dataformats/postscript/).
Upvotes: 0
|
2018/03/16
| 749 | 2,624 |
<issue_start>username_0: I'm attempting to use to Python Tesseract to get text fron an image on my macos desktop and am running into an error that I cannot figure out. I'm running macos High Sierra 10.3.2
My directory is set to my desktop (where the image lives) and I already specified the path to my tesseract executable.
I'm running
```
print(pytesseract.image_to_string(Image.open('test.png'))
```
and getting the following error:
```
File "/Users/name/anaconda2/lib/python2.7/site-packages/pytesseract/pytesseract.py", line 140, in run_and_get_output
run_tesseract(**kwargs)
File "/Users/name/anaconda2/lib/python2.7/site-packages/pytesseract/pytesseract.py", line 116, in run_tesseract
raise TesseractError(status_code, get_errors(error_string))
pytesseract.pytesseract.TesseractError: (1, u'File "/<KEY>ess_cK4lka.PNG", line 1 SyntaxError: Non-ASCII character \'\\x89\' in file /<KEY>tess_cK4lka.PNG on line 1, but no encoding declared; see http://python.org/dev/peps/pep-0263/ for details')
```
Any idea what might be causing this and how to get around it? Would be happy to provide any clarifying details.
Thanks!<issue_comment>username_1: Take Ghostscript; use the viewjpeg.ps program to read the original JPEG, and use the ps2write device to create a PostScript file from it. You can also (with some effort) read the PostScript program resulting from that, which might give you some insight into one possible approach to the problem.
The upcoming release (9.23) should preserve the JPEG data unchanged, though I haven't actually tried that. Earlier versions will decompress and recompress the data, you don't want to do that.
To be honest, as I said to someone else earlier this week, if you're going to work in PostScript you need to understand the language. Images and inline image data are moderately complicated. You should get whoever is tasking you with this to send you for some training.
Also, rather than Google for canned answers, try the printed reference works and write your own. The Blue Book (PostScript language tutorial and cookbook), Green Book (PostScript Language program design) and Red Book (PostScript Language Reference Manual) are all available online.
Upvotes: 1 <issue_comment>username_2: Thanky you for the answers. I found a workaround for my case as I had a simple black-white-grey picture to insert.
I had converted it to binary and used `image` to draw it which worked out quite nicely after using [this guide](http://paulbourke.net/dataformats/postscript/).
Upvotes: 0
|
2018/03/16
| 951 | 3,441 |
<issue_start>username_0: In my Angular application, my @ViewChild instance is failing to fill HTL matSort.
**mycomponent.ts:**
```
import { MatSort } from '@angular/material';
export class MyClassComponent {
@ViewChild(MatSort) sort: MatSort;
}
ngOnInit() {
console.log(this.sort); // undefined
}
```
**mycomponent.html:**
```
.......................................................
.......................................................
```
I need to use sortChange from matSort but it is always returned undefined.<issue_comment>username_1: It will be initialized and available in the [`AfterViewInit`](https://angular.io/api/core/AfterViewInit) lifecycle hook:
```
export class MyClassComponent implements AfterViewInit {
@ViewChild(MatSort) sort: MatSort;
ngAfterViewInit() {
console.log(this.sort); // MatSort{}
}
}
```
More info on lifecycle hooks here: <https://angular.io/guide/lifecycle-hooks>.
Upvotes: 5 [selected_answer]<issue_comment>username_2: You have probably already imported at the class level with:
```
import { MatSort, MatTableDataSource } from '@angular/material';
```
This makes the types MatSort and MatTableDataSource available within your .ts class. However, you are also trying to use the related modules as directives in your component's .html file, to do this your app.module.ts needs to be importing them, I do this with a separate NgModule that imports all of the material components I use i.e.
`material\material.module.ts`
```
import { NgModule } from '@angular/core';
import {
MatTableModule,
MatSortModule,
} from '@angular/material';
@NgModule({
imports: [
MatTableModule,
MatSortModule,
],
exports: [
MatTableModule,
MatSortModule,
]
})
export class MaterialModule { }
```
`app.module.ts`
```
import { BrowserModule } from '@angular/platform-browser';
import { NgModule } from '@angular/core';
import { BrowserAnimationsModule } from '@angular/platform-browser/animations';
import { HttpClientModule } from '@angular/common/http';
import { MaterialModule } from './material/material.module';
import { AppComponent } from './app.component';
import { myComponent } from './my/my.component';
@NgModule({
declarations: [
AppComponent,
MyComponent
],
imports: [
BrowserModule,
MaterialModule,
BrowserAnimationsModule,
HttpClientModule,
],
providers: [],
bootstrap: [AppComponent]
})
export class AppModule { }
```
Upvotes: 3 <issue_comment>username_3: I'm working with angular 7 what solved the issue for me was adding **MatSortModule** in the **app.modules.ts**. The error I got is the one mentioned here. No indication appeared about the MatSortModule being missing.
Original answer here:
[Angular 5 - Missing MatSortModule](https://stackoverflow.com/a/52312614)
Upvotes: 3 <issue_comment>username_4: In Angular 8 I had to use a setter and optional parameter static=false:
```
@ViewChild(MatSort, {static: false})
set sort(sort: MatSort) {
this.sortiments.sort = sort;
this.selectedSortiments.sort = sort;
}
```
I found the answer from here:
<https://stackoverflow.com/a/47519683/3437868>
<https://github.com/angular/components/issues/15008#issuecomment-516386055>
Upvotes: 2 <issue_comment>username_5: 1. You must add
`@ViewChild(MatSort,{static:false}) sort: MatSort;`
2. Make sure that there no `*ngIf` directory in your table template
Upvotes: 2
|
2018/03/16
| 1,657 | 4,774 |
<issue_start>username_0: I'm trying to get OpenSSL 1.0.2n to build using the latest Android NDK r16b. Building for 32-bit archs (armv7, x86) works just fine, but when I try building for 64-bit archs (arm64, x86\_64) I get a linker error stating that bsd\_signal is undefined:
```
shlib_target=; if [ -n "libcrypto.so.1.0.0 libssl.so.1.0.0" ]; then \
shlib_target="linux-shared"; \
elif [ -n "" ]; then \
FIPSLD_CC="aarch64-linux-android-gcc"; CC=/usr/local/ssl/fips-2.0/bin/fipsld; export CC FIPSLD_CC; \
fi; \
LIBRARIES="-L.. -lssl -L.. -lcrypto" ; \
/Applications/Xcode.app/Contents/Developer/usr/bin/make -f ../Makefile.shared -e \
APPNAME=openssl OBJECTS="openssl.o verify.o asn1pars.o req.o dgst.o dh.o dhparam.o enc.o passwd.o gendh.o errstr.o ca.o pkcs7.o crl2p7.o crl.o rsa.o rsautl.o dsa.o dsaparam.o ec.o ecparam.o x509.o genrsa.o gendsa.o genpkey.o s_server.o s_client.o speed.o s_time.o apps.o s_cb.o s_socket.o app_rand.o version.o sess_id.o ciphers.o nseq.o pkcs12.o pkcs8.o pkey.o pkeyparam.o pkeyutl.o spkac.o smime.o cms.o rand.o engine.o ocsp.o prime.o ts.o srp.o" \
LIBDEPS=" $LIBRARIES -ldl" \
link_app.${shlib_target}
req.o: In function `req_main':
req.c:(.text+0x368): undefined reference to `bsd_signal'
ca.o: In function `ca_main':
ca.c:(.text+0xe90): undefined reference to `bsd_signal'
ecparam.o: In function `ecparam_main':
ecparam.c:(.text+0x30): undefined reference to `bsd_signal'
s_server.o: In function `s_server_main':
s_server.c:(.text+0x32c0): undefined reference to `bsd_signal'
pkcs12.o: In function `pkcs12_main':
pkcs12.c:(.text+0x1134): undefined reference to `bsd_signal'
cms.o:cms.c:(.text+0x98): more undefined references to `bsd_signal' follow
collect2: error: ld returned 1 exit status
```
I saw that `bsd_signal` had been omitted from NDK at one point, but it was added back in NDK 13 (<https://github.com/android-ndk/ndk/issues/160>). Besides, if it were missing entirely I would expect the 32-bit builds to fail as well.
Here are the configurations I'm attempting to use for the arm64 build (this is actually done with a script, which is quite long. To avoid pasting the whole nonsense here, these are the values that wind up being used when it is executed):
```
export MACHINE=armv7
export ARCH=arm64
export CROSS_COMPILE="aarch64-linux-android-"
export ANDROID_SYSROOT="$ANDROID_NDK_ROOT/platforms/android-21/arch-arm64"
export SYSROOT="$ANDROID_SYSROOT"
export NDK_SYSROOT="$ANDROID_SYSROOT"
export ANDROID_NDK_SYSROOT="$ANDROID_SYSROOT"
export ANDROID_API=android-21
export ANDROID_DEV="$ANDROID_NDK_ROOT/platforms/android-21/arch-arm64/usr"
export HOSTCC=gcc
export ANDROID_TOOLCHAIN="$ANDROID_NDK_ROOT/toolchains/aarch64-linux-android-4.9/prebuilt/darwin-x86_64/bin"
export PATH="$ANDROID_TOOLCHAIN":"$PATH"
./Configure shared no-ssl2 no-ssl3 no-comp no-hw no-engine linux-generic64 --openssldir=/usr/local/ssl/android-21 -fPIE -D__ANDROID_API__=android-21 -I$ANDROID_NDK_ROOT/sysroot/usr/include -I$ANDROID_NDK_ROOT/sysroot/usr/include/aarch64-linux-android -B$ANDROID_NDK_ROOT/platforms/android-21/arch-arm64/usr/lib
make clean
make CALC_VERSIONS="SHLIB_COMPAT=; SHLIB_SOVER=" depend
make CALC_VERSIONS="SHLIB_COMPAT=; SHLIB_SOVER=" all
```
I've tried so many different things at this point I couldn't even begin to list them.
Anyone see what I'm missing here?<issue_comment>username_1: 1. I would recommend to use the **make** that is shipped with Android NDK to build with NDK toolchain. If it's not on your PATH, you'll find it at
```
$ANDROID_NDK_ROOT/prebuilt/darwin-x86_64/bin/make
```
I don't think this is a real cause of your problem here.
2. **bsd\_signal** is exported from `platforms/android-21/arch-arm/usr/lib/libc.so`, and also the corresponding `libc.a`, but not from `platforms/android-21/arch-arm64/usr/lib/libc.so`.
3. It is marked as **`__REMOVED_IN(21)`** in the unified headers, so I would expect the compiler to issue a warning about using an undefined function.
4. A possible workaround is to provide a dummy `bsd_signal`, as *<NAME>* proposed [on GitHub](https://github.com/felipejfc/openssl_prebuilt_for_android).
5. The issue with **bsd\_signal** seems to have been resolved in openssl 1.1 series.
6. You have a mistake on command line: use **`-D__ANDROID_API__=21`** instead.
Upvotes: 3 [selected_answer]<issue_comment>username_2: This definitely looks like a case of building against one API level and linking against another (or perhaps a mismatch of the header ABI vs the library ABI?). To rule out any configuration issues (and to simply your build even if it doesn't solve the issue), I'd recommend using a [standalone toolchain](https://developer.android.com/ndk/guides/standalone_toolchain).
Upvotes: 0
|
2018/03/16
| 632 | 2,125 |
<issue_start>username_0: I have an object `comments`. It **can** be undefined in some cases and in the other cases it looks like
```
{
id: 1,
...,
next_page: 'someurl' // can also be undefined in some cases
}
```
I need to grab that `next_page` property if it exist or make it null in other cases. My code:
```
let next_page = null;
if (comments && comments.next_page)
next_page = comments.next_page
```
It works, but i'm feeling like there is some easier way. Am i right?<issue_comment>username_1: Does it have to be null? If so, go with:
```
const next_page = comments && comment.next_page || null
```
If `undefined` is fine as well, I'd suggest:
```
const next_page = comments && comment.next_page
```
Javascript is behaving a bit unexpectedly here. The result of the expression right of the equals sign is `undefined` if `next_page` does not exist on `comment`, but `comment.next_page` if it does exist.
*Edit:
As pointed out in a different comment: be careful when `next_page` is a falsy value as version #1 will return `null`, when `comment.next_page` is `0`.*
Upvotes: 3 [selected_answer]<issue_comment>username_2: The expression in the `if()` block returns the value of the expression (not boolean), so you can condense it into one line:
```
let next_page = ( comments && comments.next_page );
```
Upvotes: 0 <issue_comment>username_3: If the property is missing maybe you can have a template object and won't have to assign it.
```
const template = { id: null, nextPage: nul};
const useThis = Object.assign({}, template, dataMaybeHasNextPage)
// if dataMaybeHasNextPage object has nextPage then useThis.nextPage would a url
```
That way nextPage would always be set.
<https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Object/assign>
```js
const defaultCommentObj = { id: null, nextPage: null};
const commentA = Object.assign({}, defaultCommentObj, {
id: 1,
nextPage: 'https://google.com',
});
const commentB = Object.assign({}, defaultCommentObj, {
id: 2,
});
console.log('A : ', commentA);
console.log('B : ', commentB);
```
Upvotes: 0
|
2018/03/16
| 1,182 | 3,070 |
<issue_start>username_0: I know this is basic but I've been struggling for a few hours now and I can't seem to apply one of the many ways there are to convert a string to datetime so I can save it in the database in this format `2018-03-16 00:12:17.555372`. Thanks ahead
This is the string output in the console.
```
params[:event][:start_date]
"03/28/2018 1:46 AM"
```
[EDIT] Following some leads I've come up with smething really dirty maybe someone can help refactor I'm supressing AM or PM because I don't know how to parse that I know it's awfull any help is appreciated!
```
if !params[:event][:start_date].empty?
start_date = params[:event][:start_date]
start_date = start_date.gsub(/[AMP]/, '').squish
a = start_date.split('/')
tmp = a[0]
a[0] = a[1]
a[1] = tmp
a = a.split(',').join('/')
start_date = Time.parse(a)
end
if !params[:event][:end_date].empty?
end_date = params[:event][:end_date]
end_date = end_date.gsub(/[AMP]/, '').squish
a = end_date.split('/')
tmp = a[0]
a[0] = a[1]
a[1] = tmp
a = a.split(',').join('/')
end_date = Time.parse(a)
end
```<issue_comment>username_1: You can parse it like so in ruby:
>
> Parses the given representation of date and time, and creates a DateTime object. This method does not function as a validator.
>
>
>
```
DateTime.parse('2001-02-03T04:05:06+07:00')
#=> #
DateTime.parse('20010203T040506+0700')
#=> #
DateTime.parse('3rd Feb 2001 04:05:06 PM')
#=> #
```
Not entirely sure if the string you supplied can be parsed, here is the link to the ruby docs on datetimes [Docs](http://ruby-doc.org/stdlib-2.5.0/libdoc/date/rdoc/DateTime.html#M000485)
Upvotes: 2 <issue_comment>username_2: You can use [DateTime](http://ruby-doc.org/stdlib/libdoc/date/rdoc/DateTime.html) to parse the date from a specific format.
if the format you are looking to parse is "03/28/2018 1:46 AM" then you can do this.
```
date = DateTime.strptime('03/28/2018 1:46 AM', '%m/%d/%Y %I:%M %p')
# date to ISO 8601
puts date.to_time
# output: 2018-03-28 07:16:00 +0530
puts date.strftime("%m/%d/%Y")
# output: 03/28/2018
```
Date formats:
```
Date (Year, Month, Day):
%Y - Year with century (can be negative, 4 digits at least)
-0001, 0000, 1995, 2009, 14292, etc.
%m - Month of the year, zero-padded (01..12)
%_m blank-padded ( 1..12)
%-m no-padded (1..12)
%d - Day of the month, zero-padded (01..31)
%-d no-padded (1..31)
Time (Hour, Minute, Second, Subsecond):
%H - Hour of the day, 24-hour clock, zero-padded (00..23)
%k - Hour of the day, 24-hour clock, blank-padded ( 0..23)
%I - Hour of the day, 12-hour clock, zero-padded (01..12)
%l - Hour of the day, 12-hour clock, blank-padded ( 1..12)
%P - Meridian indicator, lowercase (``am'' or ``pm'')
%p - Meridian indicator, uppercase (``AM'' or ``PM'')
%M - Minute of the hour (00..59)
```
You can refer to all formats [here](http://ruby-doc.org/stdlib/libdoc/date/rdoc/DateTime.html#method-i-iso8601).
Upvotes: 5 [selected_answer]
|
2018/03/16
| 802 | 2,941 |
<issue_start>username_0: I have a requirement where coupons are associated to each state. Below is the table structure
```
╔═════════╦═══════╗
║ Coupon ║ State ║
╠═════════╬═══════╣
║ Coupon1 ║ CA ║
║ Coupon2 ║ CA ║
║ Coupon3 ║ NULL ║
╚═════════╩═══════╝
```
If no coupons are found for the state the default coupon will be shown to the users.How can i change the below lambda expression in such a way to achieve the result. For instance if the passed in state is "CA" the below expression is returning both values and getting the first set instead of getting the accurate result (get coupons associated to CA since this is already found)
```
var loggedinUserCoupons = loggedinUser.Coupons.Where(x => x.StateId == loggedinUsers.StateID || x.StateId == null).FirstOrDefault();
```<issue_comment>username_1: You'll need to do this in two steps.
First, attempt to find all the `Coupon`'s that have their state id as say `CA` like so:
```
var loggedinUserCoupons =
loggedinUser.Coupons
.Where(x => x.StateId == loggedinUsers.StateID)
.ToList();
```
if no elements where found then we can get the default `Coupon` you've provided like so:
```
if(loggedinUserCoupons.Count == 0)
loggedinUserCoupons = new List() {
loggedinUser.Coupons
.FirstOrDefault(x => x.StateId == null) };
```
If there will always be a default `Coupon` at the end of the list then you can simply do:
```
if(loggedinUserCoupons.Count == 0)
loggedinUserCoupons = new List() {
loggedinUser.Coupons
.Last() };
```
Upvotes: 0 <issue_comment>username_2: The [`DefaultIfEmpty`](https://msdn.microsoft.com/en-us/library/bb355419.aspx) function lets you provide a fallback if your main query returns no results:
```
var fallback = loggedinUser.Coupons
.Where(x => x.StateId == null)
.First();
var loggedinUserCoupons = loggedinUser.Coupons
.Where(x => x.StateId == loggedinUsers.StateID)
.DefaultIfEmpty(fallback);
```
If it's important to enumerate the source only once, you can use this:
```
IEnumerable WhereWithDefault(this IEnumerable source,
Predicate primaryCondition,
Predicate fallbackCondition)
{
var fallback = new List();
bool foundPrimary = false;
foreach( T t in source ) {
if (primaryCondition(t)) { foundPrimary = true; fallback.Clear(); yield return t; }
else if (!foundPrimary && fallbackCondition(t)) { fallback.Add(t); }
}
if (foundPrimary) yield break;
foreach (T t in fallback) yield return t;
}
var loggedinUserCoupons = loggedinUser.Coupons
.WhereWithDefault(x => x.StateId == loggedinUsers.StateID, x => x.StateId == null);
```
But note that the first approach is better for LINQ-to-SQL, because the database only has to return objects that match the two IDs and not the whole table.
Upvotes: 3 [selected_answer]
|
2018/03/16
| 737 | 2,844 |
<issue_start>username_0: I am trying to create code for sending document for signature request using docusign API:
1. I already downloaded the PHP SDK
2. I already opened a developer sandbox account
3. But after filling in some info in the PHP SDK file and run, it gave me the above error.
I am guessing the error may mean that I have not gone through the steps to configure client and request authorization code.
I tried to follow the steps on that page, but the instructions are so vague and difficult to follow. I clicked on the 'use sandbox account data' on the upper right corner of the C# code, but it only asked me to log in to my account and directed me back to the same page.
I don't know where to start from there.
I need a more step-by-step instruction.
Thanks<issue_comment>username_1: <NAME> 401 translates to an unauthorized API request and the response should say which part is wrong aka the username / password or Integrator Key is incorrect.
I really suggest using PostMan Client and / or SoapUI to validate you can get login information and see the actual error returned, as it will say Integrator Key or Username/Password in the response. PostMan or SoapUI will issue the actual API calls successfully (Demo or Prod once you have passed prod API certification) showing you the actual response without having to worry about coding typo's, properties of objects or debugging logging to see what really came back.
This allows you to learn the DocuSign API while using the PHP SDK. You did the right thing asking for help here on Stack Overflow.
I have a couple links for you to get started:
1. DocuSign Developer Center where you probably found the PHP SDK <https://developers.docusign.com/esign-rest-api/sdk-tools>
2. Link for info on PostMan (older version, you can get the latest one - aka Orange Rocket Man logo vs blue world, for Chrome or Mac Standalone - <https://blog.grigsbyconsultingllc.com/postman-rest-client-a-google-chrome-app/>
3. X-DocuSign-Authentication Header Q&A on StackOverflow [How should the header X-DocuSign-Authentication be used for REST and SOAP?](https://stackoverflow.com/questions/22028119/how-should-the-header-x-docusign-authentication-be-used-for-rest-and-soap)
4. I like the concept of the DocuSign API explorer to start with, sad part is it doesn't work against prod, so you still have to use something else when you move from demo to prod.
Best of Luck and Enjoy your DocuSign API journey!
Upvotes: 1 <issue_comment>username_2: You can call `$config->setDebug(true)` in the dev environment to have all the errors displayed.
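For example, a minimal sketch (assuming the standard `docusign-esign` PHP SDK configuration class; adjust to your setup):
```
$config = new DocuSign\eSign\Configuration();
$config->setHost('https://demo.docusign.net/restapi'); // sandbox host
$config->setDebug(true); // dumps request/response details, handy for 401s
$apiClient = new DocuSign\eSign\ApiClient($config);
```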
Upvotes: 1 <issue_comment>username_3: To fetch the error, use this:
```
try{
//send request
}
catch (DocuSign\eSign\ApiException $ex){
print_r($ex); // <--- prints a readable version of the error
echo "Exception: " . $ex->getMessage() . "\n";
}
```
Upvotes: 0
|
2018/03/16
| 314 | 1,120 |
<issue_start>username_0: I'm trying to utilize the semantic ui accordion behavior to open and close accordions based on a button click, however this is not functioning as I would expect.
From the documentation:
```
$('.ui.accordion').accordion('behavior', argumentOne, argumentTwo...);
```
I am attempting to utilize the `toggle (index)` behavior. My expectation is
```
$('.ui.accordion').accordion('toggle', 1);
```
would open the accordion at index 1 on the page and close the others, but adding that behavior call to a button's click event does not toggle any accordion.
CodePen of the issue here <https://codepen.io/jasonyost/pen/ZxOvPW><issue_comment>username_1: This is because you are defining a new accordion every time. Take a look at this code, which works:
```
Index 0
Index 0 shown
Index 1
Index 1 shown
Index 2
Index 2 shown
Toggle
```
<https://jsfiddle.net/d2s8nhw3/>
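In other words, initialize the accordion once and then reuse the behavior API from the click handler. A minimal sketch (the `#toggleBtn` id is only an assumption for illustration):
```
// initialize once, on page load
$('.ui.accordion').accordion();

// reuse the already-initialized accordion on each click
$('#toggleBtn').on('click', function () {
  $('.ui.accordion').accordion('toggle', 1);
});
```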
Upvotes: 2 [selected_answer]<issue_comment>username_2: It's zero-based, which means that for the first level of the accordions your code would be:
```
$('.ui.accordion').accordion('toggle', 0);
```
Upvotes: 2
|
2018/03/16
| 1,041 | 4,152 |
<issue_start>username_0: I started with Unity a few weeks ago and I'm stuck on some display problems.
You can see what I want to achieve in the screenshot here (image not preserved).
On several of my pages I have a title with an icon on the left and I want it to be at 40px on the left of the title, no matter the size of it.
So I made a prefab with an empty Text for the title and an empty RawImage for the icon, and I put that prefab at the top center of the screen.
I attached the following C# script to my prefab, with the 'string' parameter for the title and the 'Texture' parameter for the icon to display.
```
using System.Linq;
using UnityEngine;
using UnityEngine.UI;
public class TitlePage : MonoBehaviour {
public string title;
public Texture texture;
private Text titlePageText;
private RawImage iconPage;
void Awake()
{
titlePageText = gameObject.GetComponentsInChildren<Text>().First();
iconPage = gameObject.GetComponentsInChildren<RawImage>().First();
}
void Start()
{
titlePageText.text = title;
}
void Update()
{
iconPage.rectTransform.anchoredPosition = new Vector2(-titlePageText.rectTransform.rect.width / 2 - 40, iconPage.rectTransform.anchoredPosition.y);
iconPage.texture = texture;
}
}
```
The title is set correctly in the "Start" function; the problem is with the icon. The positioning code puts the icon at the wanted position, but it has to live in "Update", because in "Start", 'titlePageText.rectTransform.rect.width' returns 0. And with that code in "Update", the icon is first displayed at its default position in the center and only moves to the left a moment later, so the position change is visible.
Maybe there's a solution to avoid that ?
Maybe I started in a wrong way ?
(I don't hard-code my title's text because I want my application to be multilingual, so instead of displaying the 'string' parameter directly, I'll display the corresponding translation. That's also why I can't set a fixed position: two translations are rarely the same length.)
Thanks in advance !<issue_comment>username_1: It sounds like the width of the titlePageText is not obtaining a value immediately in Start(), which is why it is initially zero. After a couple of frames when a non-zero value is found, it finally shifts your iconPage over but your object already has a texture at this point. You really shouldn't be doing this in Update() because it means it will constantly be setting the anchoredPosition and texture of your iconPage every frame.
One solution to avoid this would be to use a coroutine with `yield` to wait for your components to initialize and then set the position. Note that this is a bit of a hack since you are introducing some forced loading time.
```
public string title;
public Texture texture;
private Text titlePageText;
private RawImage iconPage;
void Awake()
{
titlePageText = gameObject.GetComponentsInChildren<Text>().First();
iconPage = gameObject.GetComponentsInChildren<RawImage>().First();
}
void Start()
{
titlePageText.text = title;
StartCoroutine(InitializeComponents());
}
IEnumerator InitializeComponents()
{
yield return new WaitForSeconds(.05f); //might need a longer wait here
iconPage.rectTransform.anchoredPosition = new Vector2(-titlePageText.rectTransform.rect.width / 2 - 40, iconPage.rectTransform.anchoredPosition.y);
iconPage.texture = texture;
}
```
Upvotes: 0 <issue_comment>username_2: I am not entirely sure what your problem is exactly, but it does sound like you are trying to solve this from a slightly awkward angle. Is there a reason for not using Unity's layout (or even auto layout groups) to achieve your goal?
I just did a quick test: I added a 'Content Size Fitter' component to the text object with Horizontal set to Preferred, then placed an image as a child of that text object. When I set the image's anchor X to 0 and pivot X to 1, the image now follows the left-hand side of the text, no matter what size it is.
This has the advantage of doing absolutely zero work unless the text (or its position) changes, in which case everything is handled automagically by Unity's UI system.
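If you prefer to wire the same thing up from code, here is a rough sketch of that setup (the `TitleLayout` name and `icon` field are mine, not Unity's; the script is assumed to sit on the text object, with the image as its child):
```
using UnityEngine;
using UnityEngine.UI;

public class TitleLayout : MonoBehaviour
{
    public RawImage icon; // assign the child icon in the inspector

    void Awake()
    {
        // let the text resize itself horizontally to its preferred width
        var fitter = gameObject.AddComponent<ContentSizeFitter>();
        fitter.horizontalFit = ContentSizeFitter.FitMode.PreferredSize;

        // pin the icon to the left edge of the text, whatever its width
        var rt = icon.rectTransform;
        rt.anchorMin = new Vector2(0f, 0.5f);
        rt.anchorMax = new Vector2(0f, 0.5f);
        rt.pivot = new Vector2(1f, 0.5f);
        rt.anchoredPosition = new Vector2(-40f, 0f); // 40px gap on the left
    }
}
```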
Upvotes: 3 [selected_answer]
|
2018/03/16
| 1,622 | 4,324 |
<issue_start>username_0: I am looping through different data.tables and the variables in each data.table, but I'm having trouble referencing the variables inside of the `for` loop.
```
dt1 <- data.table(a1 = c(1,2,3), a2 = c(4,5,2))
dt2 <- data.table(a1 = c(1,43,1), a2 = c(52,4,1))
```
For each datatable, I want to find the average of each variable for observations where that variable != 1. Below is my attempt which doesn't work:
```
dtname = 'dt'
ind = c('1', '2')
for (d in ind) {
df <- get(paste0('dt', d, sep=''))
for (v in ind) {
varname <- paste0('a', v, sep='')
df1 <- df %>%
filter(varname!=1) %>%
summarise(varname = mean(varname))
print(df1)
}
}
```
The desired output is to take and print the average of a1 = c(2,3) in dt1, the average of a2 = (4,5,2) in dt1, the average of a1 = c(43) in dt2, the average of a2 = c(54,4) in dt2.
What am I doing wrong here? In general, how should I reference a variable inside of a `for` loop (varname) that is pieced together by using the looping index (v) and something else?<issue_comment>username_1: I would go for a vectorised approach; you are using R, after all!
One possible way:
```
require(dplyr)
dt1[dt1==1] <- NA #replace 1 with NA
dt1 %>% summarise_all(mean, na.rm = TRUE) #mean of all columns.
a1 a2
1 2.5 3.666667
```
Upvotes: 0 <issue_comment>username_2: For a purely `data.table` way, I would combine the different `data.tables` and compute the averages:
```
# Concatenate the data.tables:
all_dt <- rbind("dt1" = dt1, "dt2" = dt2, idcol = "origin")
all_dt
# origin a1 a2
# 1: dt1 1 4
# 2: dt1 2 5
# 3: dt1 3 2
# 4: dt2 1 52
# 5: dt2 43 4
# 6: dt2 1 1
# Melt so that "a1" and "a2" are labels in a group column:
all_dt <- melt(all_dt, id.vars="origin")
all_dt
# origin variable value
# 1: dt1 a1 1
# 2: dt1 a1 2
# 3: dt1 a1 3
# 4: dt2 a1 1
# 5: dt2 a1 43
# 6: dt2 a1 1
# 7: dt1 a2 4
# 8: dt1 a2 5
# 9: dt1 a2 2
# 10: dt2 a2 52
# 11: dt2 a2 4
# 12: dt2 a2 1
# Compute averages by each data.table and column group, ignoring 1s:
all_dt[value != 1, .(mean = mean(value)), by = .(origin, variable)]
# origin variable mean
# 1: dt1 a1 2.500000
# 2: dt2 a1 43.000000
# 3: dt1 a2 3.666667
# 4: dt2 a2 28.000000
```
Upvotes: 2 <issue_comment>username_3: It is not very clear what you are trying to do, but if you want to replace all of the rows in the dataframe with the mean of the previous data frame's columns, I would suggest using a dataframe type instead as it is easier to index. Here is code that should work:
```
dt1 <- data.frame(a1 = c(1,2,3), a2 = c(4,5,2))
dt2 <- data.frame(a1 = c(1,43,1), a2 = c(52,4,1))
dtname = 'dt'
ind = c('1', '2')
for (d in ind){
df <- get(paste0('dt', d, sep=''))
for (i in 1:nrow(df)){
for (j in 1:ncol(df)){
if (df[i,j] !=1){
df[,j]<- mean(df[,j])
}
}
print(df)
}
}
```
The reason your code was not working before is that the variable names were being treated as strings, not as actual variables. You can see this by printing the data type of `varname`:
```
dtname = 'dt'
ind = c('1', '2')
for (d in ind) {
df <- get(paste0('dt', d, sep=''))
for (v in ind) {
varname <- paste0('a', v, sep='')
print(class(varname))
}
}
```
Which just returns "character"
Another solution using variable names and the data frame type would be to index the df with the variable itself (no quotes, so the variable's value is used as the column name):
```
df[["varname"]]
```
Here are two helpful links for this kind of operation:
* [link 1: How to find the mean of a column](https://stackoverflow.com/questions/23163863/r-need-help-to-calculate-the-mean-of-a-column-in-a-data-frame)
* [link 2: Data frames](http://www.r-tutor.com/r-introduction/data-frame)
Upvotes: 0 <issue_comment>username_4: I figured out a solution based on the comments of @Amar and @Scott Richie
```
for (d in ind) {
df <- get(paste0('dt', d, sep=''))
for (v in ind) {
varname <- paste0('a', v, sep='')
df1 <- df[eval(as.name(varname))!=1, .(mean =
mean(eval(as.name(varname))))]
print(df1)
}
}
```
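As a side note, `get()` appears to work as a shorter equivalent of `eval(as.name(...))` inside data.table here (same variables as above):
```
df1 <- df[get(varname) != 1, .(mean = mean(get(varname)))]
```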
Thanks EVERYONE!
Upvotes: 1
|
2018/03/16
| 1,400 | 3,695 |
<issue_start>username_0: How to reduce / aggregate multiple fields in a table? This does not seem efficient:
```
r.object(
'favorite_count',
r.db('twitterdb').table('tweets').map(tweet => tweet('favorite_count')).avg().round(),
'retweet_count',
r.db('twitterdb').table('tweets').map(tweet => tweet('retweet_count')).avg().round()
)
```
(Expected) Result:
```
{
"favorite_count": 17 ,
"retweet_count": 156
}
```
|
2018/03/16
| 211 | 730 |
<issue_start>username_0: We are using Jest for UI TDD of React application.
We have the following structure of component rendering:
```
```
We are trying to get the Row that contains the Link.
We tried the following code:
```
const rowProps = wrapper.find(Row).at(0)
```
How to locate to row?<issue_comment>username_1: You should be able to do:
```
const rowProps = wrapper.find('Row Link')
```
Upvotes: 0 <issue_comment>username_2: If you want to find the `Link` component - use @Tony's answer.
But if you want to locate the `Row` component which contains the `Link` component, use this code (it works only if you have one `Link` component inside the mounted wrapper):
```
const rowWithLink = wrapper.find('Link').parent()
```
Upvotes: 2 [selected_answer]
|
2018/03/16
| 317 | 1,169 |
<issue_start>username_0: **Scenario**
A teacher at a university is able to search for a student by using their first name and last name. Similarly, a student can search for a teacher using their first name and last name.
**What I've Done**
I have used an association line with a label, *searchesFor*, to denote that a teacher can search for a student and vice versa. I have also used the *no more than one* multiplicity notation.
[](https://i.stack.imgur.com/ZnluD.png)
**Question**
If I don't use a filled arrow next to *searchesFor* to indicate the direction of the relationship, would my solution be naturally read as stated in the scenario?
|
2018/03/16
| 1,392 | 4,063 |
<issue_start>username_0: So I implemented this Pascal Triangle program in C, and it works well up until the 13th line, after which the values are no longer correct. I believe the combination function is correct: a k-combination of n elements can be written with factorials, as the combination Wikipedia page says. Here's the code:
```
#include <stdio.h>
int factorial(int number);
int combination(int n, int k);
int main() {
int lines;
int i, j;
printf("Number of Pascal Triangle lines: ");
scanf("%d", &lines);
for (i = 0; i <= lines; i++) {
for (j = 0; j <= i; j++)
printf("%d ", combination(i, j));
printf("\n");
}
}
int combination(int n, int k) {
int comb;
comb = (factorial(n)) / (factorial(k) * factorial(n - k));
return comb;
}
int factorial(int number) {
int factorial = 1;
int i;
for (i = 1; i <= number; i++)
factorial = factorial * i;
return factorial;
}
```<issue_comment>username_1: Computing Pascal's triangle straight from the binomial formula is a bad idea because
* the computation of the factorial in the numerator is overflow-prone,
* every computation requires the evaluation of about `n` products (`k + n - k`) and a division (plus `n!` computed once), for a total of `n²` per row.
A much more efficient solution is by means of Pascal's rule (every element is the sum of the two elements above it). If you store a row, the next row is obtained with just `n` additions. And this only overflows when the element value is too large to be representable.
---
In case you only need the `n`-th row, you can use the recurrence
```
C(n,k) = C(n,k-1).(n-k+1)/k
```
This involves `2n` additions, `n` multiplications and `n` divisions, and can overflow even for representable values. Due to the high cost of divisions, for moderate `n` it is probably better to evaluate the whole triangle! (Or just hard-code it.)
If you need a single element, this recurrence is attractive. Use symmetry for `k` above `n/2` (`C(n,k) = C(n,n-k)`).
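For illustration, a minimal sketch of that single-element recurrence (the division is exact at every step, because each intermediate value is itself a binomial coefficient):
```
#include <stdio.h>

/* C(n,k) via C(n,k-1)*(n-k+1)/k, using symmetry to halve the work */
unsigned long long binomial(int n, int k)
{
    if (k < 0 || k > n)
        return 0;
    if (k > n - k)
        k = n - k;               /* C(n,k) == C(n,n-k) */
    unsigned long long c = 1;
    for (int i = 1; i <= k; i++)
        c = c * (n - k + i) / i; /* exact: c equals C(n-k+i, i) here */
    return c;
}

int main(void)
{
    printf("%llu\n", binomial(52, 5)); /* 2598960 */
    return 0;
}
```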
Upvotes: 2 <issue_comment>username_2: Your implementation cannot handle even moderately large values of `n` because factorial(n) causes an arithmetic overflow for `n >= 13`.
Here is a simplistic recursive implementation that can handle larger values, albeit very slowly:
```
#include <stdio.h>
int combination(int n, int k) {
if (n < 0 || k < 0 || k > n)
return 0;
if (k == 0 || k == n)
return 1;
return combination(n - 1, k - 1) + combination(n - 1, k);
}
int main() {
int lines, i, j;
printf("Number of Pascal Triangle lines: ");
if (scanf("%d", &lines) != 1)
return 1;
for (i = 0; i <= lines; i++) {
for (j = 0; j <= i; j++) {
printf("%d ", combination(i, j));
}
printf("\n");
}
return 0;
}
```
Notes:
* This implementation illustrates how vastly inefficient recursive implementations can become.
* Since you are printing a complete triangle, you should store intermediary results and compute one full line at a time from the previous line very efficiently, but still limited by the range of `unsigned long long`, 67 lines.
Here is a faster alternative:
```
#include <stdio.h>
int main() {
int lines, i, j;
printf("Number of Pascal Triangle lines: ");
if (scanf("%d", &lines) != 1 || lines < 0 || lines > 67)
return 1;
unsigned long long comb[lines + 1];
for (i = 0; i <= lines; i++) {
comb[i] = 0;
}
comb[0] = 1;
for (i = 0; i <= lines; i++) {
for (j = i; j > 0; j--) {
comb[j] += comb[j - 1];
}
for (j = 0; j <= i; j++) {
printf("%llu ", comb[j]);
}
printf("\n");
}
return 0;
}
```
Upvotes: 0 <issue_comment>username_3: Hope the following code helps:
```
/* elements of the Pascal's triangle for 10 rows */
#include <stdio.h>
int main()
{
int p[11][11];
int i,j,k;
for(i=1;i<=10;i++)
{
/* creating whitespaces */
for(k=i;k<=10;k++)
{
printf(" ");
}
for(j=1;j<=i;j++)
{
/* printing the boundary elements, i.e. 1 */
if(j==1 || i==j)
{
p[i][j]=1;
printf("%3d ",p[i][j]);
}
/* printing the rest of the elements */
else
{
p[i][j]=p[i-1][j-1]+p[i-1][j];
printf("%3d ",p[i][j]);
}
}
printf("\n");
}
}
```
Thanks
Upvotes: 0
|
2018/03/16
| 831 | 3,080 |
<issue_start>username_0: I am working on a problem where I have to pass an rspec test. The problem is that the method uses the same name as a built-in Ruby method, `.count`.
Given that I cannot change the rspec test, is it possible to override `.count` to behave differently? If not, is there a better way to get around this?
here is the rspec test I am trying to pass
```
subject = FinancialSummary.one_day(user: user, currency: :usd)
expect(subject.count(:deposit)).to eq(2)
```
my code:
```
class FinancialSummary
def self.one_day(user: user, currency: currency)
one_day_range = Date.today.beginning_of_day..Date.today.end_of_day
find_transaction(user.id, currency).where(created_at: one_day_range)
end
def self.find_transaction(user_id, currency)
Transaction.where(user_id: user_id,
amount_currency: currency.to_s.upcase
)
end
end
```
output:
```
[#,
#,
#]
```
It is printing out what I believe to be the correct information, up until the test attempts to `count` the transactions by their `category: 'deposit'`. Then I get this error message:
`ActiveRecord::StatementInvalid: SQLite3::SQLException: no such column: deposit: SELECT COUNT(deposit) FROM "transactions" WHERE "transactions"."user_id" = ? AND "transactions"."amount_currency" = ?`
EDITED FOR MORE INFO<issue_comment>username_1: It's not clear what object the count message is being sent to because I don't know what `FinancialSummary.one_day(user: user, currency: :usd)` returns, but it seems like you are saying `count` is a method on whatever it returns, that you can't change. What does `FinancialSummary.one_day(user: user, currency: :usd).class` return?
Perhaps one solution would be to alias it on that object by adding `alias_method :count, :account_count` and then in your test calling `expect(subject.account_count(:deposit)).to eq(2)`
It would be easier if you could post the FinancialSummary#one_day method in your question.
Upvotes: 0 <issue_comment>username_2: **Some Assumptions Were Made in the Writing of this answer and modifications may be made based on updated specifications**
Overriding `count` is a bad idea because others who view or use your code will have no idea that this is not the count they know and understand.
Instead consider creating a scope for this like
```
class FinancialSummary < ApplicationRecord
scope :one_day, ->(user:,currency:) { where(user: user, currency: currency) } #clearly already a scope
scope :transaction_type, ->(transaction_type:) { where(category: transaction_type) }
end
```
then the test becomes
```
subject = FinancialSummary.one_day(user: user, currency: :usd)
expect(subject.transaction_type(:deposit).count).to eq(2)
```
SQL now becomes:
```
SELECT COUNT(*)
FROM
"transactions"
WHERE
"transactions"."user_id" = ?
AND "transactions"."amount_currency" = "usd"
AND "transactions"."category" = "deposit"
```
Still very understandable and easy to read without the need to destroy the count method we clearly just used.
Upvotes: 1
|
2018/03/16
| 1,225 | 4,196 |
<issue_start>username_0: I am looking for any code example that deploys Helm chart without via CLI call. The reason behind this is:
1. My company got couple existing pipelines written with AWS CodePipeline / CodeBuild / CodeDeploy. They don't like to investigate more time on re-writing all pipelines.
2. My company does not have any plan to maintain extra instance(s) just for deployment.
3. AWS CodePipeline could trigger Lambda, and theoretically I could write some Python code to do the job if Helm provides Python client.
Currently I steal Lambda function from this:
<https://github.com/aws-samples/aws-kube-codesuite>
However, this does not provide the same level of features as Helm does. We would have to provide our own release-name system, template system, etc. In other words, it copes poorly when I have a big change in the manifest, and it does not handle first-time deployment (i.e., deploying a manifest to an empty K8S cluster). Also, we use GitHub, although that is not really relevant.
As for a Python client for Helm charts, the best I could find is pyhelm, listed on pip. But it has no sample code for performing a deployment, and judging by some user group / forum feedback, the installation process is painful. Somebody also pointed to azure/draft and another repo, but I have no clue how to come up with a solid example that uses only Python to deploy a Helm chart.
Please let me know what I'm missing. Thanks.<issue_comment>username_1: I suggest using the [official python client](https://github.com/kubernetes-client/python) for Kubernetes rather than Helm. It requires that you write deployments, services, persistent volumes, etc. yourself, but it's going to be quicker than any other approach. Keep in mind that you'll have to work out how to do cluster authentication to make changes through the client, but there are several examples in the repo. I don't know enough about AWS Lambda to offer anything on how/if it would work.
Helm is a wonderful product but oriented towards the command line rather than use of its API which requires GRPC. Of course, it's possible to create a Python library for Tiller (Helm's API server) using the Helm proto file and GRPC client for Python, but no one seems to built one that has found traction in the community.
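For reference, a minimal sketch with that client (`deployment.yaml` is a placeholder; the authentication setup is the part you will need to adapt for Lambda):
```
import yaml
from kubernetes import client, config

# locally this reads ~/.kube/config; inside a Lambda you would have to
# build the client configuration by hand (token, cluster endpoint, CA)
config.load_kube_config()

with open("deployment.yaml") as f:
    body = yaml.safe_load(f)

apps = client.AppsV1Api()
apps.create_namespaced_deployment(namespace="default", body=body)
```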
Upvotes: 2 <issue_comment>username_2: You can find my fork of pyhelm with examples and Python3 support.
```
git clone git@github.com:andriisoldatenko/pyhelm.git
cd pyhelm && python setup.py install
```
How to use Pyhelm
-----------------
### First you need repo\_url and chart name to download chart
```
from pyhelm.repo import from_repo
chart_path = from_repo('https://kubernetes-charts.storage.googleapis.com/', 'mongodb')
print(chart_path)
"/tmp/pyhelm-kibwtj8d/mongodb"
```
### Now you can see the chart folder of mongodb:
```
In [3]: ls -la /tmp/pyhelm-kibwtj8d/mongodb
total 40
drwxr-xr-x 7 andrii wheel 224 Mar 21 17:26 ./
drwx------ 3 andrii wheel 96 Mar 21 17:26 ../
-rwxr-xr-x 1 andrii wheel 5 Jan 1 1970 .helmignore*
-rwxr-xr-x 1 andrii wheel 261 Jan 1 1970 Chart.yaml*
-rwxr-xr-x 1 andrii wheel 4394 Jan 1 1970 README.md*
drwxr-xr-x 8 andrii wheel 256 Mar 21 17:26 templates/
```
### Next step: build a ChartBuilder instance to manipulate Tiller:
```
from pyhelm.chartbuilder import ChartBuilder
chart = ChartBuilder({'name': 'mongodb', 'source': {'type': 'directory', 'location': '/tmp/pyhelm-kibwtj8d/mongodb'}})
# than we can get chart meta data etc
In [9]: chart.get_metadata()
Out[9]:
name: "mongodb"
version: "0.4.0"
description: "Chart for MongoDB"
```
### Install chart::
```
from pyhelm.chartbuilder import ChartBuilder
from pyhelm.tiller import Tiller
chart = ChartBuilder({'name': 'mongodb', 'source': {'type': 'directory', 'location': '/tmp/pyhelm-kibwtj8d/mongodb'}})
chart.install_release(chart.get_helm_chart(), dry_run=False, namespace='default')
Out[9]:
release {
name: "fallacious-bronco"
info {
status {
code: 6
}
first_deployed {
seconds: 1521647335
nanos: 746785000
}
last_deployed {
seconds: 1521647335
nanos: 746785000
}
Description: "Dry run complete"
}
chart {....
}
```
Upvotes: 2
|
2018/03/16
| 739 | 2,900 |
<issue_start>username_0: So, when using the code:
```
textField.setOnAction();
```
It only functions with the Enter key, but I'm just wondering if there's some sort of EventHandler for a TextField and TextArea that can save the text within its field to an object's instance variable when a user clicks away or onto another TextField. For example:
```
textField.setOnMouse(e ->
{
object.setText(textField.getText());
});
```
This code would save the information within its field once the user clicks away from the TextField.<issue_comment>username_1: You can respond to it losing focus:
```
textField.focusedProperty().addListener((obs, wasFocused, isNowFocused) -> {
if (! isNowFocused) {
// do whatever you need
}
});
```
Upvotes: 3 [selected_answer]<issue_comment>username_2: Assuming that the required commit (aka: save user input to a model/data object) semantics are
* commit-on-action: that's typically triggered by ENTER (available on TextField, but not on TextArea)
* commit-on-focusLost: that's triggered when focus is transfered from an input control to somewhere else, can happen by mouse or tab, f.i.
and that they are not:
* commit-on-change: that is update the data whenever the input control's textProperty is changed
The suggested [approach by James](https://stackoverflow.com/a/49311808/203657) is to *also* register a focusListener on the control that does the same job as the action handler:
```
// in actionHandler
textField.setOnAction(e -> myData.setText(textField.getText()));
// in listener when receiving a focusLost
textField.focusedProperty().addListener((... ) -> myData.setText(textField.getText()))
```
Perfectly valid!
Just pointing to an alternative: FX has support to achieve both-in-one, and that's a TextFormatter. It guarantees to update its value on both commit-on-action and commit-on-focusLost, but not on typing. So we can bind the data property (bidirectionally, if needed) to the formatter's value and naturally have the required commit semantic.
A snippet that demonstrates its usage (the Labels are just stand-ins for the data):
```
private Parent createTextContent() {
TextFormatter<String> fieldFormatter = new TextFormatter<>(
TextFormatter.IDENTITY_STRING_CONVERTER, "textField ...");
TextField field = new TextField();
field.setTextFormatter(fieldFormatter);
Label fieldValue = new Label();
fieldValue.textProperty().bind(fieldFormatter.valueProperty());
TextFormatter<String> areaFormatter = new TextFormatter<>(
TextFormatter.IDENTITY_STRING_CONVERTER, "textArea ...");
TextArea area = new TextArea();
area.setTextFormatter(areaFormatter);
Label areaValue = new Label();
areaValue.textProperty().bind(areaFormatter.valueProperty());
HBox fields = new HBox(100, field, fieldValue);
BorderPane content = new BorderPane(area);
content.setTop(fields);
content.setBottom(areaValue);
return content;
}
```
Upvotes: 1
|
2018/03/16
| 404 | 1,374 |
<issue_start>username_0: In the code below, I would like to have the value of myvar be provided by a program argument.
```
#include <stdio.h>
int main(int argc, const char **argv)
{
const unsigned char myvar[] = "myvalue";
return 0;
}
```
How would I get myvar to contain the value of the string from argv[1]?<issue_comment>username_1: If you are only reading, then you can simply copy the address of `argv[1]` like this:
```
#include <stdio.h>
#include <stdlib.h>
int main(int argc, const char **argv) {
const unsigned char *myvar = NULL;
// Be sure to check argc first
if (argc < 2) {
fprintf(stderr, "Not enough arguments.\n");
return EXIT_FAILURE;
}
myvar = (const unsigned char *)argv[1];
printf("myvar = %s\n", myvar);
}
```
If you want to change `myvar` then you should copy the string with `strncpy` or alike.
Upvotes: 3 [selected_answer]<issue_comment>username_2: An array cannot be initialized by a pointer or by another array. You can only initialize it with an initializer list or (in the case of a `char` array) a string constant.
What you can do is copy the contents of another string with `strcpy`. And since you'll be using this array as a parameter to an encryption function, it will probably need to be a fixed size.
```
char myvar[8] = { 0 };        // initialize all values to 0
strncpy(myvar, argv[1], sizeof myvar - 1); // copy at most 7 bytes so the final '\0' survives
```
Upvotes: 0
|
2018/03/16
| 487 | 1,653 |
<issue_start>username_0: I’m making private messages on my website. Everything is okay, but I can’t get the list of all the messages that the user sent or received.
I have 2 tables. `users(id,username,etc..)` and `messages (id,user1,user2,message,date)`
I’ve tried the following to get the list of all messages for one user:
>
> (SELECT DISTINCT * from users WHERE user1 = $userid OR user2 = $userid
> ORDER BY date ASC)
>
>
>
but I can’t get the messages and I see duplicate values like
User1 User2
```
2 1
1 2
```
I want to get just one value for one relation.
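For illustration, this is roughly what I'm after (an untested sketch against the `messages` table, collapsing each user pair to one row):
```
SELECT LEAST(user1, user2)    AS user_a,
       GREATEST(user1, user2) AS user_b,
       MAX(date)              AS last_message
FROM messages
WHERE user1 = $userid OR user2 = $userid
GROUP BY LEAST(user1, user2), GREATEST(user1, user2)
ORDER BY last_message DESC
```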
|
2018/03/16
| 1,351 | 6,017 |
<issue_start>username_0: I am trying to figure out where to put the loop so that when the user enters any value other than "rock", "paper", or "scissors", the program stays in the loop, displays `"Invalid entry"`, and asks the user to enter a value again.
Any help is much appreciated.
As it stands now, the program will display `"Invalid entry"` but then continues without asking the user to try again.
```
import java.util.Scanner; // Import the Scanner class
import java.util.Random; // Import the random class for game
/**
*/
public class Challenge17
{
// Method to determine the random choice of computer
public static String getComputerChoice(Random random)
{
int number;
number = random.nextInt(3) + 1;
String computerChoice;
switch (number)
{
case 1:
computerChoice = "rock";
break;
case 2:
computerChoice = "paper";
break;
case 3:
computerChoice = "scissors";
break;
default:
computerChoice = "";
}
return computerChoice;
}
// Method to display the menu for choices
public static void displayChoice( )
{
System.out.println("Game Options\n----------\n"
+ "1: rock\n2: paper\n3: scissors");
}
// Method to request and hold user choice for game
public static String getUserInput(Scanner keyboard)
{
String userInput;
System.out.println("Enter your choice: ");
userInput = keyboard.nextLine();
return userInput;
}
// Method to determine winner
public static String determineWinner(String computerChoice, String userInput)
{
String winner = "Tie Game!"; // Default display Tie game
String message = ""; // To determine the message for winner
String displayMessage; // To display the message for winner
// Custom messages below
String rockMessage = "Rock smashes scissors";
String scissorsMessage = "Scissors cuts paper";
String paperMessage = "Paper wraps rock";
boolean loop = false;
if(computerChoice.equals("rock") && userInput.equalsIgnoreCase("scissors"))
{
winner = " Computer wins!";
message = rockMessage;
loop = true;
}
else if (userInput.equalsIgnoreCase("rock") && computerChoice.equals("scissors"))
{
winner = "You win!";
message = rockMessage;
loop = true;
}
if(computerChoice.equals("scissors") && userInput.equalsIgnoreCase("paper"))
{
winner = " Computer wins!";
message = scissorsMessage;
loop = true;
}
else if (userInput.equalsIgnoreCase("scissors") && computerChoice.equals("paper"))
{
winner = "You win!";
message = scissorsMessage;
loop = true;
}
if(computerChoice.equals("paper") && userInput.equalsIgnoreCase("rock"))
{
winner = " Computer wins!";
message = paperMessage;
loop = true;
}
else if (userInput.equalsIgnoreCase("rock") && computerChoice.equals("scissors"))
{
winner = "You win!";
message = paperMessage;
loop = true;
}
else
{
System.out.println("Invalid entry.");
loop = false;
}
displayMessage = winner + " " + message;
return displayMessage;
}
// Main method to initiate and execute game
public static void main(String[] args)
{
Random random = new Random(); // To call the random class
Scanner keyboard = new Scanner(System.in); // To call the scanner class
String computerChoice; // Hold computer input
String userInput; // Hold user input
String input; // Hold input for repeat
char repeat; // Character for repeat
do
{
displayChoice(); // Call method to display the choices
computerChoice = getComputerChoice(random); // Hold the PC random choice
userInput = getUserInput(keyboard); // To get the user input
System.out.println("You chose: " + userInput + " computer chose: \n"
+ computerChoice);
System.out.println(determineWinner(computerChoice, userInput));
// Does the user want to play again
System.out.println("Would you like to play again?");
System.out.print("Enter Y for yes, or N for no: ");
input = keyboard.nextLine();
repeat = input.charAt(0);
}
while (repeat == 'Y' || repeat == 'y');
}
}
```
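For what it's worth, I was thinking of something along these lines inside `getUserInput` (a rough sketch; I'm not sure it's the right place):
```
// Method to request the user choice, re-prompting until it is valid
public static String getUserInput(Scanner keyboard)
{
    String userInput;
    do
    {
        System.out.println("Enter your choice (rock, paper or scissors): ");
        userInput = keyboard.nextLine().trim().toLowerCase();
        if (!userInput.equals("rock") && !userInput.equals("paper")
                && !userInput.equals("scissors"))
        {
            System.out.println("Invalid entry. Please try again.");
        }
    }
    while (!userInput.equals("rock") && !userInput.equals("paper")
            && !userInput.equals("scissors"));
    return userInput;
}
```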
|
2018/03/16
| 1,982 | 5,959 |
<issue_start>username_0: I have been at this for 2.5 hours now and have been through multiple posts. I tried several suggestions on <https://github.com/angular/angular/issues/15763>, which frustrated me, as the source of the error below seems different.
I am on angular 5 and on build, I get the error:
>
> ERROR in : Unexpected value 'ToasterModule in .../node_modules/angular5-toaster/dist/bundles/angular5-toaster.umd.js' imported by the module 'AppModule in ../src/app/app.module.ts'. Please add a @NgModule annotation.
>
>
>
The library doesn't have its own copy of node_modules. Its own `tsconfig.json` file is:
```
{
"compilerOptions": {
"target": "es2015",
"module": "es2015",
"moduleResolution": "node",
"sourceMap": true,
"emitDecoratorMetadata": true,
"experimentalDecorators": true,
"removeComments": false,
"noImplicitAny": false,
"declaration": true,
"stripInternal": true,
"noUnusedLocals": false,
"types": [ "jasmine", "node" ],
"lib": ["es2015", "dom"]
},
"include": [
"angular5-toaster.ts",
"src/**/*"
],
"exclude": [
"node_modules",
"bundles"
]
}
```
angular5-toaster/package.json (partial):
```
"devDependencies": {
"@angular/animations": "4.1.0",
"@angular/common": "4.1.0",
"@angular/compiler": "4.1.0",
"@angular/compiler-cli": "4.1.0",
"@angular/core": "4.1.0",
"@angular/platform-browser": "4.1.0",
"@angular/platform-browser-dynamic": "4.1.0",
"@angular/platform-server": "4.1.0",
"@types/jasmine": "2.5.47",
"@types/node": "7.0.12",
"module": "angular5-toaster.js",
"name": "angular5-toaster",
"optionalDependencies": {},
"peerDependencies": {
"@angular/common": ">=4.0.0 <=5.0.2",
"@angular/compiler": ">=4.0.0 <=5.0.2",
"@angular/core": ">=4.0.0 <=5.0.2",
"rxjs": "^5.0.0-beta.11"
},
```
My app.module.ts:
```
import { NgModule} from '@angular/core';
import { CommonModule } from '@angular/common';
import { BrowserModule } from '@angular/platform-browser';
import { BrowserAnimationsModule } from '@angular/platform-browser/animations';
import { AppRoutingModule } from './app-routing.module';
import { AppComponent } from './app.component';
import { Ng4LoadingSpinnerModule } from 'ng4-loading-spinner';
import {ToasterModule, ToasterService} from 'angular5-toaster';
@NgModule({
imports: [
CommonModule,
BrowserModule,ToasterModule,
BrowserAnimationsModule,
Ng4LoadingSpinnerModule.forRoot(),
AppRoutingModule
],
declarations: [AppComponent],
providers: [ToasterService
],
bootstrap: [AppComponent, ]
})
export class AppModule {}
```
package.json:
```
{
"name": "RLRC",
"version": "4.0.0",
"license": "MIT",
"scripts": {
"ng": "ng",
"start": "ng serve --ec true",
"build": "ng build --prod",
"gitbuild": "ng build --prod --base /start-angular/RLRC/master/dist/",
"test": "ng test",
"lint": "ng lint",
"e2e": "ng e2e"
},
"private": true,
"dependencies": {
"@angular/animations": "^5.0.1",
"@angular/common": "^5.0.1",
"@angular/compiler": "^5.0.1",
"@angular/core": "^5.0.1",
"@angular/forms": "^5.0.1",
"@angular/http": "^5.0.1",
"@angular/platform-browser": "^5.0.1",
"@angular/platform-browser-dynamic": "^5.0.1",
"@angular/router": "^5.0.1",
"@ng-bootstrap/ng-bootstrap": "^1.0.0-beta.5",
"@ngx-translate/core": "^8.0.0",
"@ngx-translate/http-loader": "^2.0.0",
"chart.js": "^2.7.1",
"core-js": "^2.5.1",
"datatables.net": "^1.10.16",
"font-awesome": "^4.7.0",
"jquery": "^3.2.1",
"ng2-charts": "^1.6.0",
"ng4-loading-spinner": "^1.1.1",
"ngx-treeview": "^2.0.1",
"rxjs": "^5.5.2",
"yarn": "^1.3.2",
"zone.js": "^0.8.18"
},
"devDependencies": {
"@angular/cli": "1.5.0",
"@angular/compiler-cli": "^5.0.1",
"@angular/language-service": "^5.0.1",
"@types/jasmine": "~2.6.3",
"@types/jasminewd2": "~2.0.3",
"@types/jquery": "^3.2.17",
"@types/node": "~8.0.51",
"codelyzer": "~4.0.1",
"jasmine-core": "~2.8.0",
"jasmine-spec-reporter": "~4.2.1",
"karma": "~1.7.0",
"karma-chrome-launcher": "~2.2.0",
"karma-cli": "~1.0.1",
"karma-coverage-istanbul-reporter": "^1.2.1",
"karma-jasmine": "~1.1.0",
"karma-jasmine-html-reporter": "^0.2.2",
"protractor": "~5.2.0",
"ts-node": "~3.3.0",
"tslint": "~5.8.0",
"typescript": "~2.4.2"
}
}
```
I notice angular5-toaster (version 1.0.2) wasn't added to the list of dependencies above, but adding it didn't make any difference either.
What could be the problem? I would appreciate help as i might face same problem with other libraries.<issue_comment>username_1: You should not have ToasterService in the 'providers' section of your app module. The ToasterModule provides the service.
However, I don't think that's your entire problem.
I put a simple working example of using the library on StackBlitz, at <https://stackblitz.com/edit/angular-msm5nn>
Upvotes: 0 <issue_comment>username_2: This seems to be a particular problem with AoT builds and [angular5-toaster](https://www.npmjs.com/package/angular5-toaster).
I solved this by switching to [ng2-toastr](https://www.npmjs.com/package/ng2-toastr) which appears to be more stable and offer the same functionality.
Upvotes: 3 [selected_answer]<issue_comment>username_3: Unfortunately, the accepted answer is only convenient when the use of the library is superficial and it is not deeply embedded in the project.
In my case, transferring from `angular5-toaster` to `ng2-toastr` would have taken a lot of time, so I found another solution which may help others.
angular5-toaster suggests importing it as follows:
```
import {ToasterModule } from 'angular5-toaster';
```
I used direct approach:
```
import { ToasterModule } from './../../node_modules/angular5-toaster/angular5-toaster';
```
Upvotes: 0
|
2018/03/16
| 546 | 1,743 |
<issue_start>username_0: The dictionary will store the names of the movies, and for each movie the dictionary value will be a list of show times.
I could not figure it out. Is the following correct?
```
movie A: 9,11,15 movie B: 10,14,6 movie C: 11,13,19
dictionary={"moviesA":0, "moviesB":0, "moviesC":0}
showingtimes={"moviesA": 9, "moviesA": 11, "moviesA": 15, "moviesA": 21, "moviesB" : 10, "moviesB": 14, "moviesB": 16,
"moviesC": 11, "moviesC":13, "moviesC":19, "moviesC": 20, "moviesC": 21, "movieC":22}
```
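Or should it look something like this instead, with one key per movie and a list of times as the value (just a guess)?
```
showing_times = {
    "movie A": [9, 11, 15, 21],
    "movie B": [10, 14, 16],
    "movie C": [11, 13, 19, 20, 21, 22],
}

# e.g. all show times for movie A
print(showing_times["movie A"])
```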
|
2018/03/16
| 1,001 | 3,445 |
<issue_start>username_0: This is a pretty readable chunk of code I think,
```
for i in range(100):
continue if i % 2 == 0
```
But it's not syntactically correct. We can do other nice things in Python like,
```
for i in things:
total += 3 if i % 2 == 0 else 1
```
Or maybe,
```
return a if b > a else c
```
Why can't we do a `continue if` statement?<issue_comment>username_1: The flow:
```
for i in range(100):
continue if i % 2 == 0
```
Would be equivalent to:
```
for i in range(1, 100, 2):
...
```
Or, more generically, to:
```
for i in range(100):
if i % 2 == 0:
continue
```
Python language designers have a history of voting against changes to the grammar which are only offering slightly different ways of doing the same thing ("There should be one obvious way to do it").
The type of one-liner construct which you've mentioned
```
x if cond else y
```
was an exception made, here. It was added to the language to offer a less error-prone way of achieving what many users were already *attempting* to achieve with `and` and `or` short-circuiting hacks ([source: Guido](https://mail.python.org/pipermail/python-dev/2005-September/056546.html)).
Code in the wild was using:
```
cond and x or y
```
Which is not logically equivalent, yet it's an easy mistake to make for users who were already familiar with the ternary `cond ? x : y` syntax from C. A correct equivalent is:
```
(cond and [x] or [y])[0]
```
But, that's ugly. So, the rationale for the addition of an expression `x if cond else y` was stronger than a mere convenience.
Upvotes: 5 [selected_answer]<issue_comment>username_2: Python does have such a thing; the syntax is just a bit different. Instead of the "if" and "continue" being combined into one statement, they are separated into a conditional statement (if, while, etc.) and a control-flow statement (continue, pass, break, etc.) that runs if the condition evaluates to true. In your first code example, the syntax would be:
```
for i in range(100):
if i % 2 == 0:
continue
else:
pass # you could also add an else branch like this to do
# something else when the number evaluates to odd
```
This will move on to the next iteration of the outer loop. There are also other helpful iteration tools like this, called "Control Flow Tools." I'll include a link to the Python docs that explain this. There's a ton of useful stuff there, so please do have a look.
Others here have also suggested a single-line syntax, which works too. It is, however, good to understand both approaches; this way you can keep your code as simple as possible, but you'll also have the ability to nest loops and conditions if your algorithm will benefit from it.
Happy coding!
<https://docs.python.org/3/tutorial/controlflow.html>
Upvotes: -1 <issue_comment>username_3: Because `x if cond else y` is actually an **expression**.
Expressions are constructs which evaluate to a value, in this case either `x` or `y`.
`continue` is not a value, so there's that. Also,
```
if cond:
continue
```
is really not much harder or more "error prone" than `continue if cond`, whereas `v = x if cond else y` is probably better than
```
if cond:
v = x
else:
v = y
```
There's also the fact that if we allowed `continue if cond`, we add a new way to use this `_ if cond` pattern, i.e. we allow it without an `else`.
For more info:
<https://docs.python.org/2.5/whatsnew/pep-308.html>
Upvotes: 2
|
2018/03/16
| 634 | 2,181 |
<issue_start>username_0: I made an environment on Cloud9 on AWS, then made a folder named "ruby\_projects", then inside that folder, I ran the command:
```
rails new todolist
```
then from inside the todolist folder, I ran
```
rails s
```
In the share button on the top right corner of the environment, I opened the application link which is 172.16.58.3, but instead of saying "you are on rails" it says:
```
Oops
Error: 1 validation error detected: Value '172.16.58.3' at 'envir..
```<issue_comment>username_1: For changing port on AWS you can do something like this:
```
sudo iptables -t nat -I PREROUTING -p tcp --dport 80 -j REDIRECT --to-ports 3000
```
For local machine:
```
rails server -p 80
```
**But**, Phlip absolutely right - you should learn rails on local machine with development environment. Step by step.
Upvotes: 1 <issue_comment>username_2: You have two ways to preview applications on AWS Cloud9--through the preview URL (from clicking the Preview button) and from the public IP for the host (AKA the sharing URL). The Preview URL is a bit easier to run, but has a few limitations. Specifically:
* You need to serve your content on `127.0.0.1:8080` (ports `8081` and `8082` work as well but have to be specified)
* You can only access the URL when you are currently logged into the IDE and have the IDE open.
* Only IAM users with access to the IDE can access the Preview URL. For instance, this won't work if you are calling this endpoint from another program.
You can read more about the Preview URL here: <https://docs.aws.amazon.com/cloud9/latest/user-guide/app-preview.html#app-preview-preview-app>
If you need to share this to people who don't have access to the IDE or you need to access the endpoint through a different program, you'll want to use the Sharing URL. This requires a bit of additional configuration, specifically, you'll have to:
* Create a security group for the host that opens your selected ports to the main internet
* Run the server through `0.0.0.0` instead of `127.0.0.1`
You can see how to do this here: <https://docs.aws.amazon.com/cloud9/latest/user-guide/app-preview.html#app-preview-share>
Upvotes: 0
|
2018/03/16
| 1,782 | 5,009 |
<issue_start>username_0: I'm looking to get a color palette from an image.
I could get the RGB data from the image:
```
getRgbData() {
this.canvas = window.document.createElement('canvas')
this.context = this.canvas.getContext('2d')
this.width = this.canvas.width = image.width || image.naturalWidth
this.height = this.canvas.height = image.height || image.naturalHeight
this.context.drawImage(image, 0, 0, this.width, this.height)
return this.context.getImageData(0, 0, this.width, this.height)
}
```
and convert the RGB values to the HSV model (the rgbToHsv method was written from <https://gist.github.com/mjackson/5311256#file-color-conversion-algorithms-js-L1>):
```
getHsvData() {
const { data, width, height } = this.getRgbData()
const pixcel = width * height
const q = 1
const array = []
for (var i = 0, r, g, b, offset; i < pixcel; i = i + q) {
offset = i * 4
r = data[offset + 0]
g = data[offset + 1]
b = data[offset + 2]
array.push({ r, g, b })
}
return array.map(l => this.rgbToHsv(l.r, l.g, l.b))
}
```
It results in something like this (data converted from 24-bit RGB):
```
[
{h: 0.6862745098039215, s: 0.7727272727272727, v: 0.17254901960784313},
{h: 0.676470588235294, s: 0.723404255319149, v: 0.1843137254901961},
.....
]
```
[color-thief](https://github.com/lokesh/color-thief/blob/master/src/color-thief.js) and [vibrant.js](https://github.com/akfish/node-vibrant) get the dominant color from the RGB model, but I want to
get the dominant color from the converted HSV model.
(I heard that extracting colors in HSV better matches how human eyes perceive color. Is that right?)
How can I extract the dominant color from the HSV model?<issue_comment>username_1: First thing we need to do is get the average color of the image. We can do that by adding up each color channel individually and then dividing by the total number of pixels (width times height) of the canvas.
```
function channelAverages(data, width, height) {
let r = 0, g = 0, b = 0
let totalPixels = width * height
for (let i = 0, l = data.data.length; i < l; i += 4) {
r += data.data[i]
g += data.data[i + 1]
b += data.data[i + 2]
}
return {
r: Math.floor(r / totalPixels),
g: Math.floor(g / totalPixels),
b: Math.floor(b / totalPixels)
}
}
```
Next we will want to convert the returned average color to HSL; we can do that with this function (which you also linked to above).
```
function rgbToHsl(r, g, b) {
r /= 255, g /= 255, b /= 255;
var max = Math.max(r, g, b), min = Math.min(r, g, b);
var h, s, l = (max + min) / 2;
if (max == min) {
h = s = 0; // achromatic
} else {
var d = max - min;
s = l > 0.5 ? d / (2 - max - min) : d / (max + min);
switch (max) {
case r: h = (g - b) / d + (g < b ? 6 : 0); break;
case g: h = (b - r) / d + 2; break;
case b: h = (r - g) / d + 4; break;
}
h /= 6;
}
return [h, s, l];
}
```
So, to get our output we can do this:
```
let data = ctx.getImageData(0, 0, canvas.width, canvas.height)
let avg = channelAverages(data, width, height)
console.log(rgbToHsl(avg.r, avg.g, avg.b))
```
If we want numbers we can use in an editor (such as Photoshop or GIMP) to verify our results, we just need to multiply each:
```
h = h * 360
Example: 0.08 * 360 = 28.8
s = s * 100
Example: 0.85 * 100 = 85
l = l * 100
Example: 0.32 * 100 = 32
```
Upvotes: 3 [selected_answer]<issue_comment>username_2: There is a library called [Kleur.js](https://github.com/username_2-san-so/Kleur) you can use to get a palette from an image, but remember that it gives a random color palette every time. The dominant color, however, will remain the same in every palette.
```
// Create the Kleur Object
Kleur = new Kleur();
// Set the image link to get the palette from
imageObj = Kleur.init(imgLink);
// Wait for the image to load
imageObj.onload = function(e) {
// get the color array from the image
let colorArr = Kleur.getPixelArray(imageObj);
// pass the array to generate the color array
let array_of_pixels = Kleur.generateColorArray(colorArr);
// you can get the dominant color from the image
const dominant = Kleur.getDominant(array_of_pixels);
// log the dominant color
console.log(dominant)
}
```
If you want to see an example of using this code, visit the [codepen](https://codepen.io/username_2sama/pen/YzXjxew).
And if you want all the dominant colors (i.e., the colors with the most pixels), you can access `array_of_pixels`, so you could do:
```
// for first five dominant color
for(let x = 0; x < 5; x++){
console.log(array_of_pixels[x].hsv);
}
// for the dominant colors hsv value
console.log(dominant.hsv)
```
This will log the HSV values for the five most dominant colors in the image (usually the dominant colors are really similar, so look out for that).
Kleur.js returns colors in various color spaces:
* RGB
* HEX
* HSV
* XYZ
* LAB
* LCH
It also returns a count, which is the number of pixels that have that color.
Upvotes: 0
|
2018/03/16
| 705 | 2,471 |
<issue_start>username_0: I am trying to find the sum of 5 text boxes in which the user inputs data, and output the result in a sixth text box.
I keep getting an error saying that the values aren't assigned, and I don't know how to assign them.
This is my code so far:
```
private void button1_Click(object sender, EventArgs e)
{
int value1 = 0;
int value2 = 0;
int value3 = 0;
int value4 = 0;
int value5 = 0;
int result = 0;
if (int.TryParse(textBox4.Text, out value1) & int.TryParse(textBox2.Text, out value2))
{
result = value1 + value2;
textBox21.Text = result.ToString();
}
}
```<issue_comment>username_1: try this:
```
private void button1_Click(object sender, EventArgs e)
{
int value1 = 0;
int value2 = 0;
int value3 = 0;
int value4 = 0;
int value5 = 0;
int result = 0;
if (Int32.TryParse(textBox4.Text, out value1) & Int32.TryParse(textBox2.Text, out value2))
{
result = value1 + value2;
textBox21.Text = result.ToString();
}
}
```
You can also do it more simply, like this:
```
private void button1_Click(object sender, EventArgs e)
{
int value1 = 0;
int value2 = 0;
int value3 = 0;
int value4 = 0;
int value5 = 0;
int result = 0;
value1 = Convert.ToInt32(textBox4.Text);
value2 = Convert.ToInt32(textBox2.Text);
result = value1 + value2;
textBox21.Text = result.ToString();
}
```
Upvotes: -1 <issue_comment>username_2: Welcome to C#! Here's a bit of aid for your first time:
1. The default value of `int` is zero so you don't have to set it.
2. You have used the non-short-circuiting operator `&` instead of the conditional `&&`. This means the 'if' will still evaluate the second condition even if the first one returns false. If that is your desire, it works.
3. If `textbox2` or `textbox4` is empty, `result` remains zero but `textbox21.Text` will remain unset. If you have other logic elsewhere we don't see, this may be causing your problem.
You have the right idea using TryParse and I'd suggest continuing to use it, as Convert will throw exceptions if the value is either null or using an improper format. If you don't handle the exceptions, your program will crash (the typical solution is to wrap your conversion code in a try/catch block). If you want to post the exact error text you're receiving we can help further if needed.
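For illustration, a sketch of that approach across all five inputs (assuming the boxes are named `textBox1` through `textBox5`; adjust to your actual control names):
```
private void button1_Click(object sender, EventArgs e)
{
    var inputs = new[] { textBox1, textBox2, textBox3, textBox4, textBox5 };
    int result = 0;

    foreach (var box in inputs)
    {
        if (int.TryParse(box.Text, out int value))
        {
            result += value;
        }
        else
        {
            MessageBox.Show("'" + box.Text + "' is not a valid whole number.");
            return; // stop on the first bad input
        }
    }

    textBox21.Text = result.ToString();
}
```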
Upvotes: 3 [selected_answer]
|
2018/03/16
| 646 | 2,682 |
<issue_start>username_0: I've transferred my database and entire Wordpress file structure over to the live site, but the live site is still looking for all its resources at localhost:8888/.
I looked back at what I did when getting started, and I had edited my `gulpfile.js` to include
```
var browserSyncOptions = {
proxy: 'localhost:8888',
notify: false
};
```
Thinking this was the issue, I switched it to `proxy: $_SERVER['DOCUMENT_ROOT']`, but still no luck. Any ideas for what I may be doing wrong?
Unfortunately I haven't been able to find any information about deploying an UnderStrap-themed site. Here are the docs for the theme though: <https://understrap.com/demos/core/wp-content/themes/core-understrap/docs/>.<issue_comment>username_1: If you can access the wp-admin of your live site, there's a great plugin called Velvet Blues Update URLs that you can install.
Install it, go to Tools > Update URLs and enter the localhost address and then the live site address.
Always works great for me.
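If you'd rather not install a plugin, WP-CLI can do the same search-and-replace from the shell, assuming WP-CLI is available on your server (the URLs below are placeholders for your local and live addresses); its `search-replace` command also handles serialized values in the database:
```
# dry run first to preview what would change
wp search-replace 'http://localhost:8888' 'https://www.example.com' --dry-run

# then run it for real
wp search-replace 'http://localhost:8888' 'https://www.example.com'
```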
Upvotes: 0 <issue_comment>username_2: Many developers handle deployment by running `gulp dist`. That task creates a `dist` folder you can upload to the server; it contains everything the theme needs at runtime (the SCSS already compiled to CSS) and none of the development tooling. There is also `gulp dist-product`, which packages the theme as a distributable "product" that can be published and downloaded later.
Upvotes: 0 <issue_comment>username_3: >
> I've transferred my database and entire Wordpress file structure over to the live site, but the live site is still looking for all its resources at localhost:8888/.
>
>
>
That is an easy way to get into live-server maintenance hell. UnderStrap is an unusual theme package because its build relies on strong npm dependencies that are meant for localhost development, not for a production server. For that reason, here is the alternative approach I recommend:
1. Install a clean vanilla version of WordPress on your live server.
2. Install and activate UnderStrap.zip on your live server through wp-admin.
3. Import/Export your posts and pages from your local site, onto the live site. Add all media images. Adjust your settings, install your plugins, and clear your cache.
4. Now use the file manager in your cPanel to upload only the **modified** theme files from your localhost to your live server.
5. Test your site, then go live.
* That is more of a manual process, but it has its benefits: you see what each step looks like, you push up only what you need, you don't have localhost dependencies running on your live site, and the database contains only live-site content.
Upvotes: 2 [selected_answer]
|
2018/03/16
| 720 | 2,291 |
<issue_start>username_0: I have a question regarding Java streams. Let's say I have a stream of objects and I'd like to map each of them to multiple objects. For example, something like
```
IntStream.range(0, 10).map(x -> (x, x*x, -x)) //...
```
Here I want to map each value to itself, its square, and the same value with the opposite sign. I couldn't find any stream operation to do that. I was wondering whether it would be better to map each value `x` to a custom object with those fields, or maybe to collect each value into an intermediate `Map` (or some other data structure).
I think that in terms of memory it could be better to create a custom object, but maybe I'm wrong.
In terms of design correctness and code clarity, which solution would be better? Or maybe there are more elegant solutions that I'm not aware of?<issue_comment>username_1: **Besides using a custom class** such as:
```
class Triple {
    private final Integer value;

    public Triple(Integer value) {
        this.value = value;
    }

    public Integer getValue()    { return this.value; }
    public Integer getSquare()   { return this.value * this.value; }
    public Integer getOpposite() { return this.value * -1; }

    @Override
    public String toString() {
        return getValue() + ", " + getSquare() + ", " + getOpposite();
    }
}
```
and run
```
IntStream.range(0, 10)
.mapToObj(x -> new Triple(x))
.forEach(System.out::println);
```
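As a side note: if you're on Java 16 or newer, a record can express the same thing with less boilerplate. This is just a sketch, not required for the approach above:
```
// requires Java 16+; a compact alternative to the Triple class above
record Triple(int value) {
    int square()   { return value * value; }
    int opposite() { return -value; }
    @Override
    public String toString() {
        return value + ", " + square() + ", " + opposite();
    }
}

// usage is identical, with a constructor reference
IntStream.range(0, 10)
         .mapToObj(Triple::new)
         .forEach(System.out::println);
```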
Alternatively, you could use Apache Commons `ImmutableTriple` to do the same.
For example:
```
IntStream.range(0, 10)
.mapToObj(x -> ImmutableTriple.of(x,x*x,x*-1))
.forEach(System.out::println);
```
maven repo: <https://mvnrepository.com/artifact/org.apache.commons/commons-lang3/3.6>
documentation: <http://commons.apache.org/proper/commons-lang/apidocs/org/apache/commons/lang3/tuple/ImmutableTriple.html>
Upvotes: 2 [selected_answer]<issue_comment>username_2: You can use `flatMap` to generate an `IntStream` containing 3 elements for each of the elements of the original `IntStream`:
```
System.out.println(Arrays.toString(IntStream.range(0, 10)
.flatMap(x -> IntStream.of(x, x*x, -x))
.toArray()));
```
Output:
```
[0, 0, 0, 1, 1, -1, 2, 4, -2, 3, 9, -3, 4, 16, -4, 5, 25, -5, 6, 36, -6, 7, 49, -7, 8, 64, -8, 9, 81, -9]
```
Upvotes: 2
|
2018/03/16
| 300 | 1,036 |
<issue_start>username_0: I'm getting some weird glitches when using `layout_weight` in my Android app. Just so I'm not wasting anybody's time, let me check my understanding: when I have three custom views and give two of them a weight of 1 and one a weight of 2, the view with a weight of 2 is supposed to be the biggest one, right? Because that isn't happening for me.
A couple of examples with the behavior I see:
* 1:2:1 - the view in the middle completely disappears.
* 1.1 : 1.1 : 1 - the first two views are slightly smaller than the third (this is what I currently use to get the desired layout).
Code:
```
```
Note: I tried giving the custom views a width and height of `0dp`, but then they completely disappear.<issue_comment>username_1: If you use `layout_weight` on your views, you should also set `android:weightSum` on the parent view so the total weight is explicit. For example (a sketch: plain `View`s stand in for your custom views):
```
<LinearLayout xmlns:android="http://schemas.android.com/apk/res/android"
    android:layout_width="match_parent"
    android:layout_height="match_parent"
    android:orientation="vertical"
    android:weightSum="3.2">

    <View
        android:layout_width="match_parent"
        android:layout_height="0dp"
        android:layout_weight="1.1" />

    <View
        android:layout_width="match_parent"
        android:layout_height="0dp"
        android:layout_weight="1.1" />

    <View
        android:layout_width="match_parent"
        android:layout_height="0dp"
        android:layout_weight="1" />
</LinearLayout>
```
Please notice that the `weightSum` value is the sum of the child weights: 1.1 + 1.1 + 1 = 3.2.
Upvotes: 0 <issue_comment>username_2: Make the height `0dp` and the width `wrap_content`: in a vertical `LinearLayout`, the dimension the weights distribute must be `0dp`, otherwise the measured sizes fight with the weights. Good job, B.Cakir.
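For the 1:2:1 case, that looks roughly like this; `com.example.CustomView` is a stand-in for the real custom view class:
```
<LinearLayout xmlns:android="http://schemas.android.com/apk/res/android"
    android:layout_width="match_parent"
    android:layout_height="match_parent"
    android:orientation="vertical">

    <!-- com.example.CustomView is a placeholder for the actual custom view -->
    <com.example.CustomView
        android:layout_width="wrap_content"
        android:layout_height="0dp"
        android:layout_weight="1" />

    <com.example.CustomView
        android:layout_width="wrap_content"
        android:layout_height="0dp"
        android:layout_weight="2" />

    <com.example.CustomView
        android:layout_width="wrap_content"
        android:layout_height="0dp"
        android:layout_weight="1" />
</LinearLayout>
```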
Upvotes: 2 [selected_answer]
|