<issue_start>username_0: ```
Map> aMap = new HashMap<>();
```
This map has some keys and optional values.
```
Optional> valuesList = input.aMap().values().stream()
.collect(Collectors.toList());
```
The above way has a compilation error. How do I get the optional list correctly?<issue_comment>username_1: You misunderstand the return value. Don't forget that an `Optional>` is an optional object that can hold one list if present.
You need a `List>` instead:
```
List> valuesList = input.aMap()
.values()
.stream()
.collect(Collectors.toList());
```
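If what you actually want is a plain list of just the values that are present, you can filter and unwrap the optionals first. A sketch, assuming `String` values for illustration (the map's actual type parameters are not visible in the question):
```
import java.util.*;
import java.util.stream.Collectors;

Map<String, Optional<String>> aMap = new HashMap<>(); // assumed types
List<String> presentValues = aMap.values().stream()
        .filter(Optional::isPresent) // skip empty optionals
        .map(Optional::get)          // unwrap the rest
        .collect(Collectors.toList());
```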
Upvotes: 3 [selected_answer]<issue_comment>username_2: You don't even need to `stream` here, just collect them into a `List`:
```
List> list = new ArrayList<>(input.aMap().values());
```
Upvotes: 2
---
<issue_start>username_0: I'm new to asp.net
Here is my code:
```
public static string pathGetFile = WebConfigurationManager.AppSettings["pathGetFile"].ToString();
public bool TransferFile(string idcard)
{
//string file = "";
try
{
Sftp sftp = new Sftp(url, user);
sftp.AddIdentityFile(pathKey);
sftp.Connect(port);
string two_char = idcard.Substring(0, 2);
ArrayList filelist = sftp.GetFileList(pathGetFile);
bool is_twochar = false;
string temp = "";
foreach (var item in filelist)
{
temp += item.ToString() + "\n";
if (item.ToString() == two_char)
{
is_twochar = true;
}
}
```
the error occurs at "`ArrayList filelist = sftp.GetFileList(pathGetFile);`"
and the error is:
>
> `"Cannot implicitly convert type 'string[]' to 'System.Collections.ArrayList'"`
>
>
><issue_comment>username_1: You can just change `ArrayList filelist = sftp.GetFileList(pathGetFile);`
to `var filelist = sftp.GetFileList(pathGetFile);`
`sftp.GetFileList(pathGetFile)` must be returning `string[]` and since I see no need for the rest of the code to use ArrayList, you can just change it to `var` or `string[]`.
Upvotes: 1 <issue_comment>username_2: Don't use `ArrayList`; there is almost no reason to. As the error states:
>
> Cannot implicitly convert type 'string[]' to System.Collections.ArrayList
>
>
>
Why not just use
```
var filelist = sftp.GetFileList(pathGetFile);
```
This will create `filelist` as a `string[]` as opposed to an `ArrayList`.
Upvotes: 1 <issue_comment>username_3: Because `sftp.GetFileList(pathGetFile);` returns a `string[]`. A simple fix can either be to write `var filelist` or `string[] filelist` instead of `ArrayList`.
Upvotes: 0
---
<issue_start>username_0: I have written 2 functions to encrypt and decrypt data with OpenSSL.
I get the same return value for my test string whether I use `openssl_encrypt` directly or via my function.
The problem is the decryption: it only works if I use `openssl_encrypt` directly, without the function.
If I use the function to encrypt the data, I only get an empty response from the decryption.
My code is the following:
```
$key = base64_decode("PRIV KEY");
$cipher = "aes-256-gcm";
$iv = base64_decode("BASE64 encoded IV ");
$tag = base64_decode("BASE64 encoded TAG ");
function mc_encrypt($plaintext,$cipher, $key, $iv, $tag){
$encrypted_text = openssl_encrypt($plaintext, $cipher, $key, $options=0, $iv, $tag);
return $encrypted_text;
}
function mc_decrypt($encrypted_text ,$cipher, $key, $iv, $tag){
$decrypted_text = openssl_decrypt($encrypted_text, $cipher, $key, $options=0, $iv, $tag);
return $decrypted_text;
}
$test = "Teststring";
// Success Message is returned
$encrypted = openssl_encrypt($test, $cipher, $key, $options=0, $iv, $tag);
// Success Message is NOT returned
$encrypted = mc_encrypt($test, $cipher, $key, $iv, $tag);
$decrypted = mc_decrypt($encrypted, $cipher, $key, $iv, $tag);
if($decrypted == $test)
{
echo "Success!";
}
```<issue_comment>username_1: I know this is an old question, but if someone is facing the same issue, try changing the `$options` parameter in the `openssl_decrypt` call; that can give you back the decrypted data.
For example, decryption in OpenSSL could be done using the flag `OPENSSL_ZERO_PADDING`:
`openssl_decrypt($encrypted_text, $cipher, $key, OPENSSL_ZERO_PADDING, $iv);`
whereas the flag `OPENSSL_RAW_DATA` is mostly used to encrypt the plaintext in `openssl_encrypt`.
Example:
`openssl_encrypt($data, $cipher, $key, OPENSSL_RAW_DATA, $iv);`
Upvotes: 0 <issue_comment>username_2: I also faced the same issue. This kind of problem comes from the lack of documentation in the PHP library.
The issue is the `$tag`, which is generated during encryption based on the `$plaintext`: whatever value you assign to `$tag` beforehand gets overwritten by `openssl_encrypt`.
Print `$tag` after encryption and keep it safe, then use that `$tag` value during decryption. It will work fine.
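In other words, let `openssl_encrypt` fill `$tag` and pass that same value back when decrypting. A minimal sketch (the key and IV here are placeholders generated on the spot, unlike the base64-decoded values in the question):
```
$cipher = "aes-256-gcm";
$key = random_bytes(32); // placeholder key
$iv = random_bytes(openssl_cipher_iv_length($cipher)); // placeholder IV
$tag = null; // filled BY openssl_encrypt, not read from it
$encrypted = openssl_encrypt("Teststring", $cipher, $key, 0, $iv, $tag);
// store $tag alongside the ciphertext, then reuse the same value here:
$decrypted = openssl_decrypt($encrypted, $cipher, $key, 0, $iv, $tag);
```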
Upvotes: 1 <issue_comment>username_3: You can use the below methods to encrypt and decrypt data in PHP:
```
//Define cipher
$cipher = "AES-256-CBC";
//Encryption key (note: this literal string is not actually 256 bits)
$key = "dhu-enc2022";
// Data Array key with parameter
$plaintext = json_encode(array("name"=>"<NAME>","email"=>"<EMAIL>","mobile"=>"1234567890"));
//For encryption
$ivlen=openssl_cipher_iv_length($cipher);
$iv=openssl_random_pseudo_bytes($ivlen);
$ciphertext_raw=openssl_encrypt($plaintext,$cipher,$key,$options=OPENSSL_RAW_DATA,$iv);
$hmac=hash_hmac('sha256',$ciphertext_raw,$key,$as_binary=true);
$ciphertext=base64_encode($iv.$hmac.$ciphertext_raw);
//For decrypt on the other end
$c=base64_decode($r->ciphertext);
$ivlen=openssl_cipher_iv_length($cipher);
$iv=substr($c,0,$ivlen);
$hmac=substr($c,$ivlen,$sha2len=32);
$ciphertext_raw=substr($c,$ivlen+$sha2len);
$original_plaintext=openssl_decrypt($ciphertext_raw,$cipher,$key,$options=OPENSSL_RAW_DATA,$iv);
$calcmac=hash_hmac('sha256',$ciphertext_raw,$key,$as_binary=true);
print_r($original_plaintext);
```
Reference Link:- [encryption and decryption in php](https://www.codegrepper.com/code-examples/whatever/php+openssl_encrypt)
Upvotes: 0
---
<issue_start>username_0: I can't get TabNavigator to work on iOS, although it works perfectly in Android. Here are my steps to reproduce the problem
Open terminal window.
`react-native init tabnav`
`cd tabnav`
`rm -rf node_modules`
`rm -rf package.json`
`rm -rf package-lock.json`
Then I opened up `package.json` and pasted in the following contents:
```
{
"name": "tabnav",
"version": "0.0.1",
"private": true,
"scripts": {
"start": "node node_modules/react-native/local-cli/cli.js start",
"test": "jest"
},
"dependencies": {
"react": "16.0.0-beta.5",
"react-native": "0.49.3",
"react-navigation": "git+https://github.com/react-community/react-navigation.git"
},
"devDependencies": {
"babel-jest": "22.4.1",
"babel-preset-react-native": "4.0.0",
"jest": "22.4.2",
"react-test-renderer": "16.3.0-alpha.1"
},
"jest": {
"preset": "react-native"
}
}
```
Then I opened up `App.js` and pasted in the contents:
```
import React, { Component } from 'react';
import {TabNavigator} from 'react-navigation';
import {
Text,
View
} from 'react-native';
type Props = {};
class Page extends Component {
render() {
return (
Welcome to React Native!
);
}
}
const Navigator = TabNavigator({
Recent: {screen: Page},
Popular:{screen:Page}
}
);
export default Navigator;
```
Then I opened up the Xcode project, cleaned, and ran the project.
Then I get this error:
>
> undefined is not a function (near
> '...(0,\_reactNavigation.TabNavigator)...')
>
>
>
Why won't tab navigator work?
---
Note: if I replace all instances of tab navigator with stack navigator, the error goes away. But I need a tab navigator, not a stack navigator.<issue_comment>username_1: Try removing the react-navigation package from package.json, then reinstalling react-navigation. Hope it helps.
---
After doing this, you will notice that package.json lists a specific version of react-navigation, as opposed to just a link to a repository. The commands are sketched below.
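In practice that means something like the following (assuming npm; use the yarn equivalents if that is your package manager):
```
npm uninstall react-navigation
npm install react-navigation
```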
Upvotes: 2 [selected_answer]<issue_comment>username_2: In react-navigation above v2, a new TabNavigator is introduced. You can use `createMaterialTopTabNavigator` to create a TabNavigator on top of your screens.
```
import {createMaterialTopTabNavigator} from "react-navigation";
const TabsAB = createMaterialTopTabNavigator({
Tab_A: {
screen: ScreenA,
....
```
Upvotes: 0
---
<issue_start>username_0: I am really new to the Google API key.
When I test my code on the local system it works fine, but on an actual server it gives me an `SSLHandshakeError`.
**Server configuration**: `nginx + unicorn + ubuntu + django + Python 2.7`
`SSLHandshakeError at 'url'
[Errno 1] _ssl.c:510: error:14090086:SSL routines:SSL3_GET_SERVER_CERTIFICATE:certificate verify failed`
---
<issue_start>username_0: I am given a raw string which is a path or "direction" to a string in JSON.
I need the following string converted to a list containing dictionaries..
```
st = """data/policy/line[Type="BusinessOwners"]/risk/coverage[Type="FuelHeldForSale"]/id"""
```
The list should look like this
```
paths = ['data','policy','line',{'Type':'BusinessOwners'},'risk','coverage',{"Type":"FuelHeldForSale"},"id"]
```
I then iterate over this list to find the object in the JSON (which is in a Spark RDD)
I attempted `st.split('/')`, which gave me
```
st.split('/')
Out[370]:
['data',
'policy',
'line[Type="BusinessOwners"]',
'risk',
'coverage[Type="FuelHeldForSale"]',
'CalculationDisplay']
```
But how do I convert and split items like `'line[Type="BusinessOwners"]'` to `'line',{'Type':'BusinessOwners'}`?<issue_comment>username_1: This would be more efficient if it weren't a one-liner, but I'll let you figure that out from here. You'll probably want a more robust regex-based parsing engine if your input varies more than your given schema. Or just use a standardized data model like JSON.
```
[word if '=' not in word else {word.split('=')[0]:word.split('=')[1]} for word in re.split('[/\[]', st.replace(']','').replace('"',''))]
```
>
> ['data', 'policy', 'line', {'Type': 'BusinessOwners'}, 'risk',
> 'coverage', {'Type': 'FuelHeldForSale'}, 'id']
>
>
>
Upvotes: 1 <issue_comment>username_2: ```
import json
first_list = st.replace('[', '/{"').replace(']', '}').replace('="', '": "').split('/')
[item if not "{" in item else json.loads(item) for item in first_list]
```
or using `ast.literal_eval`
```
import ast
[item if not "{" in item else ast.literal_eval(item) for item in first_list]
out:
['data',
'policy',
'line',
{'Type': 'BusinessOwners'},
'risk',
'coverage',
{'Type': 'FuelHeldForSale'},
'id']
```
Upvotes: 2 [selected_answer]<issue_comment>username_3: [Regular expressions](https://docs.python.org/3/howto/regex.html) may be a good tool here. It looks like you want to transform elements that look like `text1[text2="text3"]` into `text1, {text2: text3}`. The regex would look something like this:
```
(\w+)\[(\w+)=\"(\w+)\"\]
```
You can modify this expression in any number of ways. For example, you could use something other than `\w+` for the names, and insert `\s*` to allow optional whitespace wherever you want.
The next thing to keep in mind is that when you do find a match, you need to expand your list. The easiest way to do that would be to just create a new list and [append/extend](https://docs.python.org/3/tutorial/datastructures.html#more-on-lists) it:
```
import re
paths = []
pattern = re.compile(r'(\w+)\[(\w+)=\"(\w+)\"\]')
for item in st.split('/'):
match = pattern.fullmatch(item)
if match:
paths.append(match.group(1))
paths.append({match.group(2): match.group(3)})
else:
paths.append(item)
```
This makes a `paths` that is
```
['data', 'policy', 'line', {'Type': 'BusinessOwners'}, 'risk', 'coverage', {'Type': 'FuelHeldForSale'}, 'id']
```
[[IDEOne Link]](https://ideone.com/CaIEkl)
I personally like to split the functionality of my code into pipelines of functions. In this case, I would have the main loop accumulate the `paths` list based on a function that returned replacements for the split elements:
```
def get_replacement(item):
match = pattern.fullmatch(item)
if match:
return match.group(1), {match.group(2): match.group(3)}
return item,
paths = []
for item in st.split('/'):
paths.extend(get_replacement(item))
```
The comma in `return item,` is very important. It makes the return value into a tuple, so you can use `extend` on whatever the function returns.
[[IDEOne Link]](https://ideone.com/cs7s9P)
Upvotes: 0 <issue_comment>username_4: Let's do it in one line:
```
import re
pattern=r'(?<=Type=)\"(\w+)'
data="""data/policy/line[Type="BusinessOwners"]/risk/coverage[Type="FuelHeldForSale"]/id"""
print([{'Type':re.search(pattern,i).group().replace('"','')} if '=' in i else i for i in re.split('\/|\[',data)])
```
output:
```
['data', 'policy', 'line', {'Type': 'BusinessOwners'}, 'risk', 'coverage', {'Type': 'FuelHeldForSale'}, 'id']
```
Upvotes: 0
---
<issue_start>username_0: I'm using `update` after a mutation to update the store when a new comment is created. I also have a subscription for comments on this page.
Either one of these methods works as expected by itself. However when I have both, then the user who created the comment will see the comment on the page twice and get this error from React:
```
Warning: Encountered two children with the same key,
```
I think the reason for this is the mutation update and the subscription both return a new node, creating a duplicate entry. Is there a recommended solution to this? I couldn’t see anything in the Apollo docs but it doesn’t seem like that much of an edge use case to me.
This is the component with my subscription:
```
import React from 'react';
import { graphql, compose } from 'react-apollo';
import gql from 'graphql-tag';
import Comments from './Comments';
import NewComment from './NewComment';
import _cloneDeep from 'lodash/cloneDeep';
import Loading from '../Loading/Loading';
class CommentsEventContainer extends React.Component {
_subscribeToNewComments = () => {
this.props.COMMENTS.subscribeToMore({
variables: {
eventId: this.props.eventId,
},
document: gql`
subscription newPosts($eventId: ID!) {
Post(
filter: {
mutation_in: [CREATED]
node: { event: { id: $eventId } }
}
) {
node {
id
body
createdAt
event {
id
}
author {
id
}
}
}
}
`,
updateQuery: (previous, { subscriptionData }) => {
// Make vars from the new subscription data
const {
author,
body,
id,
__typename,
createdAt,
event,
} = subscriptionData.data.Post.node;
// Clone store
let newPosts = _cloneDeep(previous);
// Add sub data to cloned store
newPosts.allPosts.unshift({
author,
body,
id,
__typename,
createdAt,
event,
});
// Return new store obj
return newPosts;
},
});
};
_subscribeToNewReplies = () => {
this.props.COMMENT_REPLIES.subscribeToMore({
variables: {
eventId: this.props.eventId,
},
document: gql`
subscription newPostReplys($eventId: ID!) {
PostReply(
filter: {
mutation_in: [CREATED]
node: { replyTo: { event: { id: $eventId } } }
}
) {
node {
id
replyTo {
id
}
body
createdAt
author {
id
}
}
}
}
`,
updateQuery: (previous, { subscriptionData }) => {
// Make vars from the new subscription data
const {
author,
body,
id,
__typename,
createdAt,
replyTo,
} = subscriptionData.data.PostReply.node;
// Clone store
let newPostReplies = _cloneDeep(previous);
// Add sub data to cloned store
newPostReplies.allPostReplies.unshift({
author,
body,
id,
__typename,
createdAt,
replyTo,
});
// Return new store obj
return newPostReplies;
},
});
};
componentDidMount() {
this._subscribeToNewComments();
this._subscribeToNewReplies();
}
render() {
if (this.props.COMMENTS.loading || this.props.COMMENT_REPLIES.loading) {
return ;
}
const { eventId } = this.props;
const comments = this.props.COMMENTS.allPosts;
const replies = this.props.COMMENT_REPLIES.allPostReplies;
const { user } = this.props.COMMENTS;
const hideNewCommentForm = () => {
if (this.props.hideNewCommentForm === true) return true;
if (!user) return true;
return false;
};
return (
{!hideNewCommentForm() && (
)}
);
}
}
const COMMENTS = gql`
query allPosts($eventId: ID!) {
user {
id
}
allPosts(filter: { event: { id: $eventId } }, orderBy: createdAt_DESC) {
id
body
createdAt
author {
id
}
event {
id
}
}
}
`;
const COMMENT_REPLIES = gql`
query allPostReplies($eventId: ID!) {
allPostReplies(
filter: { replyTo: { event: { id: $eventId } } }
orderBy: createdAt_DESC
) {
id
replyTo {
id
}
body
createdAt
author {
id
}
}
}
`;
const CommentsEventContainerExport = compose(
graphql(COMMENTS, {
name: 'COMMENTS',
}),
graphql(COMMENT_REPLIES, {
name: 'COMMENT_REPLIES',
}),
)(CommentsEventContainer);
export default CommentsEventContainerExport;
```
And here is the NewComment component:
```
import React from 'react';
import { compose, graphql } from 'react-apollo';
import gql from 'graphql-tag';
import './NewComment.css';
import UserPic from '../UserPic/UserPic';
import Loading from '../Loading/Loading';
class NewComment extends React.Component {
constructor(props) {
super(props);
this.state = {
body: '',
};
this.handleChange = this.handleChange.bind(this);
this.handleSubmit = this.handleSubmit.bind(this);
this.onKeyDown = this.onKeyDown.bind(this);
}
handleChange(e) {
this.setState({ body: e.target.value });
}
onKeyDown(e) {
if (e.keyCode === 13) {
e.preventDefault();
this.handleSubmit();
}
}
handleSubmit(e) {
if (e !== undefined) {
e.preventDefault();
}
const { groupOrEvent } = this.props;
const authorId = this.props.USER.user.id;
const { body } = this.state;
const { queryToUpdate } = this.props;
const fakeId = '-' + Math.random().toString();
const fakeTime = new Date();
if (groupOrEvent === 'group') {
const { locationId, groupId } = this.props;
this.props.CREATE_GROUP_COMMENT({
variables: {
locationId,
groupId,
body,
authorId,
},
optimisticResponse: {
__typename: 'Mutation',
createPost: {
__typename: 'Post',
id: fakeId,
body,
createdAt: fakeTime,
reply: null,
event: null,
group: {
__typename: 'Group',
id: groupId,
},
location: {
__typename: 'Location',
id: locationId,
},
author: {
__typename: 'User',
id: authorId,
},
},
},
update: (proxy, { data: { createPost } }) => {
const data = proxy.readQuery({
query: queryToUpdate,
variables: {
groupId,
locationId,
},
});
data.allPosts.unshift(createPost);
proxy.writeQuery({
query: queryToUpdate,
variables: {
groupId,
locationId,
},
data,
});
},
});
} else if (groupOrEvent === 'event') {
const { eventId } = this.props;
this.props.CREATE_EVENT_COMMENT({
variables: {
eventId,
body,
authorId,
},
optimisticResponse: {
__typename: 'Mutation',
createPost: {
__typename: 'Post',
id: fakeId,
body,
createdAt: fakeTime,
reply: null,
event: {
__typename: 'Event',
id: eventId,
},
author: {
__typename: 'User',
id: authorId,
},
},
},
update: (proxy, { data: { createPost } }) => {
const data = proxy.readQuery({
query: queryToUpdate,
variables: { eventId },
});
data.allPosts.unshift(createPost);
proxy.writeQuery({
query: queryToUpdate,
variables: { eventId },
data,
});
},
});
}
this.setState({ body: '' });
}
render() {
if (this.props.USER.loading) return ;
return (
Submit
);
}
}
const USER = gql`
query USER {
user {
id
}
}
`;
const CREATE_GROUP_COMMENT = gql`
mutation CREATE_GROUP_COMMENT(
$body: String!
$authorId: ID!
$locationId: ID!
$groupId: ID!
) {
createPost(
body: $body
authorId: $authorId
locationId: $locationId
groupId: $groupId
) {
id
body
author {
id
}
createdAt
event {
id
}
group {
id
}
location {
id
}
reply {
id
replyTo {
id
}
}
}
}
`;
const CREATE_EVENT_COMMENT = gql`
mutation CREATE_EVENT_COMMENT($body: String!, $eventId: ID!, $authorId: ID!) {
createPost(body: $body, authorId: $authorId, eventId: $eventId) {
id
body
author {
id
}
createdAt
event {
id
}
}
}
`;
const NewCommentExport = compose(
graphql(CREATE_GROUP_COMMENT, {
name: 'CREATE_GROUP_COMMENT',
}),
graphql(CREATE_EVENT_COMMENT, {
name: 'CREATE_EVENT_COMMENT',
}),
graphql(USER, {
name: 'USER',
}),
)(NewComment);
export default NewCommentExport;
```
And the full error message is:
```
Warning: Encountered two children with the same key, `<KEY>`. Keys should be unique so that components maintain their identity across updates. Non-unique keys may cause children to be duplicated and/or omitted — the behavior is unsupported and could change in a future version.
in ul (at Comments.js:9)
in Comments (at CommentsEventContainer.js:157)
in CommentsEventContainer (created by Apollo(CommentsEventContainer))
in Apollo(CommentsEventContainer) (created by Apollo(Apollo(CommentsEventContainer)))
in Apollo(Apollo(CommentsEventContainer)) (at EventPage.js:110)
in section (at EventPage.js:109)
in DocumentTitle (created by SideEffect(DocumentTitle))
in SideEffect(DocumentTitle) (at EventPage.js:51)
in EventPage (created by Apollo(EventPage))
in Apollo(EventPage) (at App.js:176)
in Route (at App.js:171)
in Switch (at App.js:94)
in div (at App.js:93)
in main (at App.js:80)
in Router (created by BrowserRouter)
in BrowserRouter (at App.js:72)
in App (created by Apollo(App))
in Apollo(App) (at index.js:90)
in QueryRecyclerProvider (created by ApolloProvider)
in ApolloProvider (at index.js:89)
```<issue_comment>username_1: I stumbled upon the same problem and did not find an easy and clean solution.
What I did was use the filter functionality of the subscription resolver on the server. You can follow this [tutorial](https://dev-blog.apollodata.com/tutorial-graphql-subscriptions-server-side-e51c32dc2951) which describes how to set up the server, and this [tutorial](https://dev-blog.apollodata.com/tutorial-graphql-subscriptions-client-side-40e185e4be76) for the client.
In short:
* Add some kind of browser session id, whether the JWT token or some other unique key (e.g. a UUID), as a query
```js
type Query {
getBrowserSessionId: ID!
}
Query: {
getBrowserSessionId() {
return 1; // some uuid
},
}
```
* Get it on the client and e.g. save it to the local storage
```js
...
if (!getBrowserSessionIdQuery.loading) {
localStorage.setItem("browserSessionId", getBrowserSessionIdQuery.getBrowserSessionId);
}
...
const getBrowserSessionIdQueryDefinition = gql`
query getBrowserSessionId {
getBrowserSessionId
}
`;
const getBrowserSessionIdQuery = graphql(getBrowserSessionIdQueryDefinition, {
name: "getBrowserSessionIdQuery"
});
...
```
* Add a subscription type with a certain id as parameter on the server
```js
type Subscription {
messageAdded(browserSessionId: ID!): Message
}
```
* On the resolver add a filter for the browser session id
```js
import { withFilter } from 'graphql-subscriptions';
...
Subscription: {
messageAdded: {
subscribe: withFilter(
() => pubsub.asyncIterator('messageAdded'),
(payload, variables) => {
// do not update the browser with the same sessionId with which the mutation is performed
return payload.browserSessionId !== variables.browserSessionId;
}
)
}
}
```
* When you add the subscription to the query you add the browser session id as parameter
```js
...
const messageSubscription= gql`
subscription messageAdded($browserSessionId: ID!) {
messageAdded(browserSessionId: $browserSessionId) {
// data from message
}
}
`
...
componentWillMount() {
this.props.data.subscribeToMore({
document: messagesSubscription,
variables: {
browserSessionId: localStorage.getItem("browserSessionId"),
},
updateQuery: (prev, {subscriptionData}) => {
// update the query
}
});
}
```
* On the mutation on the server you also add the browser session id as parameter
```js
`Mutation {
createMessage(message: MessageInput!, browserSessionId: ID!): Message!
}`
...
createMessage: (_, { message, browserSessionId }) => {
const newMessage ...
...
pubsub.publish('messageAdded', {
messageAdded: newMessage,
browserSessionId
});
return newMessage;
}
```
* When you call the mutation, you add the browser session id from the local storage and perform the updating of the query in the update functionality. Now the query should update from the mutation on the browser where the mutation is sent, and on the other browsers from the subscription.
```js
const createMessageMutation = gql`
mutation createMessage($message: MessageInput!, $browserSessionId: ID!) {
createMessage(message: $message, browserSessionId: $browserSessionId) {
...
}
}
`
...
graphql(createMessageMutation, {
props: ({ mutate }) => ({
createMessage: (message, browserSessionId) => {
return mutate({
variables: {
message,
browserSessionId,
},
update: ...,
});
},
}),
});
...
_onSubmit = (message) => {
const browserSessionId = localStorage.getItem("browserSessionId");
this.props.createMessage(message, browserSessionId);
}
```
Upvotes: 1 <issue_comment>username_2: This is actually pretty easy to fix. I was confused for a long time as my subscriptions would intermittently fail. It turns out this was a Graphcool issue; switching from the Asian to the USA cluster stopped the flakiness.
You just have to test whether the ID already exists in the store, and not add it if it does. I've added code comments where I've changed the code:
```js
_subscribeToNewComments = () => {
this.props.COMMENTS.subscribeToMore({
variables: {
eventId: this.props.eventId,
},
document: gql`
subscription newPosts($eventId: ID!) {
Post(
filter: {
mutation_in: [CREATED]
node: { event: { id: $eventId } }
}
) {
node {
id
body
createdAt
event {
id
}
author {
id
}
}
}
}
`,
updateQuery: (previous, { subscriptionData }) => {
const {
author,
body,
id,
__typename,
createdAt,
event,
} = subscriptionData.data.Post.node;
let newPosts = _cloneDeep(previous);
// Test to see if item is already in the store
const idAlreadyExists =
newPosts.allPosts.filter(item => {
return item.id === id;
}).length > 0;
// Only add it if it isn't already there
if (!idAlreadyExists) {
newPosts.allPosts.unshift({
author,
body,
id,
__typename,
createdAt,
event,
});
return newPosts;
}
},
});
};
_subscribeToNewReplies = () => {
this.props.COMMENT_REPLIES.subscribeToMore({
variables: {
eventId: this.props.eventId,
},
document: gql`
subscription newPostReplys($eventId: ID!) {
PostReply(
filter: {
mutation_in: [CREATED]
node: { replyTo: { event: { id: $eventId } } }
}
) {
node {
id
replyTo {
id
}
body
createdAt
author {
id
}
}
}
}
`,
updateQuery: (previous, { subscriptionData }) => {
const {
author,
body,
id,
__typename,
createdAt,
replyTo,
} = subscriptionData.data.PostReply.node;
let newPostReplies = _cloneDeep(previous);
// Test to see if item is already in the store
const idAlreadyExists =
newPostReplies.allPostReplies.filter(item => {
return item.id === id;
}).length > 0;
// Only add it if it isn't already there
if (!idAlreadyExists) {
newPostReplies.allPostReplies.unshift({
author,
body,
id,
__typename,
createdAt,
replyTo,
});
return newPostReplies;
}
},
});
};
```
Upvotes: 4 [selected_answer]
---
<issue_start>username_0: I have a case where I got a results file with the following pattern:
```
path:pattern found
```
for example
```
./user/home/file1:this is a game
```
In other words, when I searched for some string I got the file and the line where it was found.
The problem is that sometimes I have multiple matches in the same file, so I would like to keep only one entry per file (the matched lines differ, so simply removing duplicate lines is not possible).
Any help or ideas are appreciated :)
End results is to turn this:
```
/user/home/desktop/file1:this is a game
/user/home/desktop/file1:what kind of game
/user/home/desktop/file1:fast action game
```
into just the first result found per file, without losing the rest of the data in the file.
**Update1:**
So the actual file looks like this:
```
/user/home/desktop/file1:this is a game
/user/home/desktop/file1:what kind of game
/user/home/desktop/file1:fast action game
/user/home/desktop/file2:a game
/user/home/desktop/file3:of game
/user/home/desktop/file4:fast game
```
I'm looking to get rid of the multiple occurrences of the same file, so it should look like this:
```
/user/home/desktop/file1:this is a game
/user/home/desktop/file2:a game
/user/home/desktop/file3:of game
/user/home/desktop/file4:fast game
```<issue_comment>username_1: You could use `sort -u`:
```
grep pattern files | sort -t: -u -k1,1
```
* `-t:` - use : as the delimiter
* `-k1,1` - sort based on the first field only
* `-u` - removed duplicates (based on the first field)
This will retain just one occurrence per file, removing any duplicates.
For your example, this is the output you get:
```
/user/home/desktop/file1:this is a game
```
In case you are looking for multiple distinct matches with a file, then:
```
grep pattern files | sort -u
```
Upvotes: 7 [selected_answer]<issue_comment>username_2: In case raw file names are sufficient, one can use `grep pattern -l`, where `-l` option is documented as "print only names of FILEs with selected lines". But it turns out that each file is printed only once, even when multiple lines are matched inside.
The resulting output in your case would be:
```
/user/home/desktop/file2
/user/home/desktop/file3
/user/home/desktop/file4
```
Upvotes: 0 <issue_comment>username_3: Are you aware of the multiplicity switch in `grep`? This is an excerpt from the manpage:
```
-m NUM, --max-count=NUM
Stop reading a file after NUM matching lines. If the input is standard input from a regular file, and
NUM matching lines are output, grep ensures that the standard input is positioned to just after the
last matching line before exiting, regardless of the presence of trailing context lines. This enables
a calling process to resume a search. When grep stops after NUM matching lines, it outputs any
trailing context lines. When the -c or --count option is also used, grep does not output a count
greater than NUM. When the -v or --invert-match option is also used, grep stops after outputting NUM
non-matching lines.
```
So, using `grep -m 1 "pattern" files` you can limit the amount of results per file to one.
Upvotes: 1
---
<issue_start>username_0: In a Vue.js project, how can I get the `csrftoken`?
I tried using `js-cookie`, but cannot get it:
```
import Cookies from 'js-cookie';
if (Cookies.get('csrftoken')!==undefined) { // this branch is skipped, because Cookies.get('csrftoken') is undefined
config.headers['x-csrftoken']= Cookies.get('csrftoken'); // 'CSRFToken '
}
```
but I can get other cookies.
---
**EDIT**
```
Cookies.get('csrftoken')
```
This code returns `undefined`.
But when I inspect the browser's cookies, the `csrftoken` is there:
[](https://i.stack.imgur.com/0GUAr.jpg)
---
<issue_start>username_0: I am using Rails with React to return data from the database (MongoDB). I have successfully connected to the DB. However, it returns the error "TypeError: home_fires is undefined". Please help...
```
constructor(props) {
super(props);
let home_fires = type.string;
this.state = {
error: null,
isLoaded: false,
home_fires: []
};
}
componentDidMount() {
fetch("http://localhost:3000/api/home_fires")
.then(res => res.json())
.then(
(results) => {
this.setState({
isLoaded: true,
home_fires: results.home_fires
});
},
(error) => {
this.setState({
isLoaded: true,
error
});
}
)
}
```
Here is how I render the data: `{this.props.home_fires.map(this.homefire)} ( ...`
---
<issue_start>username_0: I have two different types of images with various sizes inside two div tags. I need to resize the second div's image to half the size of the first div's image.
```css
.first-div img {
 width:500px; /* this img size may be random */
}
.second-div img {
 width:250px; /* need to resize the img to half of the first-div image */
}
```
```html


```
For example, if the first div image size is 200, then the second div image size should be 100. Can anyone help me achieve this?<issue_comment>username_1: By using `jQuery`:
```js
$('.second-div img').width($('.first-div img').width()/2);
```
```html


```
You can achieve this by using `css variables`
```css
:root {
--main-width: 500px;
}
img {
vertical-align: middle;
border-style: none;
}
.first-div img {
width:var(--main-width);
}
.second-div img {
width:calc(var(--main-width)/2);
}
```
```html


```
Upvotes: 2 [selected_answer]<issue_comment>username_2: With jQuery you can do
`$('.second-div img').width($('.first-div img').width()/2)`
Upvotes: 0 <issue_comment>username_3: You can achieve this by using jQuery like this:
```
var img = new Image();
img.onload = function() {
alert(this.width + 'x' + this.height);
var width = this.width;
$('.second-div img').css('width',width/2);
}
var url = $('.first-div').find('img').attr("src"); // get the image URL dynamically
img.src = url;
// or
img.src = 'https://i.pinimg.com/originals/99/68/8e/99688e0cebdb1f8bde066b9bbf969003.jpg'; //image_path_of_first_div_image
```
Here you get the width and height of the image dynamically by loading it from its URL.
Upvotes: 1
---
<issue_start>username_0: I am trying to understand PCF concepts. Once I am done with creating MySQL services in PCF, how can I manage that database, like creating and maintaining tables, just like we do in our traditional environment using MySQL developer tools? I came across one service, PivotalMySQLWeb, and tried it, but didn't like it much. So if I can somehow get the connection details of the MySQL service, I can use them to connect using SQL Developer.
<http://docs.pivotal.io/p-mysql/2-0/use.html>
<https://github.com/andreasf/cf-mysql-plugin#usage>
More generally, you can use an SSH tunnel to access any service, not just MySQL. This also allows you to use whatever tool you would like to access the service.
This is documented [here](https://docs.pivotal.io/pivotalcf/1-9/devguide/deploy-apps/ssh-services.html), but if for some reason that goes away here are the steps.
1. Create your target service instance, if you don't have one already.
2. Push an app, any app. It really doesn't matter, it can be a hello world app. The app doesn't even need to use the service. We just need something to connect to.
3. Either Bind the service from #1 to the app in #2 or create a service key using the service from #1. If you bind to the app, run `cf env` or if you use a service key run `cf service-key MY-DB EXTERNAL-ACCESS-KEY` and either one will give you your service credentials.
4. Run `cf ssh -L 63306:us-cdbr-iron-east-01.p-mysql.net:3306 YOUR-HOST-APP`, where `63306` is the local port you'll connect to on your machine and `us-cdbr-iron-east-01.p-mysql.net:3306` are the host and port from the credentials in step #3.
5. The tunnel is now up, use whatever client you'd like to connect to your service. For example: `mysql -u b5136e448be920 -h localhost -p -D ad_b2fca6t49704585d -P 63306`, where `b5136e448be920` and `ad_b2fca6t49704585d` are the username and database name from step #3 and 63306 is the local port you picked from step #4.
Upvotes: 1 [selected_answer]<issue_comment>username_2: Additionally, if you want to connect aws-rds-mysql (instantiated from Pivotal Cloud Foundry) from IntelliJ, you can use the DB-Navigator Plugin (<https://plugins.jetbrains.com/plugin/1800-database-navigator>) inside IntelliJ, through which, database manipulation can be performed.
After creating the ssh tunnel `$ cf ssh -L 63306::3306 YOUR-HOST-APP` (as also mentioned in <https://docs.pivotal.io/pivotalcf/2-4/devguide/deploy-apps/ssh-services.html>),
* Go to DB Navigator plugin and click on custom under new connection.
* Enter the URL as: jdbc:mysql://:password>@localhost:63306/
The following thread might be helpful for you as well [How do I connect to my MySQL service on Pivotal Cloud Foundry (PCF) via MySQL Workbench or CLI or MySQLWeb Database Management App?](https://stackoverflow.com/questions/54284031/how-do-i-connect-to-my-mysql-service-on-pivotal-cloud-foundry-pcf-via-mysql-wo/65202309#65202309)
Upvotes: -1
---
<issue_start>username_0: I have a Db table listing media files which have been archived to LTO (4.3 million of them). The ongoing archiving process is manual, carried out by different people as and when downtime arises. We need an efficient way of determining which files in a folder are **not** archived so we can complete the job if needed, or confidently delete the folder if it's all archived.
(For the sake of argument let's assume all filenames are unique, we do need to handle duplicates but that's not this question.)
I should probably just fire up Perl/Python/Ruby and talk to the Db thru them. But it would take me quite a while to get back up to speed in those and I have a nagging feeling that it would be overkill.
I can think of a two simpler approaches, but each has drawbacks and I wonder if there's a yet better way?
**Method 1:** simply bash-recurse down each directory structure, invoking sqlite3 per file and outputting the filename if the query returns an empty result.
This is probably less efficient than
**Method 2:** recurse through the directory structure and produce an sql file which will:
* create a table with all our on-disk files in it (let's call it the "working table")
* compare that with the archive table - select all files in the working table but not in the archive table
* destroy the working table, or quit without saving
While 2 seems likely more efficient than 1, it seems that building the comparison table in the first place might incur some overhead and I did kind of imagine the backup table as a monolithic read-only thing that people refer to and don't write into.
Is there any way in pure SQL to just output a list of not-founds (without them existing in another table)?<issue_comment>username_1: Finding values not in some other table is easy:
```
SELECT *
FROM SomeTable
WHERE File NOT IN (SELECT File
FROM OtherTable);
```
To create the other table, you can write a series of INSERT statements, or just use the `.import` command of the [shell](http://www.sqlite.org/cli.html) from a plain text file.
A [temporary table](http://www.sqlite.org/lang_createtable.html) will not be saved.
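For example, in the sqlite3 shell (`filelist.txt` here is a hypothetical one-path-per-line listing of the folder being checked):
```
CREATE TEMP TABLE SomeTable(File TEXT);
.import filelist.txt SomeTable
SELECT * FROM SomeTable
WHERE File NOT IN (SELECT File FROM OtherTable);
```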
Upvotes: 1 <issue_comment>username_2: Sooo, I think I have to answer my own question.
tl;dr - **use a scripting language** (the thing I was hoping to avoid)
Trying that and the other two approaches (details below) on my system yields the following numbers when checking a 33-file directory structure against the 4.3 million record Db:
**A Ruby script**: 0.27s
**Bash running sqlite3 once per file ("Method 1")**: 0.73s
**SQL making a temp table and using "NOT IN" (Method 2)**: 8s
The surprising thing for me is that the all-SQL approach is an order of magnitude slower than bash. This was true using both the macOS (10.12) command-line sqlite3 and the GUI "DB Browser for SQLite".
The details
===========
Script method
-------------
This is the crux of my Ruby script. Ruby of course is not the fastest language out there and you could probably do better than this (but if you *really* need speed, it might be time for C)
```
require "sqlite3"
db = SQLite3::Database.open 'path/to/mydb.db'
# This will skip Posix hidden files, which is fine by me
Dir.glob("search_path/**/*") do |f|
file = File.stat(f)
next unless file.file?
short_name = File.basename(f)
qouted_short_name = short_name.gsub("'", "''")
size = File.size(f)
sql_cmd = "select * from 'Backup_Table' where filename='#{qouted_short_name}' and sizeinbytesincrsrc=#{size}"
count = db.execute(sql_cmd).length
if count == 0
puts "UNARCHIVED: #{f}"
end
end
```
(Note the next two are Not The Answer, but I'll include them if anyone wants to check my methodology)
Bash
----
This is a crude Bash recurse-through-files which will print a list of files that *are* backed up (not what I want, but gives me an idea of speed):
```
#! /bin/bash
recurse() {
for file in *; do
if [ -d "${file}" ]; then
thiswd=`pwd`
(cd "${file}" && recurse)
cd "${thiswd}"
elif [ -f "${file}" ]; then
fullpath=`pwd`${file}
filesize=`stat -f%z "${file}"`
sqlite3 /path/to/mydb.db "select filename from 'Backup_Table' where filename='$file'"
fi
done
}
cd "$1" && recurse
```
SQL
---
[CL](https://stackoverflow.com/users/11654/cl) has detailed method 2 nicely in his/her answer
Upvotes: 0
---
<issue_start>username_0: I am trying to sort an array of integers so that the evens print out in descending order FIRST and then the odd numbers in the array print out in ascending order.
```
So the output would look like:
8 6 4 2 1 3 5 7 9
```
How would I go about doing this?
```
#include <stdio.h>
#include <stdlib.h>
int compare(const void *p, const void *q);
void printArr(int arr[], int n);
//Driver program to test sort
int main()
{
int nums[] = {1, 2, 3, 4, 5, 6, 7, 8, 9};
int size = sizeof(nums) / sizeof(nums[0]);
qsort((void*) nums, size, sizeof(nums[0]), compare);
printf("Sorted array is\n");
printArr(nums, size);
return 0;
}
//This function is used in qsort to decide the relative order of elements at addresses p and q
int compare(const void *p, const void *q)
{
return ( *(int*)p < *(int*)q);
}
//A utility function to print an array
void printArr(int arr[], int n)
{
int i;
for (i = 0; i < n; ++i)
printf("%d ", arr[i]);
}
```<issue_comment>username_1: As it is your comparison function does not match what qsort is looking for. The rules are as follows (assuming the first input is `p`, second input is `q`):
* a negative value (e.g. `-1`) means that p should come before q
* `0` means that they are equal
* a positive value (e.g. `1`) means that q should come before p
That being said, the key that will allow you to make this work is to change your comparison function to deal with even/odd numbers differently. More specifically, you need to check whether the first is even and the second is odd, the first is odd and the second is even, both are even, or both are odd, and act accordingly.
Here is the setup (keep in mind that modulus is expensive, so this isn't the most efficient):
```
int first = *(int *)p;
int second = *(int *)q;
int firstIsOdd = first % 2; // Returns 1 if odd, 0 if even
int secondIsOdd = second % 2;
```
Now that you know which one is even and which one is odd, you can deal with each case differently. Here are the rules (a full comparator sketch follows the list):
1. First is **odd** and second is **even**? Return `1` so that second comes before first in the list.
2. First is **even** and second is **odd**? Return `-1` so that first comes before second in the list.
3. Both **odd**? Compare them normally to achieve ascending order (i.e., return `0` if equal, `-1` if first is less than second, `1` if second is less than first)
4. Both **even**? Compare them, but do the opposite of what you would normally do to achieve descending order (i.e., return `0` if equal, `-1` if second is less than first, `1` if first is less than second)
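Putting the four rules together, a comparator might look like this sketch (it assumes non-negative values, since `%` behaves differently for negatives in C):
```
int compare(const void *p, const void *q)
{
    int first = *(const int *)p;
    int second = *(const int *)q;
    int firstIsOdd = first % 2;   // 1 if odd, 0 if even
    int secondIsOdd = second % 2;

    if (firstIsOdd != secondIsOdd)
        return firstIsOdd - secondIsOdd;             // evens before odds
    if (firstIsOdd)
        return (first > second) - (first < second);  // both odd: ascending
    return (second > first) - (second < first);      // both even: descending
}
```
With the example array this yields `8 6 4 2 1 3 5 7 9`, the desired output.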
Upvotes: 2 <issue_comment>username_2: In the compare function, test whether p and q are even numbers. There are four possible combinations of even and odd numbers {ee, eo, oe, oo}. If they are both even, compare them one way, e.g. `p > q` (note that the result must be mapped to a negative/zero/positive return value; the plain boolean `*(int*)p < *(int*)q` that you have returns false as 0, whereas it needs to be negative). If p is even and q is odd, then p is lesser; if p is odd and q is even, then p is greater. If they are both odd, compare them the other way, e.g. `p < q`.
Upvotes: 1
---
<issue_start>username_0: I have never used patches with Git before and I need some help. I am trying to apply a patch to a Git repo to test a Wine patch, specifically [this patch here](https://bugs.winehq.org/attachment.cgi?id=60752). So I did the following:
```
$ git clone git://source.winehq.org/git/wine.git
$ cd wine
$ nano patch.p1
```
I then pasted the content of the patch with `Ctrl`+`Shift`+`V`and used `Ctrl`+`O` to save. Then I tried this:
```
$ git am patch.p1
Patch format detection failed.
```
What am I doing wrong? I have never applied a patch before.<issue_comment>username_1: `Patch format detection failed.` probably means you're using the wrong command: use `git apply` instead of `git am` or the other way around.
See [What is the difference between git am and git apply?](https://stackoverflow.com/questions/12240154/what-is-the-difference-between-git-am-and-git-apply) for more on the difference between the 2.
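If the attachment is a plain diff rather than a mail-formatted patch (which the detection error suggests), the fix would be:
```
git apply patch.p1
```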
Upvotes: 6 [selected_answer]<issue_comment>username_2: Use the below command:
`patch -p1 < patch_file_name.patch`
* You will be asked to specify "File to patch"; mention the complete path.
* Assume -R? [n]: n
* Apply anyway? [n]: y
Do this for all files present in your patch.
* If any merge conflict occurs, check the conflict in the ".rej" file which has been generated, then resolve and apply those changes.
* Do "git add " and "commit" your changes.
Upvotes: 3
---
<issue_start>username_0: For making a circular `UIView` I am using the `cornerRadius` property.
I have a UIView with dimensions 79*158.
```
redView.layer.cornerRadius = redView.frame.size.height/2
redView.layer.masksToBounds = true
```
It shows an ellipse instead of a circle:

Any workaround, or does it only work with square views (e.g. UIView(100*100))?
I am OK if it resizes dynamically.<issue_comment>username_1: You can't.
Try resizing the `UIView` to a square: 79*79 or 158*158
And set:
```
redView.layer.cornerRadius = redView.frame.size.height/2
```
Upvotes: 0 <issue_comment>username_2: use this...
```
func makeCircle (view: UIView) {
view.clipsToBounds = true
let height = view.frame.size.height
let width = view.frame.size.width
let newHeight = min(height, width) // use "max" if you want big circle
var rectFrame = view.frame
rectFrame.size.height = newHeight
rectFrame.size.width = newHeight
view.frame = rectFrame
view.layer.cornerRadius = newHeight/2
}
```
use like this:
```
@IBOutlet var rectView: UIView!
override func viewDidLoad() {
super.viewDidLoad()
makeCircle(view: rectView)
}
```
Upvotes: 3 [selected_answer]<issue_comment>username_3: You have a UIView with dimensions 79*158, and that is the problem. You need exactly the **same height and width** to round a view into a circle shape.
**E.g.**
```
redView.frame.size.height = 79.0
redView.frame.size.width = 79.0
```
**or**
```
redView.frame.size.height = 158.0
redView.frame.size.width = 158.0
```
And apply corner radius like:
```
redView.clipsToBounds = true
redView.layer.cornerRadius = redView.frame.size.height / 2.0
```
**Result:**
[](https://i.stack.imgur.com/AFItv.png)
**Note**: Check your constrains also If you are using Auto Layout. Be sure view frame doesn't change.
Upvotes: 0 <issue_comment>username_4: If you are using constraints then changing the frame/bounds of the view is not a good idea. Instead you should do the following.
1. If the view is contained in a `UIViewController` then set the `cornerRadius` in `viewDidLayoutSubviews` method
2. And if the view is itself a subclass of `UIView`, then set the `cornerRadius` in the `layoutSubviews` method (see the sketch below)
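A minimal sketch of the first case (the outlet name `redView` is assumed from the question):
```
override func viewDidLayoutSubviews() {
    super.viewDidLayoutSubviews()
    // Auto Layout has finished at this point, so the frame is final
    redView.layer.cornerRadius = redView.bounds.height / 2
    redView.layer.masksToBounds = true
}
```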
Upvotes: 0 <issue_comment>username_5: Only a square view makes a perfect circle. For example, your view size must be (10*10), (50*50), (100*100), etc.; otherwise it won't be a perfect square.
Upvotes: 0 <issue_comment>username_6: Using IBDesignable, you can see the result in the storyboard or .XIB without running the project. **#simple way**
**Step 1.** Subclass UIView:
```
@IBDesignable class RoundedCornerView: UIView {
@IBInspectable var borderWidth:CGFloat = 2 {
didSet {
layer.borderWidth = borderWidth
}
}
@IBInspectable var borderColor:UIColor = UIColor.orangeGradientLight {
didSet {
layer.borderColor = borderColor.cgColor
}
}
override func layoutSubviews() {
super.layoutSubviews()
layer.cornerRadius = frame.height/2
layer.masksToBounds = true
layer.borderColor = borderColor.cgColor
layer.borderWidth = borderWidth
}
}
```
**Step 2.** Set custom class in identity inspector:
[](https://i.stack.imgur.com/3zkFQ.png)
Upvotes: 0
---
<issue_start>username_0: I am creating code to operate a robot. It is supposed to tell the robot when to turn depending on the readings it gets from its sensors. I tried an if statement for the turning, and was not happy with how the robot turned. I felt a while loop was better, as the code would not have to run all the way through to keep checking whether it should continue to turn, and would stay in the while loop until the turn is complete. The issue I am having is that the code does not pull readings from the sensors; it goes directly into the while loop and stays there. How can I resolve this issue?
```
#include <SoftwareSerial.h> // Import the serial Library
#include <Wire.h>
#include <Adafruit_MotorShield.h>
#include "utility/Adafruit_MS_PWMServoDriver.h"
// Create the motor shield object with the default I2C address
Adafruit_MotorShield AFMS = Adafruit_MotorShield();
// Select which 'port' M1, M2, M3 or M4. In this case, M1
Adafruit_DCMotor *FL= AFMS.getMotor(1); //Front left motor
Adafruit_DCMotor *FR= AFMS.getMotor(4); //Front right motor
int left_trig = 8;
int left_echo = 9;
int mid_trig = 5;
int mid_echo = 6;
int right_trig = 3;
int right_echo = 4;
long duration1, duration2, duration3, inches1, inches2, inches3;
void setup() {
Serial.begin(9600);
AFMS.begin(); // create with the default frequency 1.6KHz
//This establishes the sensors as inputs and outputs
pinMode(left_trig,OUTPUT);
pinMode(left_echo,INPUT);
pinMode (mid_trig,OUTPUT);
pinMode(mid_echo,INPUT);
pinMode (right_trig,OUTPUT);
pinMode(right_echo,INPUT);
FL->setSpeed(150);
FL->run(FORWARD);
//FL->run(RELEASE);
FR->setSpeed(150);
FR->run(BACKWARD);
//FR->run(RELEASE);
}
void loop() {
digitalWrite(left_trig, LOW);
delayMicroseconds(2);
digitalWrite(left_trig, HIGH);
delayMicroseconds(10);
duration1 = pulseIn(left_echo,HIGH);
// pinMode (mid_trig,OUTPUT);
digitalWrite(mid_trig, LOW);
delayMicroseconds(2);
digitalWrite(mid_trig, HIGH);
delayMicroseconds(10);
duration2 = pulseIn(mid_echo, HIGH);
// pinMode (right_trig,OUTPUT);
digitalWrite(right_trig, LOW);
delayMicroseconds(2);
digitalWrite(right_trig, HIGH);
delayMicroseconds(10);
duration3 = pulseIn(right_echo, HIGH);
// convert the time into inches
inches1 = microsecondsToInches(duration1);
inches2 = microsecondsToInches(duration2);
inches3 = microsecondsToInches(duration3);
FL->setSpeed(150);
FL->run(FORWARD);
FR->setSpeed(150);
FR->run(BACKWARD);
Serial.print(inches1);
Serial.print("in,\t");
Serial.print(inches2);
Serial.print("in,\t");
Serial.print(inches3);
Serial.print("in");
Serial.println();
while(inches3 <=8 && inches2 <=12){
// Serial.print(inches1);
// Serial.print("win,\t");
// Serial.print(inches2);
// Serial.print("win,\t");
// Serial.print(inches3);
// Serial.print("win");
// Serial.println();
FL->setSpeed(120);
FL->run(BACKWARD);
FR->setSpeed(120);
FR->run(BACKWARD);
}
FL->setSpeed(150);
FL->run(FORWARD);
FR->setSpeed(150);
FR->run(BACKWARD);
delay(50);
}
long microsecondsToInches(long microseconds) {
return microseconds / 74 / 2;
}
```<issue_comment>username_1: Nothing inside your while loop will ever change the values of inches3 or inches2. If they are less than 8 and 12 respectively when you enter the while loop, then they'll still be that way every time it comes back around to check. So this is an infinite loop.
Upvotes: 1 <issue_comment>username_2: Since the values are never updated in the while loop, you should add the code that reads the sensors inside your while loop, like this:
```
#include // Import the serial Library
#include
#include
#include "utility/Adafruit\_MS\_PWMServoDriver.h"
// Create the motor shield object with the default I2C address
Adafruit\_MotorShield AFMS = Adafruit\_MotorShield();
// Select which 'port' M1, M2, M3 or M4. In this case, M1
Adafruit\_DCMotor \*FL= AFMS.getMotor(1); //Front left motor
Adafruit\_DCMotor \*FR= AFMS.getMotor(4); //Front right motor
int left\_trig = 8;
int left\_echo = 9;
int mid\_trig = 5;
int mid\_echo = 6;
int right\_trig = 3;
int right\_echo = 4;
long duration1, duration2, duration3, inches1, inches2, inches3;
void setup()
{
Serial.begin(9600);
AFMS.begin(); // create with the default frequency 1.6KHz
//This establishes the sensors as inputs and outputs
pinMode(left\_trig,OUTPUT);
pinMode(left\_echo,INPUT);
pinMode (mid\_trig,OUTPUT);
pinMode(mid\_echo,INPUT);
pinMode (right\_trig,OUTPUT);
pinMode(right\_echo,INPUT);
FL->setSpeed(150);
FL->run(FORWARD);
//FL->run(RELEASE);
FR->setSpeed(150);
FR->run(BACKWARD);
//FR->run(RELEASE);
}
void loop()
{
digitalWrite(left\_trig, LOW);
delayMicroseconds(2);
digitalWrite(left\_trig, HIGH);
delayMicroseconds(10);
duration1 = pulseIn(left\_echo,HIGH);
// pinMode (mid\_trig,OUTPUT);
digitalWrite(mid\_trig, LOW);
delayMicroseconds(2);
digitalWrite(mid\_trig, HIGH);
delayMicroseconds(10);
duration2 = pulseIn(mid\_echo, HIGH);
// pinMode (right\_trig,OUTPUT);
digitalWrite(right\_trig, LOW);
delayMicroseconds(2);
digitalWrite(right\_trig, HIGH);
delayMicroseconds(10);
duration3 = pulseIn(right\_echo, HIGH);
// convert the time into inches
inches1 = microsecondsToInches(duration1);
inches2 = microsecondsToInches(duration2);
inches3 = microsecondsToInches(duration3);
FL->setSpeed(150);
FL->run(FORWARD);
FR->setSpeed(150);
FR->run(BACKWARD);
Serial.print(inches1);
Serial.print("in,\t");
Serial.print(inches2);
Serial.print("in,\t");
Serial.print(inches3);
Serial.print("in");
Serial.println();
while(inches3 <=8 && inches2 <=12)
{
// re-trigger the sensors so the readings actually change
digitalWrite(mid\_trig, LOW);
delayMicroseconds(2);
digitalWrite(mid\_trig, HIGH);
delayMicroseconds(10);
duration2 = pulseIn(mid\_echo, HIGH);
digitalWrite(right\_trig, LOW);
delayMicroseconds(2);
digitalWrite(right\_trig, HIGH);
delayMicroseconds(10);
duration3 = pulseIn(right\_echo, HIGH);
// convert the time into inches
inches2 = microsecondsToInches(duration2);
inches3 = microsecondsToInches(duration3);
// Serial.print(inches1);
// Serial.print("win,\t");
// Serial.print(inches2);
// Serial.print("win,\t");
// Serial.print(inches3);
// Serial.print("win");
// Serial.println();
FL->setSpeed(120);
FL->run(BACKWARD);
FR->setSpeed(120);
FR->run(BACKWARD);
}
FL->setSpeed(150);
FL->run(FORWARD);
FR->setSpeed(150);
FR->run(BACKWARD);
delay(50);
}
long microsecondsToInches(long microseconds)
{
return microseconds / 74 / 2;
}
```
Upvotes: 0
|
2018/03/16
| 797 | 2,770 |
<issue_start>username_0: I have the below code in HTML5 for validating some names
```
<input type="text" id="desiredNames" pattern="[a-z0-9.-]{3,25}" required>
```
It works, but I want to add to it an extra checking feature using regex, as follow:
1. the name is not allowed to start with a period (.) or a minus (-)
2. the name is not allowed to end with a period (.) or a minus (-)
I don't know how to use ^, $, ? to make this happen. (I tried a few examples seen in the forums.)
Also to that html code I have the below js code
```
(function checkName() {
let wantedNames = document.getElementById('desiredNames');
let form = document.getElementById('form');
let elem = document.createElement('div');
elem.id = 'notify';
elem.style.display = 'none';
form.appendChild(elem);
wantedNames.addEventListener('invalid', function(event){
event.preventDefault();
if ( ! event.target.validity.valid ) {
wantedNames.className = 'invalid animated shake';
elem.textContent = 'Name can contains small letters, numbers, dot . and minus -,but not at the beginning or at the end, ie escu.ionel-74';
elem.className = 'error';
elem.style.display = 'block';
}
});
wantedNames.addEventListener('input', function(event){
if ( 'block' === elem.style.display ) {
wantedNames.className = '';
elem.style.display = 'none';
}
});
})();
```
Any tips and some good resources for regex patterns?
Thx<issue_comment>username_1: You're probably looking for this:
```
^[^.-].{1,23}[^.-]$
```
`^` means starting at the beginning of string, except when inside `[]` which means `not`
`.` means any character except newline, but inside `[]` means period
`-` is generally used as a range inside `[]` (`A-z`) except when it is placed at the end where it just means the minus sign.
Probably a better explainer:
<https://regex101.com/r/OYRnMO/1>
Upvotes: 2 [selected_answer]<issue_comment>username_2: You can try the below option.
"
"
It's working for me.
Upvotes: 0 <issue_comment>username_3: There are a lot of ways to achieve that, but the shortest would be to add `\b` word boundaries on both ends to require word chars at the start and end of the input. Note you do not need to use `^` (start of string) and `$` (end of string) anchors, because they are added automatically by the HTML5 engine when compiling the regex from the pattern.
So, you may use
```
pattern="\b[a-z0-9.-]{3,25}\b"
```
and it will be parsed as `/^(?:\b[a-z0-9.-]{3,25}\b)$/` matching 3 to 25 lowercase ASCII letters (add `A-Z` after `[` to also match uppercase ASCII letters), digits, `.` or `-`, but no `.` or `-` will be allowed at the start and end of the string.
See the demo:
```css
input:valid {
color: black;
}
input:invalid {
color: red;
}
```
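For completeness, the input element for this demo would be something like the following (using the pattern from this answer; the placeholder value is just an example):

```html
<input type="text" pattern="\b[a-z0-9.-]{3,25}\b" required placeholder="escu.ionel-74">
```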
Upvotes: 2
|
2018/03/16
| 689 | 2,837 |
<issue_start>username_0: I have started building a PWA without fully understanding the internal meaning of it. From what I found, a PWA progressively gives web apps a native-app-like look and feel using the following technologies:
1. Manifest
2. Service workers
3. Designing App shell
4. Push notification support for web apps
I find it difficult to understand the inner meaning of Progressive in
PWA. Any help is much appreciated.<issue_comment>username_1: "Progressive Web Apps" has been foisted on the Web App development community by those of the left (especially California) who equate "progressive" with "liberal".
Most everyone else uses just "Web App" or "Ultimate Web App" to be annoying.
The "progressive" adjective could apply to the best-endeavour nature of a PWA in that it will degrade gracefully if certain functionality is unavailable or if that functionality is embargoed due to lack of User Permissions. Analogous to the limp-home-mode in your car.
I am told that Chrome now treats PWAs as first-class Android apps but I have yet to experience them in the App Drawer. Lighthouse struggles to distinguish an elbow let alone a PWA and is more someone's little red book of Web-Apps.
IMHO if it launches from the Homescreen or Desktop via an icon then it *is* a Web App.
Upvotes: 0 <issue_comment>username_2: From what I have understood a progressive web app needs to support progressive enhancement.
Progressive enhancement is a strategy that begins with common browser features, and then adds in functionality or enhancements when the user's browser supports more modern technologies.
Features like service worker, notifications will be added only if the browser supports it and not break the application in older browsers.
This enables your application to run properly in a wide range of browsers with decent user experience.
[You can find more details here](https://developers.google.com/web/ilt/pwa/introduction-to-progressive-web-app-architectures)
Upvotes: 2 <issue_comment>username_3: As explained in [this answer](https://stackoverflow.com/a/49366789/1057093), not all PWA features are supported in every browser/platform yet (Mar 2018). Also, new PWA features will keep coming as the technology evolves. That said, to have a PWA site you don't have to implement all PWA features to take advantage of what it provides or to call it a PWA site.
This is not like a standards compliance, where you have to be 100% compliant to call it a "Compliant" site.
You can start with the basic things, like having a manifest file to support "Add to home screen" and maybe offline capability using service workers, and your site is already taking good advantage of PWA.
**You can keep *"Progressing"* on more features as you continue to develop your app. And hence it's a *"Progressive"* web application :)**
Upvotes: 2
|
2018/03/16
| 321 | 1,245 |
<issue_start>username_0: showing this error: **XHR failed to load: AJAX**
This is ajax code:
```
$.ajax({
url:"php echo base_url();?food/register_user/",
type:"ajax",
traditional:true,
data:{data},
dataType:"json",
success:function(data){
console.log(data);
alert(data);
},
error: function() {
alert("Error");
}
});
```<issue_comment>username_1: I think the closing braces on the error function are the problem. It should look like this:
```
$.ajax({
url: "php echo base_url();?food/register_user/",
type: "ajax",
traditional: true,
data: {
data
},
dataType: "json",
success: function(data) {
console.log(data);
alert(data);
},
error: function() {
alert("Error");
}
});
```
Upvotes: 1 <issue_comment>username_2: Try this
```
$.ajax({
url:"php echo base_url();?food/register_user/",
type:"post",
traditional:true,
data:{data},
dataType:"json",
success:function(data){
console.log(data);
alert(data);
},
error: function() {
alert("Error");
}
});
```
Upvotes: 2
|
2018/03/16
| 483 | 1,745 |
<issue_start>username_0: I want to run a Laravel cron job as a command on Windows 10 using Task Scheduler. I tried to create a basic task in the scheduler; it shows as running, but no data is added to the db. When I run "php artisan schedule:run" it works perfectly. I am using Laravel and Homestead.
[](https://i.stack.imgur.com/13WB7.jpg)
I added these two lines while creating the task in the scheduler:
C:\xampp\php\php.exe (why do we have to add this when I don't even use xampp anymore? I think this is the part which is giving issues)
C:\projects\project-name\artisan schedule:run
I would really appreciate it if someone could guide me through this, thanks.<issue_comment>username_1: Update your task scheduler command to this:
```
C:\xampp\php\php.exe C:\projects\project-name\artisan schedule:run
```
`C:\xampp\php\php.exe` does not mean using xampp, we're just using *php* here which is coincidentally found inside your xampp folder, because we need the executable php to run the file `artisan` with the parameter `schedule:run` which is found in `C:\projects\project-name\`
You can add an environment variable for your executable php so you could just write the command as `php C:\path\to\artisan schedule:run`.
Also, try to see the logs of task scheduler, so you can see what it tried to do.
As per your issue. Yes, `C:\xampp\php\php.exe` causes an issue. Try typing in your `cmd` that command. What happens? It's just paused there. That's also what's happening in your scheduled task.
Upvotes: 3 [selected_answer]<issue_comment>username_2: Try to run this command,
```
php artisan schedule:work
```
Upvotes: 1
|
2018/03/16
| 265 | 901 |
<issue_start>username_0: I'm trying to surround a link with quotes when using Asciidoctor and I can't get it to work. It either includes the quotes in the link:
```
http://link.to.something["Title"]
```
Or it renders the raw text:
```
"http://link.to.something[Title]"
```
Does anyone know the syntax so it can render like the following (HTML version)?
```
"[Title](http://link.to.something)"
```<issue_comment>username_1: This is a case where the `link:` macro prefix is needed.
```
"link:http://link.to.something[Title]"
```
You can think of the `link:` macro prefix as a way to force a link. It's kind of like an unconstrained link. It has stronger precedence than the `http:` prefix.
Upvotes: 3 [selected_answer]<issue_comment>username_1: If you put smart quotes around the link, then the `link:` macro prefix is not needed.
```
"`link:http://link.to.something[Title]`"
```
Upvotes: 2
|
2018/03/16
| 408 | 1,426 |
<issue_start>username_0: I have a button link that looks like this in the HTML source:
```
Cert
```
On the same page I have the following script defined. I want to redirect if the user click OK in the confirm popup. But nothing pops up when I click on the button. Any idea why?
(I tried to format the script nicer, but if I try putting the `window.location.assign` line on a separate line the editor here won't let me format it.)
```
function ewAttestConfirm(idx) {
var r = confirm("Click OK to attest that you viewed this entire program.");
if (r == true) {window.location.assign("https://example.com/ewAttestToHoursAndRedirectToCert/?idx=idx")}
}
```<issue_comment>username_1: your javascript function should be like this,
```
function ewAttestConfirm(idx) {
var r = confirm("Click OK to attest that you viewed this entire program.");
if (r) {
window.location.assign("https://example.com/ewAttestToHoursAndRedirectToCert/?idx="+idx);
}
}
```
your variable *idx* should not be inside the string.And you can check the value true just by *if(variableName)*
you can see the working example below,
<https://jsbin.com/ritikivave/edit?html,js,output>
Upvotes: 2 [selected_answer]<issue_comment>username_2: ```
function ewAttestConfirm(idx) {
if (window.confirm("Click OK to attest that you viewed this entire program."))
{
window.location.assign("https://google.com)
}
}
```
This code will work
Upvotes: 2
|
2018/03/16
| 1,092 | 4,108 |
<issue_start>username_0: The description of the `Task.WhenAny` method says, that it will return the first task finished, even if it's faulted. Is there a way to change this behavior, so it would return first successful task?<issue_comment>username_1: Something like this should do it (may need some tweaks - haven't tested):
```
private static async Task WaitForAnyNonFaultedTaskAsync(IEnumerable tasks)
{
IList customTasks = tasks.ToList();
Task completedTask;
do
{
completedTask = await Task.WhenAny(customTasks);
customTasks.Remove(completedTask);
} while (completedTask.IsFaulted && customTasks.Count > 0);
return completedTask.IsFaulted?null:completedTask;
}
```
Upvotes: 3 [selected_answer]<issue_comment>username_2: First off, from my review there is no direct way of doing this without waiting for all the tasks to complete then find the first one that ran successfully.
To start with I am not sure of the edge cases that will cause issues that I havent tested, and given the source code around tasks and contiunuation requires more than an hour of review I would like to start to think around the follow source code. Please review my thoughts at the bottom.
```
public static class TaskExtensions
{
public static async Task WhenFirst(params Task[] tasks)
{
if (tasks == null)
{
throw new ArgumentNullException(nameof(tasks), "Must be supplied");
}
else if (tasks.Length == 0)
{
throw new ArgumentException("Must supply at least one task", nameof(tasks));
}
int finishedTaskIndex = -1;
for (int i = 0, j = tasks.Length; i < j; i++)
{
var task = tasks[i];
if (task == null)
throw new ArgumentException($"Task at index {i} is null.", nameof(tasks));
if (finishedTaskIndex == -1 && task.IsCompleted && task.Status == TaskStatus.RanToCompletion)
{
finishedTaskIndex = i;
}
}
if (finishedTaskIndex == -1)
{
var promise = new TaskAwaitPromise(tasks.ToList());
for (int i = 0, j = tasks.Length; i < j; i++)
{
if (finishedTaskIndex == -1)
{
var taskId = i;
#pragma warning disable CS4014 // Because this call is not awaited, execution of the current method continues before the call is completed
//we dont want to await these tasks as we want to signal the first awaited task completed.
tasks[i].ContinueWith((t) =>
{
if (t.Status == TaskStatus.RanToCompletion)
{
if (finishedTaskIndex == -1)
{
finishedTaskIndex = taskId;
promise.InvokeCompleted(taskId);
}
}
else
promise.InvokeFailed();
});
#pragma warning restore CS4014 // Because this call is not awaited, execution of the current method continues before the call is completed
}
}
return await promise.WaitCompleted();
}
return finishedTaskIndex > -1 ? tasks[finishedTaskIndex] : null;
}
class TaskAwaitPromise
{
IList<Task> \_tasks;
int \_taskId = -1;
int \_taskCount = 0;
int \_failedCount = 0;
public TaskAwaitPromise(IList<Task> tasks)
{
\_tasks = tasks;
\_taskCount = tasks.Count;
GC.KeepAlive(\_tasks);
}
public void InvokeFailed()
{
\_failedCount++;
}
public void InvokeCompleted(int taskId)
{
if (\_taskId < 0)
{
\_taskId = taskId;
}
}
public async Task<Task> WaitCompleted()
{
await Task.Delay(0);
while (\_taskId < 0 && \_taskCount != \_failedCount)
{
}
return \_taskId >= 0 ? \_tasks[\_taskId] : null;
}
}
}
```
The code is lengthy, I understand, and may have lots of issues; however, the concept is that you need to execute all the tasks in parallel and find the first resulting task that completed successfully.
Consider that we need to make a continuation block for all the tasks and be able to return out of the continuation block back to the original caller. My main concern (other than the fact I can't remove the continuation) is the `while()` loop in the code. Probably best to add some sort of CancellationToken and/or Timeout to ensure we don't deadlock while waiting for a completed task. In this case if zero tasks complete we never finish this block.
**Edit**
I did change the code slightly to signal the promise for a failure so we can handle a failed task. Still not happy with the code but it's a start.
Upvotes: 0
|
2018/03/16
| 868 | 3,418 |
<issue_start>username_0: I'm trying to implement Codable for a class which contains an NSAttributedString, but I get errors at compile time:
```
try container.encode(str, forKey: .str)
```
>
> error ambiguous reference to member 'encode(\_:forKey:)'
>
>
>
and
```
str = try container.decode(NSMutableAttributedString.self, forKey: .str)
```
>
> error: No 'decode' candidates produce the expected contextual type 'NSAttributedString'
>
>
>
I can get around it by using the NSData from the string, but I would have thought this should work:
```
class Text : Codable {
var str : NSAttributedString
enum CodingKeys: String, CodingKey {
case str
}
func encode(to encoder: Encoder) throws {
var container = encoder.container(keyedBy: CodingKeys.self)
try container.encode(str, forKey: .str). <-- error ambiguous reference to member 'encode(_:forKey:)'
}
required init(from decoder: Decoder) throws {
let container = try decoder.container(keyedBy: CodingKeys.self)
str = try container.decode(NSMutableAttributedString.self, forKey: .str) <-- error: No 'decode' candidates produce the expected contextual type 'NSAttributedString'
}
}
```<issue_comment>username_1: If you want to decode & encode the textual content only and convert it back from/to `NS(Mutable)AttributedString`, you can try something like this:
```
class Text : Codable {
var str : NSMutableAttributedString?
enum CodingKeys: String, CodingKey {
case str
}
func encode(to encoder: Encoder) throws {
var container = encoder.container(keyedBy: CodingKeys.self)
try? container.encode(str?.string, forKey: .str)
}
required init(from decoder: Decoder) throws {
let container = try decoder.container(keyedBy: CodingKeys.self)
if let content = try? container.decode(String.self, forKey: .str){
str = NSMutableAttributedString(string: content)
// ADD any attributes here if required...
}
}
}
```
Upvotes: -1 <issue_comment>username_2: `NSAttributedString` doesn’t conform to `Codable`, so you can’t do this directly.
If you just want to store the data you can implement a simple wrapper around your attributed string that conforms to Codable and use that in your data class. This is rather easy since you can convert the attributed string to `Data` using [`data(from:documentAttributes)`](https://developer.apple.com/documentation/foundation/nsattributedstring/1534090-data) when encoding. When decoding you first read the data and then initialize your attributed string using [`init(data:options:documentAttributes:)`](https://developer.apple.com/documentation/foundation/nsattributedstring/1524613-init). It supports [various data formats](https://developer.apple.com/documentation/foundation/nsattributedstring/documenttype) including HTML, RTF and Microsoft Word.
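A minimal sketch of such a wrapper (the type name is made up, RTF is chosen as the storage format, and error handling is kept simple):

```
import UIKit // AppKit provides the same NSAttributedString additions on macOS

struct CodableAttributedString: Codable {
    let value: NSAttributedString

    init(_ value: NSAttributedString) { self.value = value }

    init(from decoder: Decoder) throws {
        // read the raw Data and rebuild the attributed string from RTF
        let data = try decoder.singleValueContainer().decode(Data.self)
        value = try NSAttributedString(
            data: data,
            options: [.documentType: NSAttributedString.DocumentType.rtf],
            documentAttributes: nil)
    }

    func encode(to encoder: Encoder) throws {
        // serialize the attributed string to RTF data, then encode the Data
        let data = try value.data(
            from: NSRange(location: 0, length: value.length),
            documentAttributes: [.documentType: NSAttributedString.DocumentType.rtf])
        var container = encoder.singleValueContainer()
        try container.encode(data)
    }
}
```

Your data class can then store a `CodableAttributedString` instead of the attributed string itself.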
Resist the temptation to add a Codable conformance to NSAttributedString in an extension. This will cause trouble when Apple adds this conformance or you add another library that does this.
If, on the other hand, you need this to communicate with a server or some other program, you need to exactly match the required data format. If you only need a plain string you probably should not use NSAttributedString at all. If it is some other format like markdown you could implement a wrapper that does the necessary transforms.
Upvotes: 4 [selected_answer]
|
2018/03/16
| 400 | 1,521 |
<issue_start>username_0: Trying to create SSO for AWS keeping Azure users as the source of truth. I followed the tutorials below.
1. <https://learn.microsoft.com/en-us/azure/active-directory/active-directory-saas-amazon-web-service-tutorial>
2. <http://blog.flux7.com/aws-best-practice-azure-ad-saml-authentication-configuration-for-aws-console>
Is there anything I need to be more precise about with user attributes in Azure? Is there anything to enable in AWS to accept the SSO?
Login is successful(Can see signin's in Azure AD) but it displays message "Your request included an invalid SAML response. To logout, click here
". Any idea what has gone wrong ?
<issue_comment>username_1: Yes, I think you are on the right path. It seems that you are missing the custom attributes which we are suggesting to add for your application. Those are Role and RoleSessionName. Please see the step #5 in my article <https://learn.microsoft.com/en-us/azure/active-directory/active-directory-saas-amazon-web-service-tutorial> and make sure that you use the same casing and namespace for the claims. With that the integration should work correctly.
Upvotes: 1 <issue_comment>username_2: Maybe I am late to this post. As Jeevan mentioned, you are missing custom attributes that you need to add. I have been struggling with the same and found [this well explained video](https://www.youtube.com/watch?v=IktAr8sHlc4&feature=youtu.be). I hope this will help anyone who is struggling with this issue.
Upvotes: 0
|
2018/03/16
| 1,607 | 4,331 |
<issue_start>username_0: Can someone have a look at my code and tell me what the problem is?
I am trying to build a jQuery function that counts all list item elements that contain a specific text.
The bit where it should count all "li"s works:
$(allLip).text(allLi.length);
but the function where it is supposed to count all "li"s with a specific text doesn't work.
Is there any way that I can achieve this with pure JS?
Thanks in advance.
```js
var allLi = $("#lists ol li");
var allLip = $("#all-li");
var numP = $("#num-p");
var numS = $("#num-s");
var numP12 = $("#num-p-12");
var pText = "(p)";
var sText = "(s)";
var p12Text = "(p-12)"
$(allLip).text(allLi.length);
$(allLi).each(function(){
var pCounter = 0;
var sCounter = 0;
var p12Counter = 0;
if($(allLi).has(pText)){
pCounter++;
$(numP).text(pCounter);
}
if($(allLi).has(sText)){
sCounter++;
$(numS).text(sCounter);
}
if($(allLi).has(p12Text)){
p12Counter++;
$(numP12).text(p12Counter);
}
});
```
```html
1. list 1 - (p)
2. list 2 - (s)
3. list 3 - (p-12)
4. list 4 - (p)
5. list 5 - (p)
6. list 6 - (s)
7. list 7 - (p-12)
8. list 8 - (p-12)
```<issue_comment>username_1: When you are looping thru the items, you need to pass the current element in the loop and evaluate it (you are currently evaluating all the items in every loop)
See demo below:
```js
var allLi = $("#lists ol li");
var allLip = $("#all-li");
var numP = $("#num-p");
var numS = $("#num-s");
var numP12 = $("#num-p-12");
var pText = "(p)";
var sText = "(s)";
var p12Text = "(p-12)"
// get all LI elements
$(allLip).text(allLi.length);
// init counters
var pCounter = 0;
var sCounter = 0;
var p12Counter = 0;
// loop thru LI items
$(allLi).each(function(idx, liItem) {
// if the item has the pText, count it
if ($(liItem).text().indexOf(pText) != -1) {
pCounter++;
}
// if the item has the sText, count it
if ($(liItem).text().indexOf(sText) != -1) {
sCounter++;
}
// if the item has the p12Text, count it
if ($(liItem).text().indexOf(p12Text) != -1) {
p12Counter++;
}
});
// display results
$(numP).text(pCounter);
$(numP12).text(p12Counter);
$(numS).text(sCounter);
```
```html
1. list 1 - (p)
2. list 2 - (s)
3. list 3 - (p-12)
4. list 4 - (p)
5. list 5 - (p)
6. list 6 - (s)
7. list 7 - (p-12)
8. list 8 - (p-12)
LIs:
LIs w/P:
LIs w/S:
LIs w/P-12:
```
Upvotes: 1 <issue_comment>username_2: There is no need for a loop, you can simply use jQuery [`:contains`](https://api.jquery.com/contains-selector/) selector with [`filter()`](https://api.jquery.com/filter) method.
```js
var allLi = $("#lists ol li");
var allLip = $("#all-li");
var numP = $("#num-p");
var numS = $("#num-s");
var numP12 = $("#num-p-12");
var pText = "(p)";
var sText = "(s)";
var p12Text = "(p-12)"
allLip.text(allLi.length);
numP.text(allLi.filter(':contains("' + pText + '")').length);
numS.text(allLi.filter(':contains("' + sText + '")').length);
numP12.text(allLi.filter(':contains("' + p12Text + '")').length);
```
```html
1. list 1 - (p)
2. list 2 - (s)
3. list 3 - (p-12)
4. list 4 - (p)
5. list 5 - (p)
6. list 6 - (s)
7. list 7 - (p-12)
8. list 8 - (p-12)
```
Upvotes: 3 [selected_answer]<issue_comment>username_3: Hope this one will help you.
```
$(document).ready(function () {
var allLip = $("#all-li");
var numP = $("#num-p");
var numS = $("#num-s");
var numP12 = $("#num-p-12");
var pText = "(p)";
var sText = "(s)";
var p12Text = "(p-12)"
var allLi = 'li';
$(allLip).text($(allLi).length);
var pCounter = 0;
var sCounter = 0;
var p12Counter = 0;
var params = [pCounter, sCounter, p12Counter];
$(allLi).each(function () {
if ($(this).is(':contains("' + pText + '")')) {
params[0]++;
}
if ($(this).is(':contains("' + sText + '")')) {
params[1]++;
}
if ($(this).is(':contains("' + p12Text + '")')) {
params[2]++;
}
}, params);
$(numP).text(params[0]);
$(numS).text(params[1]);
$(numP12).text(params[2]);
});
```
Upvotes: 1
|
2018/03/16
| 1,052 | 3,767 |
<issue_start>username_0: I have a problem with retrieving data from Ajax. I want to add an "X" button to close the popup after showing the data.
I tried to add a button like:
```
jQuery(".popup").append('x');
```
into the popup so the user can close it.
My code looks like:
```
function getData(domain){
var dataString = "domain=" + domain + "&security="+ mhdomain.security + "&action=getdomain" ;
jQuery.ajax({
action: "getDomain",
type: "post",
url: mhdomain.ajaxurl,
dataType: "json",
data: dataString,
beforeSend: function(xhr){
jQuery("#wrapper").append('');
jQuery(".popup").append("");
jQuery(".popup").fadeIn();
jQuery(".popup").append('x');
},
success : function(data){
jQuery(".popup").html(data.data);
},
});
};
function popupout(popup){
jQuery(popup).fadeOut(1000);
jQuery(popup).remove();
}
```
My CSS code:
```
.popup {
position: fixed;
display: none;
top: 50%;
left: 50%;
transform: translate(-50%, -50%);
z-index: 99999;
max-width: 780px;
width: 100%;
max-height: 480px;
height: 100%;
display: inline-block;
cursor: pointer;
background-color: white;
padding: 30px 50px;
color: black;
overflow-y: scroll;
border: 1px solid #f2f2f2;
}
```<issue_comment>username_1: In your `beforeSend` event you are appending the html which is fine. But it gets replaced in success event.
The issue is in this line:
```
jQuery(".popup").html(data.data);
```
Instead, just append data.data here as well, which will fix the issue, like this:
```
jQuery(".popup").append(data.data);
```
---
For your question in the comment, you can still use the beforeSend event, like this:
```
beforeSend:function(){
// after your codes
jQuery(".popup").append("");
}
success:function(){
// after your codes
jQuery(".popup").find('.loading').remove();
}
```
You can tweak the appearance of the images using CSS, but this shares the idea of how to achieve it.
Upvotes: 3 [selected_answer]<issue_comment>username_2: The close button is being overwritten by `jQuery(".popup").html(data.data);` because you are replacing everything inside of `.popup`.
I have some of your code in the snippet below using .append instead of .html and your button appears and it works.
I would just put a placeholder in the popup that houses the loading gif and replace that div's html with the data.
```js
function getData(){
jQuery("#wrapper").append('');
jQuery(".popup").append("");
jQuery(".popup").fadeIn();
jQuery(".popup").append('x');
jQuery(".popup").append('popup data');
};
function popupout(popup){
jQuery(popup).fadeOut(1000);
jQuery(popup).remove();
}
getData();
```
```html
```
Upvotes: 0 <issue_comment>username_3: You can add your `close button` html in success.
```
function getData(domain){
var dataString = "domain=" + domain + "&security="+ mhdomain.security + "&action=getdomain" ;
jQuery.ajax({
action: "getDomain",
type: "post",
url: mhdomain.ajaxurl,
dataType: "json",
data: dataString,
beforeSend: function(xhr){
jQuery("#wrapper").append('');
jQuery(".popup").append("");
jQuery(".popup").fadeIn();
jQuery(".popup").append('x');
},
success : function(data){
jQuery(".popup").html(''+data.data);
},
});
};
function popupout(popup){
jQuery(popup).fadeOut(1000);
jQuery(popup).remove();
}
```
Upvotes: 1
|
2018/03/16
| 1,197 | 5,073 |
<issue_start>username_0: I'm just starting out in React and am building an app that incorporates a video search through Youtube's API. I am trying to dynamically render video data in my "search-results" div after querying the API for videos.
This "search-results" div is supposed to render an array of JSX, titled searchResults, that is stored as a state key-value pair of the AddSong component. Upon component construction, this array is empty, and it is supposed to be populated in the .then(function(response){} function of the axios get request (in my search function).
In this function, I am extracting relevant data from the API response item by item. For each item, I format data into JSX, and store it into a local array titled vidComponents. My problem arises when I call this.setState({searchResults: vidComponents}). 'This' is undefined. The exact error is stated below:
TypeError: Cannot read property 'setState' of undefined
After doing some research, I learned that most people get this error due to not binding 'this' to relevant functions correctly in the constructor, but this isn't the case for me. 'This' is bound to my search function in the constructor as follows:
this.search = this.search.bind(this)
Does anyone know what is causing this error? Here's my code:
```
import React, { Component } from 'react';
import axios from 'axios';
import { FormGroup, FormControl, Button } from 'react-bootstrap';
import './add-song-component.css';
class AddSong extends React.Component{
constructor(props, context){
super(props, context);
// Must bind this to functions in order for it to be accessible within them
this.handleTextChange = this.handleTextChange.bind(this);
this.search = this.search.bind(this);
// Object containing state variables
this.state = {
searchText: '',
searchResults: [],
nextPageToken: ''
};
}
// Keeps track of the current string in the search bar
handleTextChange(e){
this.setState({ searchText: e.target.value });
}
// Runs a search for 10 youtube videos through youtube's API
search(e){
// Initialize important data for formatting get request url
var api = this.props.apiKey;
var query = this.state.searchText;
var maxResults = 10;
// Execute get request on Youtube's search API
axios.get('https://www.googleapis.com/youtube/v3/search', {
params: {
part: 'snippet',
type: 'video',
order: 'viewCount',
key: api,
q: query,
maxResults: maxResults
}
})
.then(function (response) {
console.log(response.data);
var vidComponents = [];
// Loop through video data, extract relevant info
var i = 0;
for (let vidInfo of response.data.items) {
vidComponents.push(
title: {vidInfo.snippet.title}
channel: {vidInfo.snippet.channelTitle}
id: {vidInfo.id.videoId}
thumbnailUrl: {vidInfo.snippet.thumbnails.high.url}
);
i++;
}
// Set searchResults state to account for new results...
// Set nextPageToken state based on value from youtube's response
// *** used for retrieving more videos from initial query ***
this.setState({
searchResults: vidComponents,
nextPageToken: response.data.nextPageToken
});
})
.catch(function (error) {
console.log(error);
});
}
render(){
return(
Search
{this.state.searchResults}
);
}
}
export default AddSong;
```<issue_comment>username_1: ```
handleTextChange = e => {
this.setState({ searchText: e.target.value });
}
```
This syntax will bind `this` to the current class, while in your case `this` points to the current function scope, so you are getting `undefined`.
Upvotes: 0 <issue_comment>username_2: you are missing the `this` binding on your `handleTextChange` function. So, it's not invoked with the component's `this` object.
**Solution 1: using binding inside the constructor**
```
constructor(props, context){
super(props, context);
//...
this. handleTextChange = this.handleTextChange.bind(this);
}
```
Solution 2: using an **[`arrow function`](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Functions/Arrow_functions)**.
```
handleTextChange =(e)=>{
this.setState({ searchText: e.target.value });
}
```
Upvotes: -1 <issue_comment>username_3: IMHO both your `search` and `handleTextChange` functions are bound well in the constructor. That shouldn't be a problem. But I do see a `setState` inside your `search` func:
`axios.get(...).then(function() { this.setState(...) }`
It would be better off if you can use an arrow function instead !
`axios.get(...).then(() => { this.setState(...) }`
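Applied to the `search` function from the question, the success handler would look roughly like this (a sketch — only the callback style changes; `vidComponents` is built exactly as before):

```
axios.get('https://www.googleapis.com/youtube/v3/search', { params: { /* same as before */ } })
  .then((response) => {
    // ... build vidComponents from response.data.items as before ...
    // Arrow functions don't create their own `this`, so `this` is still
    // the component instance here.
    this.setState({
      searchResults: vidComponents,
      nextPageToken: response.data.nextPageToken
    });
  })
  .catch((error) => {
    console.log(error);
  });
```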
Upvotes: 2 [selected_answer]
|
2018/03/16
| 390 | 1,402 |
<issue_start>username_0: I need to write code to jump to a particular given line number in gVim. But I need to do this using C++. Is there any way to do it?
There are ways to open gvim by executing Linux commands from C++ code. But is there a way to execute the gVim command using the same C++ code?<issue_comment>username_1: Do you mean the function `system` that runs linux commands? You can use it for any command and command parameters. Below is an example; modify it according to your requirements.
```
#include <cstdlib>   // for system()
#include <sstream>   // for std::stringstream
#include <string>

int line\_no = 576;
std::string file\_name = "hello.c"; // example file name
std::stringstream command;
command << "gvim +" << line\_no << " " << file\_name;
system(command.str().c\_str());
```
It will execute `gvim +576 file_name`, that opens a file at a particular line in gvim.
Upvotes: 2 [selected_answer]<issue_comment>username_2: You can ask `vim` to run commands from the command line. See [the list of vim command line options](http://vimdoc.sourceforge.net/htmldoc/starting.html#startup-options).
To position cursor on the line `num` run `vim +num filename`, for example `vim +578 filename` jumps to line 578 in the named file. `vim + filename` jumps to the last line.
Search: `vim +/pattern filename` will position cursor on the first line containing "pattern".
Other commands: `vim -c command filename` executes the ex (`:`) command. You can run up to 10 commands (up to 10 `-c` options).
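The same options can be combined with the `system` approach from the accepted answer, e.g. (the file name and search pattern here are just examples):

```
#include <cstdlib>
#include <string>

int main() {
    std::string file = "notes.txt";              // example file
    // open the file at the first line containing "TODO"
    std::string cmd = "gvim \"+/TODO\" " + file;
    return std::system(cmd.c_str());
}
```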
Upvotes: 0
|
2018/03/16
| 2,069 | 5,798 |
<issue_start>username_0: I receive data in Excel rows like so:
```
INVOICE NO. 1000001
Plan AAAAA 17371 22.00
Plan BBBBB 31782 0.00
Plan CCCCC 13918 44.00
Total for 1000001 66.00
INVOICE NO. 1000002
Plan AAAAA 31385 0.00
Plan CCCCC 15981 44.00
Total for 1000002 44.00
INVOICE NO. 1000003
Plan BBBBB 13181 0.00
Plan CCCCC 01828 16.00
GARBAGE TEXT
Total for 1000003 16.00
```
I need to make this data display on a separate sheet (or even replace the current sheet) like so:
```
Invoice No | Plan AAAAA | Plan AAAAA Cost | Plan BBBBB | Plan BBBBB Cost | Plan CCCCC | Plan CCCCC Cost | Total
1000001 | Plan AAAAA 17371 | 22.00 | Plan BBBBB 31782 | 0.00 | Plan CCCCC 13918 | 44.00 | 66.00
1000002 | Plan AAAAA 31385 | 0.00 | | | Plan CCCCC 15981 | 44.00 | 44.00
1000003 | | | Plan BBBBB 13181 | 0.00 | Plan CCCCC 01828 | 16.00 | 16.00
```
Some things to note are:
1. I can't change how I receive this data. I wish I could.
2. Each set of invoice starts with `INVOICE NO. XXXXXXX` and ends with `Total for XXXXXXX` followed by its total cost.
3. There are only 3 possible plans AAAAA/BBBBB/CCCCC but not all 3 may appear within an invoice.
4. Sometimes there is garbage text in the middle of the output which needs to be ignored.
I was thinking to create an Excel macro that would go through each line of an Excel sheet, pseudo-code being
1. Create new table with desired headings
2. Find first instance of `INVOICE NO` and add following number to table
3. Check next line; while line does not contain `Total for`, check if it contains `Plan AAAAA`; if so add line to table while splitting last part as cost, if not leave blank; stay on line and check for Plan BBBBB, repeat for BBBBB and CCCCC
How do I loop through every row doing the above (or is there any better way to accomplish this)?<issue_comment>username_1: You can make use of For-Each loops here. It'd go something like this:
```
For Each rw In sheet.Rows
If rw.Cells(1, 1).Value = "" Then
...
End If
Next rw
```
If you do that as explained in your own pseudo code I see no reason it shouldn't work. However, this kind of looks like a question for upwork ;)
Upvotes: 0 <issue_comment>username_2: Not so flexible code, but it works based on the criteria above:
```
Sub test()
Sheets.Add After:=ActiveSheet
Sheets(2).Range("a1").Value = "Invoice No"
Sheets(2).Range("b1").Value = "Plan AAAAA"
Sheets(2).Range("c1").Value = "Plan AAAAA Cost"
Sheets(2).Range("d1").Value = "Plan BBBBB"
Sheets(2).Range("e1").Value = "Plan BBBBB Cost"
Sheets(2).Range("f1").Value = "Plan CCCCC"
Sheets(2).Range("g1").Value = "Plan CCCCC Cost"
Sheets(2).Range("h1").Value = "Total"
sheet2y = 1
For y = 1 To 10000
If Len(Sheets(1).Cells(y, 1).Value) > 0 Then
If LCase(Sheets(1).Cells(y, 1).Value) Like "*invoice*" Then
If sheet2y > 1 Then
Sheets(2).Cells(sheet2y, 8).Value = Sheets(2).Cells(sheet2y, 3).Value + Sheets(2).Cells(sheet2y, 5).Value + Sheets(2).Cells(sheet2y, 7).Value
End If
sheet2y = sheet2y + 1
Sheets(2).Cells(sheet2y, 1).Value = Trim(Split(LCase(Sheets(1).Cells(y, 1).Value), "no.")(1))
End If
If LCase(Sheets(1).Cells(y, 1).Value) Like "*plan*" Then
For sheet2x = 2 To 6 Step 2
If LCase(Sheets(1).Cells(y, 1).Value) Like "*" & LCase(Sheets(2).Cells(1, sheet2x).Value) & "*" Then
Sheets(2).Cells(sheet2y, sheet2x).Value = Sheets(2).Cells(1, sheet2x).Value & " " & Split(Sheets(1).Cells(y, 1).Value, " ")(2)
Sheets(2).Cells(sheet2y, sheet2x + 1).Value = Trim(Split(Sheets(1).Cells(y, 1).Value, " ")(3))
End If
Next sheet2x
End If
Else
Sheets(2).Cells(sheet2y, 8).Value = Sheets(2).Cells(sheet2y, 3).Value + Sheets(2).Cells(sheet2y, 5).Value + Sheets(2).Cells(sheet2y, 7).Value
Exit For
End If
Next y
End Sub
```
Upvotes: 1 <issue_comment>username_3: As is written in the comments in the code, I assumed your data is stored in column A starting at cell `A1`. It will then be replaced by the desired output (you said that is allowed).
Try this code:
```
Sub ParseInvoices()
Dim lastRow As Long, i As Long, invoiceData As Variant, currentRow As Long, currentData As String
currentRow = 2
'get last row of A column - I assumed that there you store your data
lastrow = Cells(Rows.Count, 1).End(xlUp).Row
'read data
invoiceData = Range("A1:A" & lastrow).Value
'clear data from sheet
Columns(1).Clear
'set up table headers
Cells(1, 1) = "Invoice No"
Cells(1, 2) = "Plan AAAAA"
Cells(1, 3) = "Plan AAAAA Cost"
Cells(1, 4) = "Plan BBBBB"
Cells(1, 5) = "Plan BBBBB Cost"
Cells(1, 6) = "Plan CCCCC"
Cells(1, 7) = "Plan CCCCC Cost"
Cells(1, 8) = "Total"
For i = 1 To lastrow
currentData = RTrim(LTrim(invoiceData(i, 1)))
Select Case UCase(Left(currentData, 10))
Case "INVOICE NO"
Cells(currentRow, 1).Value = Mid(currentData, InStrRev(currentData, " "))
Case "PLAN AAAAA"
Cells(currentRow, 2).Value = Left(currentData, InStrRev(currentData, " ") - 1)
Cells(currentRow, 3).Value = Mid(currentData, InStrRev(currentData, " "))
Case "PLAN BBBBB"
Cells(currentRow, 4).Value = Left(currentData, InStrRev(currentData, " ") - 1)
Cells(currentRow, 5).Value = Mid(currentData, InStrRev(currentData, " "))
Case "PLAN CCCCC"
Cells(currentRow, 6).Value = Left(currentData, InStrRev(currentData, " ") - 1)
Cells(currentRow, 7).Value = Mid(currentData, InStrRev(currentData, " "))
Case "TOTAL FOR "
Cells(currentRow, 8).Value = Mid(currentData, InStrRev(currentData, " "))
currentRow = currentRow + 1
End Select
Next
End Sub
```
Upvotes: 3 [selected_answer]
|
2018/03/16
| 2,387 | 6,886 |
<issue_start>username_0: In folder `test`, I create `hello.h`,`hello.c`,`main.c`.
My goal is to create a static lib from `hello.h`, `hello.c` and an executable file from the library and `main.c`.
The following is what I have done.
hello.h:
```
#ifndef HELLO_H
#define HELLO_H
void hello(const char* name);
#endif
```
hello.c:
```
#include <stdio.h>
void hello(const char\* name){
printf("hello %s! \n",name);
}
```
main.c:
```
#include "hello.h"
int main(){
hello("everyone");
return 0;
}
```
In the terminal (in the `test` folder): I run
```
gcc -c hello.c
ar crv libmyhello.a hello.o // to create a static lib
gcc -c main.c
ld -o Cuteee hello.o -lmyhello
>>> ld: cannot find -lmyhello
```
I wonder if anything is wrong?<issue_comment>username_1: You need to provide `-L` to let gcc know where to look for your `-l` libraries:
```
gcc -c hello.c
ar crv libmyhello.a hello.o
gcc -c main.c
gcc main.o -L. -lmyhello -o Cuteee
```
To create the final executable it's enough to use gcc, ld is not needed.
See [this question](https://stackoverflow.com/questions/36680470/when-i-should-use-ld-instead-of-gcc) in order to understand why you probably don't need to use `ld` specifically.
Upvotes: 2 <issue_comment>username_2: the following proposed code:
1. corrects several problems in the posted code and in the command line statements.
2. performs the desired operation
3. cleanly compiles/links
And now the proposed changes to the posted code and command line statements:
```
hello.h
#ifndef HELLO_H
#define HELLO_H
void hello( const char* );
#endif
=======================
hello.c:
#include <stdio.h>
#include "hello.h"
void hello(const char\* name)
{
printf("hello %s! \n",name);
}
========================
main.c:
#include "hello.h"
int main( void )
{
hello("everyone");
return 0;
}
=========================
In terminal (in test folder):
gcc -Wall -Wextra -Wconversion -pedantic -std=gnu11 -c hello.c -o hello.o -I.
ar crv libmyhello.a hello.o
=========================
gcc -Wall -Wextra -Wconversion -pedantic -std=gnu11 -c main.c -o main.o -I.
ld -static main.o -o Cuteee -L. -lmyhello
=========================
./Cuteee
=========================
this should eliminate the error message:
>>> ld: cannot find -lmyhello
```
Upvotes: 1 <issue_comment>username_3: This takes account of your comments:
>
> Then I tried ld -o Cuteee main.o -L. -lmyhello but still fails with ld: warning: cannot find entry symbol \_start; defaulting to 00000000004000b0 ./libmyhello.a(hello.o): In function 'hello': hello.c:(.text+0x1e): undefined reference to 'printf' I am puzzled again.
>
>
>
`gcc` is the GCC tooldriver for compiling and linking C programs.
When you invoke it with options and inputs that signify you want to compile a C
source file, say `hello.c`, it first invokes the GNU C compiler, `cc1`, to compile `hello.c`
file to a temporary assembly file, say `/tmp/cc8bfSqS.s`. It quietly adds to the compiler commandline
various boilerplate options that are invariant for compiling C
on your system, to spare you the trouble.
Then it invokes the GNU assembler, `as`, to assemble `/tmp/cc8bfSqS.s` to the object file
`hello.o`.
You can pick out all of this from the compilation output if you ask `gcc` to be verbose,
e.g.
```
gcc -v -c hello.c
```
When you invoke `gcc` with options and inputs that signify you want to link
object files and possibly libraries into a program or shared library,
it invokes the GCC internal tool [`collect2`](https://gcc.gnu.org/onlinedocs/gccint/Collect2.html) to do it - which
in turn invokes the system linker `ld` - and `gcc` quietly adds to the commandline
many boilerplate options, libraries and object files that are always
required for linking a C language program or shared library, once again to
spare you trouble.
You have used `gcc` to compile `hello.c` and `main.c` and allowed it to Do The Right
Thing behind the scenes. You haven't attempted to invoke `cc1` and `as` yourself.
But in contrast, when you come to link your program, you haven't used `gcc`; you've
invoked `ld` yourself, without any of the boilerplate additions to
the commandline that `gcc` would make. That's why the linkage fails.
If you link your program with `gcc` in verbose mode:
```
gcc -v -o Cuteee main.o -L. -lhello
```
you can pick the `collect2` commandline out of the output, something like:
```
/usr/lib/gcc/x86_64-linux-gnu/7/collect2 \
-plugin /usr/lib/gcc/x86_64-linux-gnu/7/liblto_plugin.so \
-plugin-opt=/usr/lib/gcc/x86_64-linux-gnu/7/lto-wrapper \
-plugin-opt=-fresolution=/tmp/ccgWPdno.res \
-plugin-opt=-pass-through=-lgcc \
-plugin-opt=-pass-through=-lgcc_s \
-plugin-opt=-pass-through=-lc \
-plugin-opt=-pass-through=-lgcc \
-plugin-opt=-pass-through=-lgcc_s \
--sysroot=/ --build-id --eh-frame-hdr -m elf_x86_64 --hash-style=gnu \
--as-needed -dynamic-linker /lib64/ld-linux-x86-64.so.2 \
-pie -z now -z relro -o Cuteee \
/usr/lib/gcc/x86_64-linux-gnu/7/../../../x86_64-linux-gnu/Scrt1.o \
/usr/lib/gcc/x86_64-linux-gnu/7/../../../x86_64-linux-gnu/crti.o \
/usr/lib/gcc/x86_64-linux-gnu/7/crtbeginS.o -L. \
-L/usr/lib/gcc/x86_64-linux-gnu/7 \
-L/usr/lib/gcc/x86_64-linux-gnu/7/../../../x86_64-linux-gnu \
-L/usr/lib/gcc/x86_64-linux-gnu/7/../../../../lib \
-L/lib/x86_64-linux-gnu -L/lib/../lib \
-L/usr/lib/x86_64-linux-gnu \
-L/usr/lib/../lib -L/usr/lib/gcc/x86_64-linux-gnu/7/../../.. \
main.o -lhello -lgcc --as-needed -lgcc_s --no-as-needed -lc -lgcc \
--as-needed -lgcc_s --no-as-needed \
/usr/lib/gcc/x86_64-linux-gnu/7/crtendS.o \
/usr/lib/gcc/x86_64-linux-gnu/7/../../../x86_64-linux-gnu/crtn.o
```
All of those options that are passed to `collect2` are passed through to
`ld`. So if you replace `/usr/lib/gcc/x86_64-linux-gnu/7/collect2` with
`ld` in that monster commandline (or rather the commandline you get on
your own system), you will find that it links your program `./Cuteee`.
That is what linking the program with `gcc` does, over and above:
```
ld -o Cuteee hello.o -lmyhello
```
One of the errors that your linkage attempt fails on:
```
cannot find entry symbol _start
```
is due to the fact that you haven't linked `Scrt1.o` (`/usr/lib/x86_64-linux-gnu/Scrt1.o`,
in the commandline above) which contains the C runtime initialization code for
a dynamically linked C program: it defines the symbol `_start`, whose address is the entry point of the program,
to which the loader passes initial control at runtime, and after program initialization
is complete it calls `main`.
The other linkage error:
```
undefined reference to 'printf'
```
is due to the fact that you haven't linked the standard C library, `-lc`
(`/lib/x86_64-linux-gnu/libc.so.6`).
Programmers don't link with `ld` directly if they don't have to - e.g.
unless they're targeting an application to a bare-metal environment, and you
can see why.
Upvotes: 2 [selected_answer]
|
2018/03/16
| 703 | 2,563 |
<issue_start>username_0: I am using spring boot to send an email. Code snippet from my email service:
```
private @Autowired JavaMailSender mailSender;
```
AND
```
MimeMessage message = mailSender.createMimeMessage();
MimeMessageHelper helper = new MimeMessageHelper(message,
MimeMessageHelper.MULTIPART_MODE_MIXED_RELATED, StandardCharsets.UTF_8.name());
helper.setTo(bo.getToEmails().parallelStream().toArray(String[]::new));
helper.setBcc(bo.getBccEmails().parallelStream().toArray(String[]::new));
helper.setCc(bo.getCcEmails().parallelStream().toArray(String[]::new));
helper.setText(htmlBody, true);
helper.setText(textBody, false);
helper.setSubject(bo.getSubject());
helper.setFrom(new InternetAddress(bo.getFromEmail(),bo.getSenderLabel()));
```
First I set the htmlBody and then textBody
```
helper.setText(htmlBody, true);
helper.setText(textBody, false);
```
it overrides the htmlBody with the textBody. How can I send both the text and html body using `org.springframework.mail.javamail.MimeMessageHelper`? Any update?<issue_comment>username_1: You can use `thymeleaf` as your HTML template engine.
Sample HTML code:
`MySampleHTML.html`
```html
Sample Email
```
Sample Java Code:
```java
public class EmailSample {
@Autowired
private JavaMailSender mailSender;
@Autowired
private TemplateEngine templateEngine; // From Thymeleaf
public void initiateEmailSend() {
String processedHTMLTemplate = this.constructHTMLTemplate();
// Start preparing the email
MimeMessagePreparator preparator = message -> {
MimeMessageHelper helper = new MimeMessageHelper(message, MimeMessageHelper.MULTIPART_MODE_MIXED, "UTF-8");
helper.setFrom("Sample <EMAIL>");
helper.setTo("<EMAIL>");
helper.setSubject("Sample Subject");
helper.setText(processedHTMLTemplate, true);
};
mailSender.send(preparator); //send the email
}
// Fills up the HTML file
private String constructHTMLTemplate() {
Context context = new Context();
context.setVariable("sampleText", "My text sample here");
return templateEngine.process("MySampleHTML", context);
}
}
```
And include `thymeleaf` on your `pom.xml`
```xml
<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-thymeleaf</artifactId>
</dependency>
```
NOTE: Put the `MySampleHTML.html` file on `resources/templates/` folder for `thymeleaf` to see it.
Upvotes: 0 <issue_comment>username_2: instead of
```
helper.setText(htmlBody, true);
helper.setText(textBody, false);
```
Use
```
helper.setText(textBody, htmlBody);
```
Upvotes: 2 [selected_answer]
|
2018/03/16
| 237 | 774 |
<issue_start>username_0: I have a string sequence, say "aby,abraham,issac,rebecca,job,david,daniel", and I need to add a space after each comma.
I bind the value using `ng-bind` and display the result using `ng-show`. I'm unable to use join as it is received as an array from the database.<issue_comment>username_1: You can use a filter to do this task:
```
.filter('commaSpace', [function () {
return function (str) {
return str.replace(/,/g, ', ');
};
}])
```
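Then apply it in the template with `ng-bind` (assuming the filter is registered on your module and `Username` holds the comma-separated string):

```
<span ng-bind="Username | commaSpace"></span>
```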
Upvotes: -1 [selected_answer]<issue_comment>username_2: You can directly bind like this,
```
<span ng-bind="Username.split(',').join(', ')"></span>
```
No need to create filters.
To initialize the value, use `ng-init` directive to hardcode the value to Username
>
> ng-init="Username='aby,abraham,issac,rebecca,job,david,daniel'"
>
>
>
Upvotes: 0
|
2018/03/16
| 346 | 1,234 |
<issue_start>username_0: In my JHipster application, when I create a new user it sends a reset-password email. Users are getting the email, but I want to display an image in that email so it looks good.
I have written the following line in ActivationEmail.html to add an image, but the user doesn't get the image in the email.
```
![Mountain View]()
```
**Please suggest the right code to add an image to the email.**<issue_comment>username_1: Creating a user through the User Management page uses the `creationEmail.html` template. Signing up through the register page uses the `activationEmail.html` template.
You can add images in your JHipster email templates by using a regular image tag:
```

```
For example, the image should be displayed in the email like in this screenshot:
[](https://i.stack.imgur.com/7bEPk.png)
Upvotes: 1 <issue_comment>username_2: ```
JHipster activation

Dear
Your JHipster account has been created, please click on the URL below to activate it:
Activation Link
Regards,
*JHipster.*
```
Upvotes: 0
|
2018/03/16
| 264 | 939 |
<issue_start>username_0: I have a table named activities with columns id, title, content, posted\_date, and I want to fetch records so that the row whose id I specify is displayed at the top, and thereafter all the other rows are displayed. I have written the following query but it's throwing an error.
```
SELECT * FROM `activities` limit 0,4 put top WHERE id = 4
```<issue_comment>username_1: You can use conditional order by clause
```
SELECT *
FROM `activities`
ORDER BY id = 4 DESC
LIMIT 0,4
```
Or
```
SELECT *
FROM `activities`
ORDER BY CASE WHEN id = 4 THEN 1 ELSE 0 END DESC
LIMIT 0,4
```
Upvotes: 0 <issue_comment>username_2: You can order by `id=4` ...
Add `, posted_date asc` to order the rest, or omit it if you don't care about the order of the rest.
```
select
*
from
activities
order by
id=4 desc,
posted_date asc
limit 4
```
Upvotes: 3 [selected_answer]
|
2018/03/16
| 3,710 | 14,199 |
<issue_start>username_0: This problem has been successfully resolved. I am editing my post to document my experience for posterity and future reference.
The Task
--------
I have 117 PDF files (average size ~238 KB) uploaded to Google Drive. I want to convert them all to Google Docs and keep them in a different Drive folder.
The Problem
-----------
I attempted to convert the files using [Drive.Files.insert](https://developers.google.com/drive/v2/reference/files/insert). However, under most circumstances, only 5 files could be converted this way before the function expires prematurely with this error
>
> Limit Exceeded: DriveApp. (line #, file "Code")
>
>
>
where the line referenced above is when the `insert` function is called. After calling this function for the first time, subsequent calls typically failed immediately with no additional google doc created.
Approach
--------
I used 3 main ways to achieve my goal. One was using the [Drive.Files.insert](https://developers.google.com/drive/v2/reference/files/insert), as mentioned above. The other two involved using [Drive.Files.copy](https://developers.google.com/drive/v2/reference/files/copy) and sending a [batch of HTTP requests](https://developers.google.com/drive/v3/web/batch). These last two methods were suggested by Tanaike, and I recommend reading his answer below for more information. The `insert` and `copy` functions are from [Google Drive REST v2 API](https://developers.google.com/drive/v2/web/about-sdk), while batching multiple HTTP requests is from Drive REST v3.
With [Drive.Files.insert](https://developers.google.com/drive/v2/reference/files/insert), I experienced issues dealing with execution limitations (explained in the Problem section above). One solution was to run the functions multiple times. And for that, I needed a way to keep track of which files were converted. I had two options for this: using a spreadsheet and a [continuation token](https://developers.google.com/apps-script/reference/drive/file-iterator#getContinuationToken()). Therefore, I had 4 different methods to test: the two mentioned in this paragraph, [batching HTTP requests](https://developers.google.com/drive/v3/web/batch), and calling [Drive.Files.copy](https://developers.google.com/drive/v2/reference/files/copy).
Because [team drives behave differently from regular drives](https://developers.google.com/drive/v3/web/enable-teamdrives), I felt it necessary to try each of those methods twice, one in which the folder containing the PDFs is a regular non-Team Drive folder and one in which that folder is under a Team Drive. In total, this means I had **8** different methods to test.
These are the exact functions I used. Each of these was used twice, with the only variations being the ID of the source and destination folders (for reasons stated above):
Method A: Using [Drive.Files.insert](https://developers.google.com/drive/v2/reference/files/insert) and a spreadsheet
---------------------------------------------------------------------------------------------------------------------
```
function toDocs() {
var sheet = SpreadsheetApp.openById(/* spreadsheet id*/).getSheets()[0];
var range = sheet.getRange("A2:E118");
var table = range.getValues();
var len = table.length;
var resources = {
title: null,
mimeType: MimeType.GOOGLE_DOCS,
parents: [{id: /* destination folder id */}]
};
var count = 0;
var files = DriveApp.getFolderById(/* source folder id */).getFiles();
while (files.hasNext()) {
var blob = files.next().getBlob();
var blobName = blob.getName();
for (var i=0; i
```
Method B: Using [Drive.Files.insert](https://developers.google.com/drive/v2/reference/files/insert) and a [continuation token](https://developers.google.com/apps-script/reference/drive/file-iterator#getContinuationToken())
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
```
function toDocs() {
var folder = DriveApp.getFolderById(/* source folder id */);
var sprop = PropertiesService.getScriptProperties();
var contToken = sprop.getProperty("contToken");
var files = contToken ? DriveApp.continueFileIterator(contToken) : folder.getFiles();
var options = {
ocr: true
};
var resource = {
title: null,
mimeType: null,
parents: [{id: /* destination folder id */}]
};
while (files.hasNext()) {
var blob = files.next().getBlob();
resource.title = blob.getName();
resource.mimeType = blob.getContentType();
Drive.Files.insert(resource, blob, options); // Limit Exceeded: DriveApp. (line 113, file "Code")
sprop.setProperty("contToken", files.getContinuationToken());
}
}
```
Method C: Using [Drive.Files.copy](https://developers.google.com/drive/v2/reference/files/copy)
-----------------------------------------------------------------------------------------------
Credit for this function goes to Tanaike -- see his answer below for more details.
```
function toDocs() {
var sourceFolderId = /* source folder id */;
var destinationFolderId = /* destination folder id */;
var files = DriveApp.getFolderById(sourceFolderId).getFiles();
while (files.hasNext()) {
var res = Drive.Files.copy({parents: [{id: destinationFolderId}]}, files.next().getId(), {convert: true, ocr: true});
Logger.log(res)
}
}
```
Method D: Sending [batches of HTTP requests](https://developers.google.com/drive/v3/web/batch)
----------------------------------------------------------------------------------------------
Credit for this function goes to Tanaike -- see his answer below for more details.
```
function toDocs() {
var sourceFolderId = /* source folder id */;
var destinationFolderId = /* destination folder id */;
var files = DriveApp.getFolderById(sourceFolderId).getFiles();
var rBody = [];
while (files.hasNext()) {
rBody.push({
method: "POST",
endpoint: "https://www.googleapis.com/drive/v3/files/" + files.next().getId() + "/copy",
requestBody: {
mimeType: "application/vnd.google-apps.document",
parents: [destinationFolderId]
}
});
}
var cycle = 20; // Number of API calls at 1 batch request.
for (var i = 0; i < Math.ceil(rBody.length / cycle); i++) {
var offset = i * cycle;
var body = rBody.slice(offset, offset + cycle);
var boundary = "xxxxxxxxxx";
var contentId = 0;
var data = "--" + boundary + "\r\n";
body.forEach(function(e){
data += "Content-Type: application/http\r\n";
data += "Content-ID: " + ++contentId + "\r\n\r\n";
data += e.method + " " + e.endpoint + "\r\n";
data += e.requestBody ? "Content-Type: application/json; charset=utf-8\r\n\r\n" : "\r\n";
data += e.requestBody ? JSON.stringify(e.requestBody) + "\r\n" : "";
data += "--" + boundary + "\r\n";
});
var options = {
method: "post",
contentType: "multipart/mixed; boundary=" + boundary,
payload: Utilities.newBlob(data).getBytes(),
headers: {'Authorization': 'Bearer ' + ScriptApp.getOAuthToken()},
muteHttpExceptions: true,
};
var res = UrlFetchApp.fetch("https://www.googleapis.com/batch", options).getContentText();
// Logger.log(res); // If you use this, please remove the comment.
}
}
```
What Worked and What Didn't
---------------------------
* None of the functions using [Drive.Files.insert](https://developers.google.com/drive/v2/reference/files/insert) worked. Every
function using `insert` for conversion failed with this error
>
> Limit Exceeded: DriveApp. (line #, file "Code")
>
>
>
(line number replaced with generic symbol). No further details or
description of the error could be found. A notable variation was one
in which I used a spreadsheet and the PDFs were in a team drive
folder; while all other methods failed instantly without converting a
single file, this one converted 5 before failing. However, when
considering why this variation did better than the others, I think it
was more of a fluke than any reason related to the use of particular
resources (spreadsheet, team drive, etc.)
* Using [Drive.Files.copy](https://developers.google.com/drive/v2/reference/files/copy) and [batch HTTP requests](https://developers.google.com/drive/v3/web/batch) worked only
when the source folder was a personal (non-Team Drive) folder.
* Attempting to use the `copy` function while reading from a Team Drive
folder fails with this error:
>
> File not found: 1RAGxe9a_-euRpWm3ePrbaGaX5brpmGXu (line #, file "Code")
>
>
>
(line number replaced with generic symbol). The line being referenced
is
```
var res = Drive.Files.copy({parents: [{id: destinationFolderId}]}, files.next().getId(), {convert: true, ocr: true});
```
* Using [batch HTTP requests](https://developers.google.com/drive/v3/web/batch) while reading from a Team Drive folder
does nothing -- no doc files are created and no errors are thrown.
Function silently terminates without having accomplished anything.
Conclusion
----------
If you wish to convert a large number of PDFs to google docs or text files, then use [Drive.Files.copy](https://developers.google.com/drive/v2/reference/files/copy) or [send batches of HTTP requests](https://developers.google.com/drive/v3/web/batch) and make sure that the PDFs are stored in a personal drive rather than a Team Drive.
---
*Special thanks to @tehhowch for taking such an avid interest in my question and for repeatedly coming back to provide feedback, and to @Tanaike for providing code along with explanations that successfully solved my problem (with a caveat, read above for details).*<issue_comment>username_1: You can first fetch and store the ids of all the files in a Google Sheet. Then you can process each file normally using its id. After you have processed a file, mark it as processed, and before processing a file, check whether it has already been processed.
If there are many files, you can also store the row number up to which you have processed, and continue from there the next time.
Then create a trigger to execute your function every 10 minutes or so.
This way you can work around the execution-time limit for a single run. Note that API request quotas will not be bypassed by this method.
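A minimal sketch of that idea, assuming a tracking sheet whose column A holds the file id and column B the processed flag (the sheet id, column layout, and `convertFile()` helper are all hypothetical placeholders):
```
// Hypothetical sketch: resume work across runs via ScriptProperties and a
// time-driven trigger. Adapt the sheet id, columns and convertFile() to
// your own setup.
function processSomeFiles() {
  var sheet = SpreadsheetApp.openById("TRACKING_SHEET_ID").getSheets()[0];
  var props = PropertiesService.getScriptProperties();
  var row = Number(props.getProperty("lastRow") || 2); // resume point
  var last = sheet.getLastRow();
  var deadline = Date.now() + 5 * 60 * 1000; // stop before the 6 min limit
  for (; row <= last && Date.now() < deadline; row++) {
    if (sheet.getRange(row, 2).getValue() !== "done") {
      convertFile(sheet.getRange(row, 1).getValue()); // your conversion logic
      sheet.getRange(row, 2).setValue("done");        // mark as processed
    }
    props.setProperty("lastRow", String(row + 1));
  }
}
// Run it every 10 minutes:
// ScriptApp.newTrigger("processSomeFiles").timeBased().everyMinutes(10).create();
```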
Upvotes: 1 <issue_comment>username_2: You want to convert PDF files in a folder to Google Documents. The PDF files are in a Team Drive folder, and you want to import the converted files into a folder in your Google Drive. If my understanding is correct, how about this method?
For the conversion from PDF to Google Document, you can use not only `Drive.Files.insert()` but also `Drive.Files.copy()`. The advantages of using `Drive.Files.copy()` are:
* Although `Drive.Files.insert()` has a size limitation of 5 MB, `Drive.Files.copy()` can handle files larger than 5 MB.
* In my environment, the process speed was faster than `Drive.Files.insert()`.
For this method, I would like to propose the following 2 patterns.
Pattern 1 : Using Drive API v2
------------------------------
In this case, Drive API v2 of Advanced Google Services is used for converting files.
```
function myFunction() {
var sourceFolderId = "/* source folder id */";
var destinationFolderId = "/* dest folder id */";
var files = DriveApp.getFolderById(sourceFolderId).getFiles();
while (files.hasNext()) {
var res = Drive.Files.copy({parents: [{id: destinationFolderId}]}, files.next().getId(), {convert: true, ocr: true});
// Logger.log(res) // If you use this, please remove the comment.
}
}
```
Pattern 2 : Using Drive API v3
------------------------------
In this case, Drive API v3 is used for converting files, and I used batch requests for this situation, because one batch request can bundle up to 100 API calls. This mitigates the API quota issue.
```
function myFunction() {
var sourceFolderId = "/* source folder id */";
var destinationFolderId = "/* dest folder id */";
var files = DriveApp.getFolderById(sourceFolderId).getFiles();
var rBody = [];
while (files.hasNext()) {
rBody.push({
method: "POST",
endpoint: "https://www.googleapis.com/drive/v3/files/" + files.next().getId() + "/copy",
requestBody: {
mimeType: "application/vnd.google-apps.document",
parents: [destinationFolderId]
}
});
}
var cycle = 100; // Number of API calls at 1 batch request.
for (var i = 0; i < Math.ceil(rBody.length / cycle); i++) {
var offset = i * cycle;
var body = rBody.slice(offset, offset + cycle);
var boundary = "xxxxxxxxxx";
var contentId = 0;
var data = "--" + boundary + "\r\n";
body.forEach(function(e){
data += "Content-Type: application/http\r\n";
data += "Content-ID: " + ++contentId + "\r\n\r\n";
data += e.method + " " + e.endpoint + "\r\n";
data += e.requestBody ? "Content-Type: application/json; charset=utf-8\r\n\r\n" : "\r\n";
data += e.requestBody ? JSON.stringify(e.requestBody) + "\r\n" : "";
data += "--" + boundary + "\r\n";
});
var options = {
method: "post",
contentType: "multipart/mixed; boundary=" + boundary,
payload: Utilities.newBlob(data).getBytes(),
headers: {'Authorization': 'Bearer ' + ScriptApp.getOAuthToken()},
muteHttpExceptions: true,
};
var res = UrlFetchApp.fetch("https://www.googleapis.com/batch", options).getContentText();
// Logger.log(res); // If you use this, please remove the comment.
}
}
```
Note :
------
* If 100 API calls per batch request is too many for your situation, please reduce `var cycle = 100`.
* If Drive API v3 cannot be used for the Team Drive, please tell me; I can convert the script to Drive API v2.
* If the Team Drive is the cause of the issue in your situation, can you try this after copying the PDF files to your Google Drive?
Reference :
-----------
* [Batching Requests](https://developers.google.com/drive/v3/web/batch)
If these are not useful for you, I'm sorry.
Upvotes: 4 [selected_answer]
|
2018/03/16
| 2,512 | 9,417 |
<issue_start>username_0: After experimenting with my VSCode key bindings,
I would like to reset them to the original settings.
How do I do that?
I am on Linux Mint 18.
I tried removing all the records from the keybindings.json<issue_comment>username_1: Try this documentation page about key binding in VSCode:
<https://code.visualstudio.com/docs/getstarted/keybindings>
Open the directory that contains your user settings (<https://code.visualstudio.com/docs/getstarted/settings>) and remove the user key bindings file.
Upvotes: 3 [selected_answer]<issue_comment>username_2: Here are the steps to reset the keybindings in VS code.
1. Click **File > Preferences > Keyboard Shortcuts** or Press **Ctrl+K Ctrl+S**
[](https://i.stack.imgur.com/BoboQ.png)
2. Then, click on **keybindings.json**
[](https://i.stack.imgur.com/FaDTI.png)
3. From **keybindings.json** remove the custom bindings you want to reset.
Upvotes: 5 <issue_comment>username_3: It seems newer versions of VSCode (>1.33 for Mac) don't have a direct link to `keybindings.json` anymore, as this answer [shows](https://stackoverflow.com/a/51232593/7948731). However, there is an option to reset user-defined keybindings without messing with files.
Go to the Keyboard shortcuts settings:
[](https://i.stack.imgur.com/YBEQU.png)
There, find the setting tagged as "User". If you click on it with the right mouse button, a context menu will show the option "Reset Keybinding":
[](https://i.stack.imgur.com/gSZQr.png)
This action will reset the selected keybinding and tag it with "Default" again.
Upvotes: 5 <issue_comment>username_4: Version 1.34.0 seems to have the settings at a slightly different location:
1. Click File > Preferences > Keyboard Shortcuts
2. There is a triple-dot (...) at the top-right hand corner. Click on that and select "Show User Keybindings"
3. Right click on the key-binding that you want to reset and choose "Reset Keybinding"
Upvotes: 7 <issue_comment>username_5: For VSCode Version 1.35.1, which I'm using, one can directly open the `keybindings.json` file using the button that looks like `{}` on top-right corner of "Keyboard Shortcuts" Tab's title bar:
[Picture showing {} button in top-right corner](https://i.stack.imgur.com/0UQVn.png)
Cleaning content of this file cleans all user defined key bindings.
Upvotes: 0 <issue_comment>username_6: First go to File > Preferences > Keyboard Shortcuts.
You can see all the keys you have changed by clicking on the triple-dot,
or by putting ( @source:user ) in the search bar.
Now you can right-click on the one you want to reset and select ( Reset Keybinding )[](https://i.stack.imgur.com/MpjM9.png)
Upvotes: 4 <issue_comment>username_7: For future searchers, since this question refers to Linux, even if the keybindings.json file is moved again one can always just use locate to find it:
`$ locate keybindings.json`.
Chances are, you will only have one, and if you have more, it will be clear where it is, as it is somewhere inside `Code` folder.
For example, as of today, mine is here: `/home/auser/.config/Code/User/keybindings.json`
Going directly to the file, will give you the opportunity to keep what you want and remove what you think might be the problematic setting.
Upvotes: 2 <issue_comment>username_8: If you had installed the Keybinding as an Extension e.g Sublime or IntelliJ IDEA Keybindings, simple go to Extension and disable or uninstall it and you would have your default keybinding.
[](https://i.stack.imgur.com/hpcDV.png)
Upvotes: 3 <issue_comment>username_9: On VS Code version 1.42.1 on Mac you can find a button that opens the Keyboard shortcuts JSON file on the top right corner of the keyboard shortcuts screen which you can open from Code -> Preferences -> Keyboard Shortcuts
[](https://i.stack.imgur.com/4yBcc.png)
Upvotes: 2 <issue_comment>username_10: For the newer version of VSCode (Version: 1.43.1), you can open the `keybindings.json` file from the Command Palette (`⇧⌘P` OR `Ctrl+Shift+P`) with the **Preferences: Open Keyboard Shortcuts (JSON)** command.
Once you delete all the data in the `keybindings.json` file you should get rid of any changes you made to keyboard shortcuts for your installation. Everything would be set back to default.
**Reason:** The first line in the `keybindings.json` file is the comment `// Place your key bindings in this file to override the defaults`, followed by an empty array `[]`, which means that if you delete everything else in there you'll get the VSCode defaults. (Ref <https://code.visualstudio.com/docs/getstarted/keybindings#_advanced-customization>)
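For reference, a freshly reset `keybindings.json` therefore contains nothing but:
```
// Place your key bindings in this file to override the defaults
[]
```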
You can find all information on keybindings [here](https://code.visualstudio.com/docs/getstarted/keybindings).
Upvotes: 2 <issue_comment>username_11: Do we need another answer? Maybe not, but every year or so I find myself sifting through the information on this page, so to make it quicker next time, here are some notes:
To find the location of the settings, you can look for a button/link to the json file located somewhere in Preferences. However, I have found it easier to find the json files on my hard drive than to locate that button/link inside the app (some users report that the button/link is missing in some versions of the app). If your OS does not allow you to search through system files, open a terminal session and type `$ locate keybindings.json`.
If you can memorize shortcuts, a typical default shortcut that can take you to the button/link is **CMD+SHIFT+P**. This shortcut opens a box below the main toolbar and you can type "**json**" in that box to find a button/link to the json file.
General settings are in `settings.json`
Keyboard settings are in `keybindings.json`
MacOS: `~/Library/Application Support/Code/User/`
**Example of keybindings.json**
```
// Place your key bindings in this file to override the defaults
[
{
"key": "cmd+r cmd+r",
"command": "workbench.action.reloadWindow",
"when": "isDevelopment"
},
{
"key": "cmd+r",
"command": "-workbench.action.reloadWindow",
"when": "isDevelopment"
},
{
"key": "shift+cmd+c shift+cmd+c",
"command": "workbench.action.terminal.openNativeConsole",
"when": "!terminalFocus"
},
{
"key": "<KEY>",
"command": "-workbench.action.terminal.openNativeConsole",
"when": "!terminalFocus"
},
{
"key": "ctrl+cmd+c",
"command": "editor.action.commentLine",
"when": "editorTextFocus && !editorReadonly"
},
{
"key": "ctrl+shift+alt+cmd+[Minus]",
"command": "-editor.action.commentLine",
"when": "editorTextFocus && !editorReadonly"
},
{
"key": "<KEY>",
"command": "editor.action.blockComment",
"when": "editorTextFocus && !editorReadonly"
},
{
"key": "shift+alt+a",
"command": "-editor.action.blockComment",
"when": "editorTextFocus && !editorReadonly"
}
]
```
Note that mapping a key combination that is already in use may result in conflicts. So the best approach is to first remap that default binding to something else. In the above, for instance, the `"-"` that prefixes `"-editor.action.blockComment"` serves to suppress the default binding. Thus, you may find that your key bindings are best set in pairs (unless your preferred combinations are sufficiently rare).
**Example of settings.json**
```
{
"workbench.colorTheme": "Solarized Light",
"window.zoomLevel": 4,
"workbench.activityBar.visible": false,
"workbench.statusBar.visible": false,
"editor.quickSuggestions": false,
"editor.suggest.snippetsPreventQuickSuggestions": false,
"editor.acceptSuggestionOnCommitCharacter": false
}
```
Upvotes: 3 <issue_comment>username_12: In the latest version, the settings JSON file can be opened with the highlighted button shown below.
I deleted everything in there and it reset all my keys.
[User setting file](https://i.stack.imgur.com/BSrx2.png)
Upvotes: 0 <issue_comment>username_13: If you are on a Mac, press and hold Command while hitting the K and S keys. Then click the icon on the top right with the three circles and press "Show User Keybindings". Then, hit Command + Delete while hovering over the keybinding you want to delete.
Upvotes: 0 <issue_comment>username_14: In Visual Studio Code, if your key shortcut has a `-` at the beginning of its command, it means it is disabled (a dimmed entry is also an indicator that it is disabled). To re-enable it, go to your keybindings JSON file and remove the `-` from it.
From this
```
{
"key": "ctrl+space",
"command": "-editor.action.triggerSuggest",
"when": "editorHasCompletionItemProvider && textInputFocus && !editorReadonly && !suggestWidgetVisible"
}
```
To this
```
{
"key": "ctrl+space",
"command": "editor.action.triggerSuggest",
"when": "editorHasCompletionItemProvider && textInputFocus && !editorReadonly && !suggestWidgetVisible"
},
```
Upvotes: 0
|
2018/03/16
| 1,870 | 6,256 |
<issue_start>username_0: I am creating a flashcard game for my kid. It's about Dinos. I am having trouble making "Congrats, You got it right" appear on the screen. I have moved my code all over the place but no luck. Can someone please help me out.
To be clear, what I want to happen is: when the user presses 1, 2, or 3 on the keypad, and that key is the correct answer to the question, the message "Congrats, You got it right!" should appear on the screen.
I know the keydown event right now is the return key, but I did that for testing purposes only. This is also the same for testtext variable. I was using that variable to see if I could print "Hello WOrld" to the screen.
I do have a feeling it has something to do with the loop that is running. My guess is that the message does show up for a fraction of a second but disappears before anyone can see it.
```
import pygame, random
pygame.font.init()
pygame.init()
font = pygame.font.Font(None, 48)

#Created the window display
size = width, height = 800, 800
screen = pygame.display.set_mode(size)

#Loads the images of the starting game Trex
#t_rex = pygame.image.load('trex1.png')
##Places the image on the screen
#screen.blit(t_rex,(150,50))

count = 0
score = 0
active = False
testtext = font.render("Hello WOrld", True, (250, 250, 250))

#The below code keeps the display window open until user decides to quie app
crashed = False
while not crashed:
    for event in pygame.event.get():
        if event.type == pygame.QUIT:
            crashed = True
        if event.type == pygame.KEYDOWN:
            if event.key == pygame.K_RETURN:
                screen.blit(testtext, (200, 699))

    while count < 2:
        screen.fill(0)
        dinoQuestions = ["Does a t-rex eat meat?\n", "Does a trycerotopes have 3 horns?\n"]
        dinoAnswer = ["Yes\n", "No\n", "Maybe\n"]
        wordnum = random.randint(0, len(dinoQuestions)-1)
        mainpic = pygame.image.load("trex1.png")
        screen.blit(mainpic, (150, 20))
        options = [random.randint(0, len(dinoAnswer)-1), random.randint(0, len(dinoAnswer)-1)]
        options[random.randint(0, 1)] = wordnum
        question_display = font.render(dinoQuestions[wordnum].rstrip('\n'), True, (255, 255, 255))
        text1 = font.render('1 - ' + dinoAnswer[options[0]].rstrip('\n'), True, (255, 255, 255))
        text2 = font.render('2 - ' + dinoAnswer[options[1]].rstrip('\n'), True, (255, 255, 255))
        #the below code is for testing purposes only
        screen.blit(question_display, (200, 590))
        screen.blit(text1, (200, 640))
        screen.blit(text2, (200, 690))
        count = count + 1

    pygame.display.flip()
```<issue_comment>username_1: I am a bit confused about what your exact problem is, so I'm going to try to answer. You say that you want the words "Congrats, you got it right!" to appear, so I can help you with what went wrong. You blit the testtext before you color the screen, so each time the loop runs it displays the testtext but then almost instantly covers it up with screen.fill(0). To fix this, blit the text after the screen is colored: either put it right at the start of the loop, or add another event check after the current position of screen.fill in the code.
Also, I would get rid of the nested while loop and replace it with an if statement, since it is already inside the main while loop.
Is this what you were looking for?
Upvotes: 0 <issue_comment>username_2: The blit to your screen surface you perform when handling a `Return` key down event is overwritten when you later call `screen.fill(0)`.
I've rearranged your code a little and added displaying a result on appropriate key press.
```
import pygame
import random

pygame.init()
pygame.font.init()
font = pygame.font.Font(None, 48)

size = width, height = 800, 800
screen = pygame.display.set_mode(size)  # Created the window display

count = 0
score = 0
active = False

white = pygame.color.Color("white")
black = pygame.color.Color("black")
green = pygame.color.Color("green")

# load/create static resources once
mainpic = pygame.image.load("trex1.png")
testtext = font.render("Hello World", True, (250, 250, 250))
correct_text = font.render("Correct! Well Done!", True, green)

clock = pygame.time.Clock()  # for limiting FPS

dinoQuestions = ["Does a t-rex eat meat?", "Does a triceratops have 3 horns?"]
dinoAnswer = ["Yes", "No", "Maybe"]

# initialise state
show_hello = False
show_correct = False
update_questions = True  # need to update questions on the first iteration

finished = False
while not finished:
    for event in pygame.event.get():
        if event.type == pygame.QUIT:
            finished = True
        if event.type == pygame.KEYDOWN:
            if event.key == pygame.K_RETURN:
                show_hello = not show_hello  # toggle flag for later display
            elif event.key == pygame.K_SPACE:
                update_questions = True
            elif event.key in [pygame.K_1, pygame.K_2]:
                # an answer has been selected
                # pygame.K_1 is 0, pygame.K_2 is 1
                if dinoAnswer[event.key - pygame.K_1] == "Yes":
                    show_correct = True
                    count += 1
                else:
                    show_correct = False

    screen.fill(black)
    screen.blit(mainpic, (150, 20))

    if show_hello:
        screen.blit(testtext, (200, 199))
    if show_correct:
        screen.blit(correct_text, (200, 300))

    if update_questions:
        random.shuffle(dinoQuestions)
        random.shuffle(dinoAnswer)
        question_display = font.render(dinoQuestions[0], True, white)
        text1 = font.render('1 - ' + dinoAnswer[0], True, white)
        text2 = font.render('2 - ' + dinoAnswer[1], True, white)
        update_questions = False
        show_correct = False

    # Display the Question
    screen.blit(question_display, (200, 590))
    screen.blit(text1, (200, 640))
    screen.blit(text2, (200, 690))

    # count = count + 1
    pygame.display.flip()
    clock.tick(60)
```
Hopefully this is enough of a framework for you to extend.
Let me know if you have any questions about any portions of the code.
Upvotes: 2 [selected_answer]
|
2018/03/16
| 1,876 | 6,459 |
<issue_start>username_0: I am currently working on a personal project using the Ruby language in Cloud9 IDE. Recently, I came across an environment/programming language called Processing. My goals is to make Processing available for my use in Cloud9 IDE.
I am a complete novice and have no clue what I am doing. I followed directions from the following website: [Directions for setting up Processing](https://www.sitepoint.com/drawing-processing-ruby/)
I entered the following commands into Cloud9's terminal:
```
rvm install jruby
rvm use jruby
gem install ruby-processing
PROCESSING_ROOT: "/Applications/Processing.app/Contents/Java"
```
Entering in `PROCESSING_ROOT: "/Applications/Processing.app/Contents/Java"`
outputted the message `bash: PROCESSING_ROOT:: command not found`.
I wasn't too surprised by this, since the last entry probably needs information specific to my setup. Here are my questions:
1.) Are these directions correct for what I am trying to do? (get Processing to work in Cloud9 IDE). If they are not correct, can someone please point me to the correct directions, or give me directions?
2.) If these directions are correct, how can I successfully finish the last step? I tried running DrawRuby.rb, a simple program copied from the internet that supposedly made use of Processing. When I ran the code, the message "You need to set PROCESSING\_ROOT in ~/.rpsrc" appeared. I am not sure if this piece of information is relevant or not, but I figured I would add it.
This is where I am at, and I am completely stuck. If someone could give me some help, I would be very grateful. Also, please make your explanation easy to understand. I am relatively new to the programming world, and may not necessarily understand terminology or how things should work.
Cheers!
\*\*\* Edit: I created a new workspace in Cloud9 IDE and tried Jed's suggestion. This is what happened:
```
echo 'PROCESSING_ROOT: "/Applications/Processing.app/Contents/Java"' > ~/.rpsrc
:~/workspace $ rp5 run Draw.rb
WARNING: you need to set PROCESSING_ROOT in ~/.rp5rc
NameError: uninitialized constant Processing::RP_CONFIG
Did you mean?  Config
               RbConfig
  const_missing at org/jruby/RubyModule.java:3344
        spin_up at /usr/local/rvm/gems/jruby-9.1.7.0/gems/ruby-processing-2.7.1/lib/ruby-processing/runner.rb:188
            run at /usr/local/rvm/gems/jruby-9.1.7.0/gems/ruby-processing-2.7.1/lib/ruby-processing/runner.rb:105
       execute! at /usr/local/rvm/gems/jruby-9.1.7.0/gems/ruby-processing-2.7.1/lib/ruby-processing/runner.rb:67
        execute at /usr/local/rvm/gems/jruby-9.1.7.0/gems/ruby-processing-2.7.1/lib/ruby-processing/runner.rb:61
                at /usr/local/rvm/gems/jruby-9.1.7.0/gems/ruby-processing-2.7.1/bin/rp5:10
           load at org/jruby/RubyKernel.java:979
                at /usr/local/rvm/gems/jruby-9.1.7.0/bin/rp5:1
           eval at org/jruby/RubyKernel.java:1000
                at /usr/local/rvm/gems/jruby-9.1.7.0/bin/jruby_executable_hooks:15
```
|
2018/03/16
| 2,197 | 7,786 |
<issue_start>username_0: I have tried to access the keycloak API from the postman. but it is showing 400 bad request.
I was calling api in the below format.
```
http://{hostname}:8080/auth/realms/master/protocol/openid-connect/token?username=admin&password=<PASSWORD>&client_id=admin-cli&grant_type=password
```
In the headers I have set the `content_type as application/x-www-form-urlencoded`
I am getting the response as below.
```
{
"error": "invalid_request",
"error_description": "Missing form parameter: grant_type"
}
```
Can anyone help me? Any help will be appreciated. Thanks in advance.<issue_comment>username_1: The URL you are using is the one for obtaining the token.
The token request should be a POST call; the request you posted is a GET request. Below is a curl example showing how to request the `access_token`:
```
curl -X POST \
http://{hostname}:8080/auth/realms/{realm}/protocol/openid-connect/token \
-H 'Content-Type: application/x-www-form-urlencoded' \
-d 'username=admin&password=<PASSWORD>&grant_type=password&client_id=admin-cli'
```
Upvotes: 2 <issue_comment>username_2: A bit late for this question, but you did ask about postman and not curl.
So you have to put the options in x-www-form-urlencoded
[](https://i.stack.imgur.com/5aFgH.png)
Upvotes: 6 [selected_answer]<issue_comment>username_3: Call the API through a POST client.
**URL** - http://localhost:8080/auth/realms/Demo/protocol/openid-connect/token
Here in the above URL I am using `Demo` as my realm instead of `master`.
**ContentType** - "Content-Type":"application/x-www-form-urlencoded"
**Params**:
```
{
"client_secret" : "90ec9638-7647-4e65-ad20-b<PASSWORD>",
"username" : "ankur",
"password" : "<PASSWORD>",
"grant_type" : "password",
"client_id": "app-client"
}
```
**Set Header as below**
[](https://i.stack.imgur.com/RuBCw.png)
**Data needs to be passed as shown below**
[](https://i.stack.imgur.com/rjMxe.png)
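For reference, a minimal sketch of the same request as a curl call, using the values from this answer (the secret and password are placeholders):
```
curl -X POST \
  http://localhost:8080/auth/realms/Demo/protocol/openid-connect/token \
  -H 'Content-Type: application/x-www-form-urlencoded' \
  -d 'client_id=app-client' \
  -d 'client_secret=<CLIENT_SECRET>' \
  -d 'username=ankur' \
  -d 'password=<PASSWORD>' \
  -d 'grant_type=password'
```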
Upvotes: 4 <issue_comment>username_4: You can also use CURL to get information
```
curl -L -X POST 'http:///auth/realms//protocol/openid-connect/token' -H 'Content-Type: application/x-www-form-urlencoded' --data-urlencode 'client_id=' --data-urlencode 'grant_type=password' --data-urlencode 'client_secret=' --data-urlencode 'scope=openid' --data-urlencode 'username=' --data-urlencode 'password='
```
Upvotes: 0 <issue_comment>username_5: I created a Postman collection to help us get started with the Keycloak API. Anyone can save the following JSON and import it into Postman:
```
{
"info": {
"_postman_id": "07a9d691-5b1c-4869-990b-551da29590fe",
"name": "Keycloak",
"schema": "https://schema.getpostman.com/json/collection/v2.1.0/collection.json"
},
"item": [
{
"name": "GET REALM",
"request": {
"method": "GET",
"header": [],
"url": {
"raw": "{{KEYCLOAK_URL}}admin/realms/{{KEYCLOAK_REALM}}",
"host": [
"{{KEYCLOAK_URL}}admin"
],
"path": [
"realms",
"{{KEYCLOAK_REALM}}"
]
}
},
"response": []
},
{
"name": "GET USERS",
"event": [
{
"listen": "prerequest",
"script": {
"id": "dfda403a-35b8-4704-840d-102eddac32e6",
"exec": [
""
],
"type": "text/javascript"
}
}
],
"protocolProfileBehavior": {
"disableBodyPruning": true
},
"request": {
"method": "GET",
"header": [],
"body": {
"mode": "urlencoded",
"urlencoded": []
},
"url": {
"raw": "{{KEYCLOAK_URL}}admin/realms/{{KEYCLOAK_REALM}}/users",
"host": [
"{{KEYCLOAK_URL}}admin"
],
"path": [
"realms",
"{{KEYCLOAK_REALM}}",
"users"
]
}
},
"response": []
}
],
"auth": {
"type": "bearer",
"bearer": [
{
"key": "token",
"value": "{{KEYCLOAK_TOKEN}}",
"type": "string"
}
]
},
"event": [
{
"listen": "prerequest",
"script": {
"id": "c3ae5df7-b1e0-4af1-988b-c592df3fd98e",
"type": "text/javascript",
"exec": [
"const echoPostRequest = {",
" url: pm.environment.get('KEYCLOAK_URL') + 'realms/master/protocol/openid-connect/token',",
" method: 'POST',",
" header: 'Content-Type:application/x-www-form-urlencoded',",
" body: {",
" mode: 'urlencoded',",
" urlencoded: [",
" {key:'username', value:pm.environment.get('KEYCLOAK_USER')}, ",
" {key:'password', value:pm.environment.get('KEYCLOAK_PASSWORD')}, ",
" {key:'client_id', value:'admin-cli'}, ",
" {key:'grant_type', value:'password'}",
" ]",
" }",
"};",
"",
"var getToken = true;",
"",
"if (!pm.environment.get('KEYCLOAK_TOKEN_EXPIRY') || ",
" !pm.environment.get('KEYCLOAK_TOKEN')) {",
" console.log('Token or expiry date are missing')",
"} else if (pm.environment.get('KEYCLOAK_TOKEN_EXPIRY') <= (new Date()).getTime()) {",
" console.log('Token is expired')",
"} else {",
" getToken = false;",
" console.log('Token and expiry date are all good');",
"}",
"",
"if (getToken === true) {",
" pm.sendRequest(echoPostRequest, function (err, res) {",
" console.log(err ? err : res.json());",
" if (err === null) {",
" console.log('Saving the token and expiry date')",
" var responseJson = res.json();",
" pm.environment.set('KEYCLOAK_TOKEN', responseJson.access_token)",
" ",
" var expiryDate = new Date();",
" expiryDate.setSeconds(expiryDate.getSeconds() + responseJson.expires_in);",
" pm.environment.set('KEYCLOAK_TOKEN_EXPIRY', expiryDate.getTime());",
" }",
" });",
"}"
]
}
},
{
"listen": "test",
"script": {
"id": "fdb69bb4-14a5-43b4-97e2-af866643e390",
"type": "text/javascript",
"exec": [
""
]
}
}
],
"variable": [
{
"id": "698bbb41-d3f9-47f8-9848-4a1c32f9cca4",
"key": "token",
"value": ""
}
],
"protocolProfileBehavior": {}}
```
And I created a pre-request script to get the token and set it on each request, as you can see in the image below:
[](https://i.stack.imgur.com/TKoKP.png)
You should create the following environment variables:
KEYCLOAK_USER, KEYCLOAK_PASSWORD and KEYCLOAK_URL, where the URL must be https://{your keycloak installation}/auth/
Upvotes: 2
|
2018/03/16
| 569 | 1,747 |
<issue_start>username_0: JDK is already installed there.I have downloaded the apache-tomcat-8.5.29.tar.gz in /opt/tomcat. I want their folder structure webapps and all.
Executed the command `/path/tar xzvf apache-tomcat-8.5.29.tar.gz`, but the tar file is still there and no folders appeared. How do I get the folders inside tomcat?<issue_comment>username_1: The error says:
tar (child): **apache-tomcat-8.5.29.tar: Cannot open: No such file or directory**
It means that the file name is incorrect or the file is not in the directory you are in.
Try this
tar xzvf /PATH_OF_THE_FILE/apache-tomcat-8.5.29.tar.gz
Also check the name of the file you typed.
Upvotes: 0 <issue_comment>username_1: when you extract the tar.gz file, you should see an output like this:
```
...... (lot of information)
apache-tomcat-8.5.29/bin/configtest.sh
apache-tomcat-8.5.29/bin/daemon.sh
apache-tomcat-8.5.29/bin/digest.sh
apache-tomcat-8.5.29/bin/setclasspath.sh
apache-tomcat-8.5.29/bin/shutdown.sh
......
```
And a new directory called **apache-tomcat-8.5.29** will have been created; now you only have to type the following command:
cd apache-tomcat-8.5.29
If you need to list its contents, then execute this:
ls -l
Upvotes: 0 <issue_comment>username_2: Done:
```
sudo tar xvzf /opt/file_path/apache-tomcat-7.0.54.tar.gz -C /opt/extracted_folder_path --strip-components=1
```
It will extract the tar file into extracted_folder_path.
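For example, assuming the archive sits in /opt/tomcat as in the question, a sketch of the full sequence would be:
```
cd /opt/tomcat
sudo tar xzvf apache-tomcat-8.5.29.tar.gz --strip-components=1
ls    # should now list bin/ conf/ webapps/ ...
```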
Upvotes: 2 [selected_answer]<issue_comment>username_3: Before unpacking with the tar command, try this:
```bash
sudo chmod 777 ./apache-tomcat-8.5.85
```
You need to perform this action first because, after the download, the tar file is colored *red* in the directory listing and you need to make it *green*. You achieve this result with `chmod 777`, which changes the permissions on the file.
Upvotes: 0
|
2018/03/16
| 747 | 2,219 |
<issue_start>username_0: I have a dataset like so:
```
df<-data.frame(x=c("A","A","A","A", "B","B","B","B","B",
"C","C","C","C","C","D","D","D","D","D"),
y= as.factor(c(rep("Eoissp2",4),rep("Eoissp1",5),"Eoissp1","Eoisp4","Automerissp1","Automerissp2","Acharias",rep("Eoissp2",3),rep("Eoissp1",2))))
```
I want to identify, for each subset of `x`, the corresponding levels in `y` that are entirely duplicates containing the expression `Eois`. Therefore, `A` , `B`, and `D` will be returned in a vector because every level of `A` , `B`, and `D` contains the expression `Eois` , while level `C` consists of various unique levels (e.g. Eois, Automeris and Acharias). For this example the output would be:
```
output<- c("A", "B", "D")
```
|
2018/03/16
| 544 | 1,675 |
<issue_start>username_0: How can we check by code in Opencart 2.0.3 that Cart contains Gift Certificate?
I want to disable cash on delivery (cod) payment mode if cart has Gift Certificate (opencart 2.0.3). Any code or idea to achieve this?
|
2018/03/16
| 541 | 1,968 |
<issue_start>username_0: I'm trying to learn Postgres and Ive made two basic tables and I can't join them together.
here is my list Of relations:
```
Schema | Name | Type | Owner
--------+--------------+----------+----------
public | login | table | postgres
public | login_id_seq | sequence | postgres
public | users | table | test
(3 rows)
```
When I use the command
```
SELECT * FROM users JOIN login ON users.name = login.name;
```
I get
>
> ERROR: permission denied for relation login
>
>
>
I have no idea what to do or what I did wrong.<issue_comment>username_1: You should grant the `SELECT` permission to user *test*:
```
GRANT SELECT ON login TO test;
```
If *test* should also be allowed to modify `login`, you should grant other permissions as well:
```
GRANT SELECT, INSERT, UPDATE, DELETE ON login TO test;
```
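Since the relation list above also shows the sequence `login_id_seq`, inserts by *test* will additionally need privileges on that sequence; a minimal sketch:
```
GRANT USAGE, SELECT ON SEQUENCE login_id_seq TO test;
```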
You should execute these statements as database owner or as user *postgres*. In general, you can use
```
psql -Upostgres -dtest
```
if you're running this command on the same machine where the Postgres server is running.
You may also change the ownership of `login` to *test*:
```
ALTER TABLE login OWNER TO test;
ALTER SEQUENCE login_id_seq OWNER TO test;
```
But have to execute this as user *postgres* as well.
**Edit**: You can try to change the user with
```
SET ROLE 'postgres';
```
as suggested by *@username_2*.
Upvotes: 2 <issue_comment>username_2: The "test" user doesn't have permission to login and use the related tables. Run the query with the "postgres" user:
```
SET ROLE 'postgres';
```
Then run your query.
Upvotes: 0 <issue_comment>username_3: So this is what I did to finally get it to work...I basically just went into the login properties on pgAdmin4, found the owner and switched it to test and ran:
`SELECT * FROM users JOIN login ON users.name = login.name;`
and finally got what I was looking for. Surprisingly a simple fix.
Upvotes: 3 [selected_answer]
|
2018/03/16
| 791 | 2,504 |
<issue_start>username_0: I am reading a file with about 13,000 names on it into a list.
Then, I look at each character of each item on that list and if there is a match I remove that line from the list of 13,000.
If I run it once, it removes about half of the list. On the 11th run it seems to cut it down to 9%. Why is this script missing results? Why does it catch them with successive runs?
Using Python 3.
```
with open(fname) as f:
    lines = f.read().splitlines()

bad_letters = ['B', 'C', 'F', 'G', 'H', 'J', 'L', 'O', 'P', 'Q', 'U', 'W', 'X']

def clean(callsigns, bad):
    removeline = 0
    for line in callsigns:
        for character in line:
            if character in bad:
                removeline = 1
        if removeline == 1:
            lines.remove(line)
            removeline = 0
    return callsigns

for x in range(0, 11):
    lines = clean(lines, bad_letters)
    print(len(lines))
```<issue_comment>username_1: You are changing (i.e., mutating) the `lines` array while you're looping (i.e. iterating) over it. This is never a good idea because it means that you are changing something while you're reading it, which leads to you skipping over lines and not removing them in the first go.
There are many ways of fixing this. In the below example, we keep track of which lines to remove, and remove them in a separate loop in a way so that the indices do not change.
```
with open(fname) as f:
    lines = f.read().splitlines()

bad_letters = ['B', 'C', 'F', 'G', 'H', 'J', 'L', 'O', 'P', 'Q', 'U', 'W', 'X']

def clean(callsigns, bad):
    to_remove = []
    for line_i, line in enumerate(callsigns):
        for b in bad:
            if b in line:
                # We're removing this line, take note of it.
                to_remove.append(line_i)
                break

    # Remove the lines in a second step. Reverse it so the indices don't change.
    for r in reversed(to_remove):
        del callsigns[r]

    return callsigns

for x in range(0, 11):
    lines = clean(lines, bad_letters)
```
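You can see the skipping effect in a small interactive session; removing items while iterating makes the iterator jump over elements:
```
>>> nums = [1, 2, 3, 4]
>>> for n in nums:
...     nums.remove(n)
...
>>> nums   # half the items survived because the loop skipped over them
[2, 4]
```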
Upvotes: 3 [selected_answer]<issue_comment>username_2: Save the names you want to keep in a separate list. Maybe this way:
```
with open(fname) as f:
    lines = f.read().splitlines()

bad_letters = ['B', 'C', 'F', 'G', 'H', 'J', 'L', 'O', 'P', 'Q', 'U', 'W', 'X']

def clean(callsigns, bad):
    valid = [i for i in callsigns if not any(j in i for j in bad)]
    return valid

valid_names = clean(lines, bad_letters)
print(len(valid_names))
```
Upvotes: 1
|
2018/03/16
| 2,485 | 9,798 |
<issue_start>username_0: I was researching hosting a static website on Amazon S3, but I need to make it accessible only to a certain group of people who know the credentials.
I couldn't find any option other than the IP restriction below. I can't use this as I don't know the IP of every person who would access the site.
```
{
"Version": "2012-10-17",
"Statement": [
{
"Sid": "PublicReadGetObject",
"Effect": "Allow",
"Principal": "*",
"Action": "s3:GetObject",
"Resource": "arn:aws:s3:::testfoo/*",
"Condition": {
"IpAddress": {
"aws:SourceIp": "172.16.31.10/24"
},
"NotIpAddress": {
"aws:SourceIp": "192.168.3.11/32"
}
}
}
]
}
```
I am looking for an option like Apache's .htaccess, where a basic auth username and password are requested before granting access.<issue_comment>username_1: I'm expanding this answer to explain how to implement basic auth for your static website hosted in an S3 bucket, using CloudFront and Lambda@Edge. I used this [article](https://hackernoon.com/serverless-password-protecting-a-static-website-in-an-aws-s3-bucket-bfaaa01b8666) for reference on using Lambda@Edge as a way to implement serverless basic auth for your S3 bucket.
>
> In a nutshell, Lambda@Edge allows you to attach AWS Lambda functions to CloudFront behaviors. CloudFront is Amazon’s CDN solution and can sit in-front of a S3 bucket, providing low latency responses, high transfer speeds, support for custom domains with free SSL certificates from Amazon and it integrates with other AWS services, now including Lambda.
>
>
>
---
S3 Bucket Setup
---------------
I won't go into the details of how to create an S3 bucket. Basically, after you create an S3 bucket (e.g. s3-bucket-secure-example), upload your files and enable static website hosting in the properties. No need to enable any permissions at this point. If you then try to access your S3 bucket using the URL, you should get access denied.
---
Setting up the Cloudfront
-------------------------
Note: this is the minimal basic setup to use CloudFront for your S3 bucket.
Referencing this [walkthrough](https://docs.aws.amazon.com/AmazonS3/latest/dev/website-hosting-cloudfront-walkthrough.html), create a cloudfront distribution from [here](https://console.aws.amazon.com/cloudfront/home?#).
1. Create Distribution
2. Click on Get started for web
3. Under Origin Domain Name, put your S3 bucket static website endpoint.
4. Under Origin Settings, select "Yes" for Restrict Bucket Access. This will prevent users from bypassing the CloudFront distribution and going directly to the S3 bucket URL. You can then create a new identity under 'Origin Access Identity', or use an existing one, and let it update the bucket policy to grant read permissions to this CloudFront distribution, or you can update the policy manually.
[](https://i.stack.imgur.com/TnZqK.png)
5. You can leave the remaining settings for 'Default Cache Behavior Settings' as default for now, or adjust them based on what you want.
6. Under Distribution settings, you can select the price class you want, and under Default Root Object, you can put the object you want to return from your bucket (e.g. index.html).
7. Create distribution. It will take a while for this process to complete. You can go the Cloudfront Distributions home and go to the domain name it gave you (e.g. d3xxxxxxxx.cloudfront.net) to see your s3 bucket static website.
---
Setting up a [lambda@Edge](https://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/lambda-edge-how-it-works.html) function
-------------------------------------------------------------------------------------------------------------------------------------
Note: the CloudFront trigger used by this function is only available in the US East (N. Virginia) region (us-east-1) at the time of writing.
1. In your aws console home, go to Lambda under Compute.
2. Click on create a function.
3. Fill in the details: Name your function any name you want, and select 'Create new role from template'. For role name, you can put any name as well, and you can select basic edge lambda permissions.
[](https://i.stack.imgur.com/cFk2h.png)
4. Click on create function.
5. Inside the function, and under the Function Code section, add your authentication code (the referenced [article](https://hackernoon.com/serverless-password-protecting-a-static-website-in-an-aws-s3-bucket-bfaaa01b8666) above has a sample function to use). Then click on Save. Then under actions, publish a new version. Give it a description and click on publish.
6. Copy the ARN from the top corner and go back to your cloudfront distribution. Click on the id, then go to the behaviours tab. Select your behaviour and click on edit. Then at the bottom of the page you will find Lambda function associations.
7. Select "Viewer request" for event type, and paste in your lambda function ARN.
[](https://i.stack.imgur.com/L3vlC.png)
Then click, "Yes, Edit". Give it some time to update.
You can also check that the CloudFront distribution has been associated with your function by going back to the function and seeing that it is now associated.
[](https://i.stack.imgur.com/AAGF9.png)
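The article's sample function is not reproduced above, but a minimal sketch of such a viewer-request handler looks like this (the credentials are placeholders; adapt them before use):
```
'use strict';

// Hypothetical credentials -- replace with your own.
const USER = 'myuser';
const PASS = 'mypassword';

exports.handler = (event, context, callback) => {
    const request = event.Records[0].cf.request;
    const headers = request.headers;

    // Expected Authorization header for HTTP basic auth.
    const expected = 'Basic ' + Buffer.from(`${USER}:${PASS}`).toString('base64');

    if (typeof headers.authorization === 'undefined' ||
        headers.authorization[0].value !== expected) {
        // Not authorized: ask the browser for credentials.
        callback(null, {
            status: '401',
            statusDescription: 'Unauthorized',
            headers: {
                'www-authenticate': [{ key: 'WWW-Authenticate', value: 'Basic realm="Restricted"' }],
            },
        });
        return;
    }

    callback(null, request); // authorized -- forward the request to the origin
};
```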
Upvotes: 3 [selected_answer]<issue_comment>username_2: Amazon S3 is not a web server. It cannot process the contents of a page.
To selectively control access, you would need some form of compute logic that can determine whether access should be permitted. This is not available as part of S3.
The closest equivalent would be using [Amazon CloudFront cookies](https://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/Cookies.html) to grant access, but you would still need a way of initially serving the cookies.
A more capable solution could be created that calls API Gateway and Lambda but that's getting a bit too complex for a simple static website.
Upvotes: 0 <issue_comment>username_3: By now, this has become relatively straightforward on AWS. It's basically 3 things that need to be done:
1. Create a CloudFront function to add Basic Auth into the request.
2. Configure the Origin of the CloudFront distribution correctly in a few places.
3. Activate the CloudFront function.
That's it, no particular bells & whistles otherwise. Here's what I've done:
First, go to CloudFront, then click on Functions on the left, create a new function with a name of your choice (no region etc. necessary) and then add the following as the code of the function:
```
function handler(event) {
var user = "myuser";
var pass = "<PASSWORD>";
function encodeToBase64(str) {
var chars =
"ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/=";
for (
// initialize result and counter
var block, charCode, idx = 0, map = chars, output = "";
// if the next str index does not exist:
// change the mapping table to "="
// check if d has no fractional digits
str.charAt(idx | 0) || ((map = "="), idx % 1);
// "8 - idx % 1 * 8" generates the sequence 2, 4, 6, 8
output += map.charAt(63 & (block >> (8 - (idx % 1) * 8)))
) {
charCode = str.charCodeAt((idx += 3 / 4));
if (charCode > 0xff) {
throw new InvalidCharacterError("'btoa' failed: The string to be encoded contains characters outside of the Latin1 range."
);
}
block = (block << 8) | charCode;
}
return output;
}
var requiredBasicAuth = "Basic " + encodeToBase64(`${user}:${pass}`);
var match = false;
if (event.request.headers.authorization) {
if (event.request.headers.authorization.value === requiredBasicAuth) {
match = true;
}
}
if (!match) {
return {
statusCode: 401,
statusDescription: "Unauthorized",
headers: {
"www-authenticate": { value: "Basic" },
},
};
}
return event.request;
}
```
Then you can test it directly in the UI and, assuming it works and you have customized the username and password, publish the function.
Please note that I found the individual pieces of the function above on the Internet, so this is not my own code (other than piecing it together). I wish I could still find the sources so I could quote them here, but I can't find them anymore. Credits to the creators though! :-)
Next, open your CloudFront distribution and do the following:
1. Make sure your S3 bucket in the origin is configured as a REST endpoint and not a website endpoint, i.e. it must end on `.s3.amazonaws.com` and not have the word `website` in the hostname.
2. Also in the Origin settings, under "S3 bucket access", select "Yes use OAI (bucket can restrict access to only CloudFront)". In the setting below click on "Create OAI" to create a new OAI (unless you have an existing one and know what you're doing). And select "Yes, update the bucket policy" to allow AWS to add the necessary permissions to your OAI.
3. Finally, open your Behavior of the CloudFront distribution and scroll to the bottom. Under "Function associations", for "Viewer request" select "CloudFront Function" and select your newly created CloudFront function. Save your changes.
And that should be it. With a bit of luck a matter of a couple of minutes (realistically more, I know) and especially not additional complexity once this is all set up.
Upvotes: 0
|
2018/03/16
| 1,509 | 5,173 |
<issue_start>username_0: I am applying filters to objects (following [image filters demo](http://fabricjs.com/image-filters)) and everything is ok but after I save and load the canvas, the image filters change index.
At the moment I have four filters and they are applied by index (as shown in the demo).
>
> 0 : Grayscale
>
>
> 1 : Invert
>
>
> 2 : Remove Color
>
>
> 3 :- Blend Color
>
>
>
So if I apply Grayscale, and Remove Color, the 'filters' array looks like this , with indexes 0 and 2 which is correct...
[](https://i.stack.imgur.com/ivb9n.png)
But after I load the canvas (using loadFromJSON), the object's 'filters' array looks like this, with the indexes reset...
[](https://i.stack.imgur.com/xTlvI.png)
Is there any way that I can load the object and retain the filter indexes? There is code that is dependent on this, and it is causing errors when I load a canvas that has objects with filters.
I have tried applying the following upon creation of the object...
```
oImg.filters = [
false,
false,
false,
false
];;
```
It works ok when the object is created...
[](https://i.stack.imgur.com/70PWR.png)
But when it is loaded, the false indexes are removed and its the same result...
[](https://i.stack.imgur.com/xTlvI.png)<issue_comment>username_1: I managed to get this working by changing the way I applied and retrieved the filters (by type rather than index). I simply checked to see if the filter existed (by 'type' not index), then spliced the filter at the desired index.
Changed this function...
```
getFilter(index) {
  var obj = canvas.getActiveObject();
  return obj.filters[index];
}
```
to this...
```
getFilter(type) {
  var obj = canvas.getActiveObject();
  if (obj) {
    filter = null;
    obj.filters.forEach(function(f) {
      if (f.type == type) {
        filter = f;
      }
    });
    return filter;
  }
}
```
---
Changed this function...
```
applyFilter(index, filter) {
  var object = canvas.getActiveObject();
  object.filters[index] = filter;
  object.applyFilters();
  canvas.renderAll();
}
```
to this...
```
applyFilter(type, filterIndex, filter) {
  var obj = canvas.getActiveObject();
  var indexExists = false;
  var filterFound = false;
  if (obj) {
    obj.filters.forEach(function(f, i) {
      if (f.type == type) {
        filterFound = true;
        obj.filters[i] = filter;
      }
      if (filterIndex == i) {
        indexExists = true;
      }
    });
    if (!filterFound && indexExists) {
      obj.filters.splice(filterIndex, 0, filter);
    } else if (!filterFound && !indexExists) {
      obj.filters[filterIndex] = filter;
    }
  }
  obj.applyFilters();
  canvas.renderAll();
}
```
---
Changed this function...
```
applyFilterValue(index, prop, value) {
  var obj = canvas.getActiveObject();
  if (obj.filters[index]) {
    obj.filters[index][prop] = value;
    obj.applyFilters();
    canvas.renderAll();
  }
}
```
to this...
```
applyFilterValue(type, filterIndex, prop, value) {
  var obj = canvas.getActiveObject();
  if (obj) {
    obj.filters.forEach(function(f, i) {
      if (f.type == type) {
        obj.filters[i][prop] = value;
      }
    });
  }
  obj.applyFilters();
  canvas.renderAll();
}
```
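Usage would then look something like this; a sketch assuming Fabric's filter `type` values ('Grayscale', 'RemoveColor', etc.) and the slot ordering from the question:
```
// Toggle grayscale on the active object (slot 0 in the question's ordering)
applyFilter('Grayscale', 0, new fabric.Image.filters.Grayscale());

// Tweak a property of an existing RemoveColor filter (slot 2)
applyFilterValue('RemoveColor', 2, 'distance', 0.2);
```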
Upvotes: 3 [selected_answer]<issue_comment>username_2: In my current project I need the ability to customize the order of the filters. The problem with the lost indexing after `canvas.toDatalessJSON()` gave me the idea for a simple solution.
I simply build a new array of filter names every time I load the canvas.
```
const filterNamesDefault = ['Contrast', 'Saturation', 'Sharpen', 'Tint', 'invert', 'grayscale', 'Sepia'];
var filterNames = [];
var canvasLoaded = false;
```
`canvasLoaded` is a boolean that has to be set to true when a canvas has been loaded.
Then in the `applyFilterType` and `applyFilterValue` functions I provide the filter type instead of the index:
```
function filterIndex(filterType) {
  if (canvasLoaded) { return filterNames.indexOf(filterType); } else { return filterNamesDefault.indexOf(filterType); }
}

function applyFilterType(type, filter) {
  var index = filterIndex(type);
  ...
}

function applyFilterValue(type, prop, value) {
  var index = filterIndex(type);
  ...
}
```
Let's say I add some filters `['Sharpen','Tint']` that are stored in the `canvas.filters` object as usual and, after a canvas save/load, are repositioned.
Then, when I want to display the filter edit box for an image layer, I execute this:
```
var obj = canvas.getActiveObject();
filterNames = [];
obj.filters.forEach(function(f, i) {
  filterNames.push(f.type);
});
// NOTE: the loop below was truncated in the original post; this reconstruction
// appends the remaining default filter names, matching the order shown next.
for (var i = 0; i < filterNamesDefault.length; i++) {
  if (filterNames.indexOf(filterNamesDefault[i]) === -1) {
    filterNames.push(filterNamesDefault[i]);
  }
}
```
Now the new order for `filterNames` is:
```
['Sharpen', 'Tint', 'Contrast', 'Saturation', 'invert', 'grayscale', 'Sepia']
```
And it works as well with the new order of the filters.
Upvotes: 1
|
2018/03/16
| 551 | 1,857 |
<issue_start>username_0: When the page loads, the get_switch() function (created globally in app.js) is called and returns a method name. I want to execute the returned methods.
demo.js
```
const return_functions = get_switch('BTC');
function get_btc()
{
console.log('btc');
}
function get_bch()
{
console.log('bch');
}
```
app.js
```
global.get_switch=function(coin_name){
switch(coin_name){
case 'BTC':
return 'get_btc()';
break;
case 'BCH':
return 'get_bth()';
break;
default:
console.log('default');
}
}
```
As shown in the example above, I have passed BTC to get\_switch, and that function returns the get\_btc() function. I want to call get\_btc at that same time.
If this is not possible this way, please guide me with your ideas and suggest how I can do this.<issue_comment>username_1: demo.js
```
var obj = {
get_btc: function get_btc() {
console.log('btc');
},
get_bth: function get_bth() {
console.log('get_bth');
}
}
const return_functions = get_switch('BTC');
if (return_functions) {
obj[return_functions]();
}
```
app.js
```
global.get_switch = function (coin_name) {
switch (coin_name) {
case 'BTC':
return 'get_btc';
break;
case 'BCH':
return 'get_bth';
break;
default:
console.log('default');
}
}
```
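A slightly simpler variant (a sketch, not part of the accepted answer) is to return the function reference itself instead of its name, which avoids the lookup object entirely:
```
global.get_switch = function (coin_name) {
    switch (coin_name) {
        case 'BTC':
            return get_btc; // the function itself, not a string
        case 'BCH':
            return get_bch; // note: the question defines get_bch, not get_bth
        default:
            console.log('default');
    }
}
// demo.js
const fn = get_switch('BTC');
if (fn) {
    fn(); // calls get_btc immediately
}
```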
Upvotes: 2 [selected_answer]<issue_comment>username_2: ```
var get_switch=function(coin_name){
switch(coin_name){
case 'BTC':
get_btc();
break;
case 'BCH':
get_bth();
break;
default:
console.log('default');
}
}
get_switch('BTC');
function get_btc() {
console.log('btc');
}
function get_bch() {
console.log('bch');
}
```
Upvotes: 0
|
2018/03/16
| 1,573 | 4,602 |
<issue_start>username_0: Heroku is suddenly rejecting deployments of my Python app with the error:
```
remote: ImportError: cannot import name 'InsecureRequestWarning'
remote: ! Push rejected, failed to compile Python app.
```
This is occurring when deploying completely non-functional changes to what's already running fine on Heroku.
```
✗ git checkout -b testing heroku/master
Branch testing set up to track remote branch master from heroku.
Switched to a new branch 'testing'
➜ fwdform2 git:(testing) ✗ echo "Garbage" > README.md
➜ fwdform2 git:(testing) ✗ git add README.md
➜ fwdform2 git:(testing) ✗ git commit -m "Non-functional changes"
[testing 7dd95cb] Non-functional changes
1 file changed, 1 insertion(+), 275 deletions(-)
rewrite README.md (100%)
➜ fwdform2 git:(testing) ✗ git push heroku testing:master
Counting objects: 3, done.
Delta compression using up to 8 threads.
Compressing objects: 100% (2/2), done.
Writing objects: 100% (3/3), 288 bytes | 288.00 KiB/s, done.
Total 3 (delta 1), reused 0 (delta 0)
remote: Compressing source files... done.
remote: Building source:
remote:
remote: -----> Python app detected
remote: -----> Installing dependencies with latest Pipenv…
remote: Traceback (most recent call last):
remote: File "/app/.heroku/python/bin/pipenv", line 11, in <module>
remote: sys.exit(cli())
remote: File "/tmp/build_ffaa2c482e67c8c4daeb9b3348ef030d/.heroku/python/lib/python3.6/site-packages/pipenv/vendor/click/core.py", line 722, in __call__
remote: return self.main(*args, **kwargs)
remote: File "/tmp/build_ffaa2c482e67c8c4daeb9b3348ef030d/.heroku/python/lib/python3.6/site-packages/pipenv/vendor/click/core.py", line 697, in main
remote: rv = self.invoke(ctx)
remote: File "/tmp/build_ffaa2c482e67c8c4daeb9b3348ef030d/.heroku/python/lib/python3.6/site-packages/pipenv/vendor/click/core.py", line 1063, in invoke
remote: Command.invoke(self, ctx)
remote: File "/tmp/build_ffaa2c482e67c8c4daeb9b3348ef030d/.heroku/python/lib/python3.6/site-packages/pipenv/vendor/click/core.py", line 895, in invoke
remote: return ctx.invoke(self.callback, **ctx.params)
remote: File "/tmp/build_ffaa2c482e67c8c4daeb9b3348ef030d/.heroku/python/lib/python3.6/site-packages/pipenv/vendor/click/core.py", line 535, in invoke
remote: return callback(*args, **kwargs)
remote: File "/tmp/build_ffaa2c482e67c8c4daeb9b3348ef030d/.heroku/python/lib/python3.6/site-packages/pipenv/vendor/click/decorators.py", line 17, in new_func
remote: return f(get_current_context(), *args, **kwargs)
remote: File "/app/.heroku/python/lib/python3.6/site-packages/pipenv/cli.py", line 62, in cli
remote: from . import core
remote: File "/app/.heroku/python/lib/python3.6/site-packages/pipenv/core.py", line 30, in <module>
remote: from requests.packages.urllib3.exceptions import InsecureRequestWarning
remote: ImportError: cannot import name 'InsecureRequestWarning'
remote: ! Push rejected, failed to compile Python app.
remote:
remote: ! Push failed
remote: Verifying deploy...
remote:
remote: ! Push rejected to fwdform2.
remote:
To https://git.heroku.com/fwdform2.git
! [remote rejected] testing -> master (pre-receive hook declined)
error: failed to push some refs to 'https://git.heroku.com/fwdform2.git'
```
It looks like Heroku's Python buildpack, specifically pipenv, is broken. Any ideas?
**Python version (runtime.txt):**
3.6.4<issue_comment>username_1: Can you please add **python --version** to this workflow and tell us what it reports? I suspect that it is Python < 2.7.9.
See [Suppress InsecureRequestWarning: Unverified HTTPS request is being made in Python2.6](https://stackoverflow.com/questions/27981545/suppress-insecurerequestwarning-unverified-https-request-is-being-made-in-pytho)
Upvotes: 0 <issue_comment>username_2: There was a bug with [Pipenv and Heroku's buildpack](https://github.com/heroku/heroku-buildpack-python/issues/661), which has since been rectified.
If anyone runs into a similar issue in the future, you can fork [Heroku's buildpack](https://github.com/heroku/heroku-buildpack-python) and alter the pinned Pipenv version, or make any other necessary changes.
Push your fork, then set up your application to use your forked buildpack, e.g.
```
heroku buildpacks:set https://github.com/Benjamin-Dobell/heroku-buildpack-python.git#7e0da719744f0f5185a624d38877effa142d6639
```
<NAME> of Salesforce/Heroku also suggested the use of the following commands to purge any Pipenv/buildpack related issues:
```
heroku plugins:install heroku-repo
heroku repo:purge_cache
```
Upvotes: 2 [selected_answer]
|
2018/03/16
| 406 | 1,197 |
<issue_start>username_0: So many threads already talk about returning the longest list in a list of lists by
```
max(sorted(lst_of_lsts, key = len))
```
But this throws an error if lst\_of\_lsts is an empty list.
>
>
> ```
> max(sorted(lst_of_lsts, key = len)) ValueError: max() arg is an empty sequence
>
> ```
>
>
Is there a version with support for empty lists? It would return an empty list.
Thanks<issue_comment>username_1: You can add the list by `[[]]`:
```
max(lst_of_lsts + [[]], key = len)
```
Upvotes: 2 <issue_comment>username_2: Do you want something like this?
```
lst_of_lsts = [[], []]
lst = sorted(lst_of_lsts, key = len)[0]
if lst:
print(max(lst))
else:
print() #whatever
```
Upvotes: 1 <issue_comment>username_3: First of all, your original is incorrect. `max` does not require a sorted argument, and even if you do sort it, you still have to pass `key` to `max`. Here is an example of what happens if you don't: <https://ideone.com/TMJyGK>.
The easiest thing I can think of is to just check if your list is empty:
```
max(lst_of_lst if lst_of_lst else [[]], key=len)
```
OR
```
max(lst_of_lst, key=len) if lst_of_lst else []
```
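A quick check of the behaviour (a minimal sketch):
```
lst_of_lsts = []
print(max(lst_of_lsts, key=len) if lst_of_lsts else [])  # []
lst_of_lsts = [[1], [2, 3], []]
print(max(lst_of_lsts, key=len) if lst_of_lsts else [])  # [2, 3]
```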
Upvotes: 3 [selected_answer]
|
2018/03/16
| 682 | 2,705 |
<issue_start>username_0: I'm using Application Insights with API Management to monitor my APIs. Application Insights is a great tool, but I'm not able to see the body.
I want to see the POST request body parameters. Is there any way to add body data to Application Insights?<issue_comment>username_1: >
> I can't do code changes in existing api. any option without code change would help me
>
>
>
Unfortunately, it is not supported by Application Insights.
I also found this [feedback](https://feedback.azure.com/forums/357324-application-insights/suggestions/13949925-add-support-for-viewing-raw-body-requests); you could vote for it.
It now supports a [custom Telemetry Initializer](https://stackoverflow.com/questions/42686363/view-post-request-body-in-application-insights), as I have shown you.
Upvotes: 3 [selected_answer]<issue_comment>username_2: This is not supported at the moment. APIM does not expose Telemetry Initializers to the customer; instead it will provide custom options on the diagnostic entity that let you control sampling, verbosity and the ability to log headers. Logging the body is still being debated: would the user want to see the response body returned from the backend, or the one sent to the client? The body can be modified at different stages. Alternatively, you can use Log-To-EventHub, which can be placed at specific points of the pipeline. Another idea being considered is Log-To-ApplicationInsights.
Note: adding more data to Application Insights takes a hit on APIM performance.
Upvotes: 0 <issue_comment>username_3: The simplest way (ok, the only way I've gotten it to work) would be to log the body yourself using the AppInsights SDK.
According to [GitHub](https://github.com/microsoft/ApplicationInsights-aspnetcore/issues/940#issuecomment-513297006), the more recent AppInsights SDKs have initializers running *after* processing when the stream is closed.
Upvotes: 0 <issue_comment>username_4: You have to configure API Management to log the request payload to Application Insights. See here: [Enable Application Insights logging for your API](https://learn.microsoft.com/en-us/azure/api-management/api-management-howto-app-insights#enable-application-insights-logging-for-your-api)
1. Navigate to your Azure API Management service instance in the Azure portal.
2. Select APIs from the menu on the left.
3. Click on your API.
4. Go to the Settings tab from the top bar.
5. Scroll down to the Diagnostics Logs section.
6. Check the Enable box.
7. Select your attached logger in the Destination dropdown.
8. Input 100 as Sampling (%) and tick the Always log errors checkbox.
9. Under Additional settings, configure up to 8192 bytes of the payload to be logged.
10. Click Save.
Upvotes: 3
|
2018/03/16
| 3,509 | 11,185 |
<issue_start>username_0: I have 2 servers, one which has magento 2 installed (ip - 172.16.31.10 port 80) and Varnish on another (ip - 192.168.127.12 port 80)
My Varnish Configuration:
File /etc/default/varnish:-
```
DAEMON_OPTS="-a :80 \
-T 127.0.0.1:6082 \
-b 172.16.31.10:80 \
-f /etc/varnish/default.vcl \
-S /etc/varnish/secret \
-s malloc,256m"
```
netstat -tulpn :-
```
tcp 0 0 0.0.0.0:22 0.0.0.0:* LISTEN 1288/sshd
tcp 0 0 127.0.0.1:6082 0.0.0.0:* LISTEN 11115/varnishd
tcp 0 0 0.0.0.0:80 0.0.0.0:* LISTEN 11115/varnishd
tcp6 0 0 :::22 :::* LISTEN 1288/sshd
tcp6 0 0 :::80 :::* LISTEN 11115/varnishd
```
/etc/varnish/default.vcl : -
```
# VCL version 5.0 is not supported so it should be 4.0 even though actually used Varnish version is 5
vcl 4.0;
import std;
# The minimal Varnish version is 5.0
# For SSL offloading, pass the following header in your proxy server or load balancer: 'X-Forwarded-Proto: https'
backend default {
.host = "172.16.31.10";
.port = "80";
.first_byte_timeout = 600s;
.probe = {
.url = "/pub/health_check.php";
.timeout = 2s;
.interval = 5s;
.window = 10;
.threshold = 5;
}
}
acl purge {
"192.168.127.12";
"127.0.0.1";
"localhost";
}
sub vcl_recv {
if (req.method == "PURGE") {
if (client.ip !~ purge) {
return (synth(405, "Method not allowed"));
}
# To use the X-Pool header for purging varnish during automated deployments, make sure the X-Pool header
# has been added to the response in your backend server config. This is used, for example, by the
# capistrano-magento2 gem for purging old content from varnish during it's deploy routine.
if (!req.http.X-Magento-Tags-Pattern && !req.http.X-Pool) {
return (synth(400, "X-Magento-Tags-Pattern or X-Pool header required"));
}
if (req.http.X-Magento-Tags-Pattern) {
ban("obj.http.X-Magento-Tags ~ " + req.http.X-Magento-Tags-Pattern);
}
if (req.http.X-Pool) {
ban("obj.http.X-Pool ~ " + req.http.X-Pool);
}
return (synth(200, "Purged"));
}
if (req.method != "GET" &&
req.method != "HEAD" &&
req.method != "PUT" &&
req.method != "POST" &&
req.method != "TRACE" &&
req.method != "OPTIONS" &&
req.method != "DELETE") {
/* Non-RFC2616 or CONNECT which is weird. */
return (pipe);
}
# We only deal with GET and HEAD by default
if (req.method != "GET" && req.method != "HEAD") {
return (pass);
}
# Bypass shopping cart, checkout and search requests
if (req.url ~ "/checkout" || req.url ~ "/catalogsearch") {
return (pass);
}
# Bypass health check requests
if (req.url ~ "/pub/health_check.php") {
return (pass);
}
# Set initial grace period usage status
set req.http.grace = "none";
# normalize url in case of leading HTTP scheme and domain
set req.url = regsub(req.url, "^http[s]?://", "");
# collect all cookies
std.collect(req.http.Cookie);
# Compression filter. See https://www.varnish-cache.org/trac/wiki/FAQ/Compression
if (req.http.Accept-Encoding) {
if (req.url ~ "\.(jpg|jpeg|png|gif|gz|tgz|bz2|tbz|mp3|ogg|swf|flv)$") {
# No point in compressing these
unset req.http.Accept-Encoding;
} elsif (req.http.Accept-Encoding ~ "gzip") {
set req.http.Accept-Encoding = "gzip";
} elsif (req.http.Accept-Encoding ~ "deflate" && req.http.user-agent !~ "MSIE") {
set req.http.Accept-Encoding = "deflate";
} else {
# unkown algorithm
unset req.http.Accept-Encoding;
}
}
# Remove Google gclid parameters to minimize the cache objects
set req.url = regsuball(req.url,"\?gclid=[^&]+$",""); # strips when QS = "?gclid=AAA"
set req.url = regsuball(req.url,"\?gclid=[^&]+&","?"); # strips when QS = "?gclid=AAA&foo=bar"
set req.url = regsuball(req.url,"&gclid=[^&]+",""); # strips when QS = "?foo=bar&gclid=AAA" or QS = "?foo=bar&gclid=AAA&bar=baz"
# Static files caching
if (req.url ~ "^/(pub/)?(media|static)/") {
# Static files should not be cached by default
return (pass);
# But if you use a few locales and don't use CDN you can enable caching static files by commenting previous line (#return (pass);) and uncommenting next 3 lines
#unset req.http.Https;
#unset req.http.X-Forwarded-Proto;
#unset req.http.Cookie;
}
return (hash);
}
sub vcl_hash {
if (req.http.cookie ~ "X-Magento-Vary=") {
hash_data(regsub(req.http.cookie, "^.*?X-Magento-Vary=([^;]+);*.*$", "\1"));
}
# For multi site configurations to not cache each other's content
if (req.http.host) {
hash_data(req.http.host);
} else {
hash_data(server.ip);
}
# To make sure http users don't see ssl warning
if (req.http.X-Forwarded-Proto) {
hash_data(req.http.X-Forwarded-Proto);
}
}
sub vcl_backend_response {
set beresp.grace = 3d;
if (beresp.http.content-type ~ "text") {
set beresp.do_esi = true;
}
if (bereq.url ~ "\.js$" || beresp.http.content-type ~ "text") {
set beresp.do_gzip = true;
}
if (beresp.http.X-Magento-Debug) {
set beresp.http.X-Magento-Cache-Control = beresp.http.Cache-Control;
}
# cache only successfully responses and 404s
if (beresp.status != 200 && beresp.status != 404) {
set beresp.ttl = 0s;
set beresp.uncacheable = true;
return (deliver);
} elsif (beresp.http.Cache-Control ~ "private") {
set beresp.uncacheable = true;
set beresp.ttl = 86400s;
return (deliver);
}
# validate if we need to cache it and prevent from setting cookie
# images, css and js are cacheable by default so we have to remove cookie also
if (beresp.ttl > 0s && (bereq.method == "GET" || bereq.method == "HEAD")) {
unset beresp.http.set-cookie;
}
# If page is not cacheable then bypass varnish for 2 minutes as Hit-For-Pass
if (beresp.ttl <= 0s ||
beresp.http.Surrogate-control ~ "no-store" ||
(!beresp.http.Surrogate-Control &&
beresp.http.Cache-Control ~ "no-cache|no-store") ||
beresp.http.Vary == "*") {
# Mark as Hit-For-Pass for the next 2 minutes
set beresp.ttl = 120s;
set beresp.uncacheable = true;
}
return (deliver);
}
sub vcl_deliver {
if (resp.http.X-Magento-Debug) {
if (resp.http.x-varnish ~ " ") {
set resp.http.X-Magento-Cache-Debug = "HIT";
set resp.http.Grace = req.http.grace;
} else {
set resp.http.X-Magento-Cache-Debug = "MISS";
}
} else {
unset resp.http.Age;
}
# Not letting browser to cache non-static files.
if (resp.http.Cache-Control !~ "private" && req.url !~ "^/(pub/)?(media|static)/") {
set resp.http.Pragma = "no-cache";
set resp.http.Expires = "-1";
set resp.http.Cache-Control = "no-store, no-cache, must-revalidate, max-age=0";
}
unset resp.http.X-Magento-Debug;
unset resp.http.X-Magento-Tags;
unset resp.http.X-Powered-By;
unset resp.http.Server;
unset resp.http.X-Varnish;
unset resp.http.Via;
unset resp.http.Link;
}
sub vcl_hit {
if (obj.ttl >= 0s) {
# Hit within TTL period
return (deliver);
}
if (std.healthy(req.backend_hint)) {
if (obj.ttl + 300s > 0s) {
# Hit after TTL expiration, but within grace period
set req.http.grace = "normal (healthy server)";
return (deliver);
} else {
# Hit after TTL and grace expiration
return (miss);
}
} else {
# server is not healthy, retrieve from cache
set req.http.grace = "unlimited (unhealthy server)";
return (deliver);
}
}
```
Now the issue is: when I open the URL `172.16.31.10`, Magento opens but it's not getting cached in Varnish. When I call the Varnish URL `192.168.127.12`, it redirects to my Magento URL `172.16.31.10`. My Varnish log shows that the page is already cached, but Magento is not being served from that Varnish cache.<issue_comment>username_1: This is the expected behaviour.
Varnish sits in front of your Magento server: if you hit your Magento server directly at 172.16.31.10, bypassing Varnish, Magento will respond without involving Varnish; if you hit Varnish instead, Varnish will call Magento and cache the response. The other way around is not possible and makes no sense.
Upvotes: 0 <issue_comment>username_2: In order for the cache to work, your requests always have to go through the proxy/cache (`Varnish`).
`Varnish` evaluates your requests and decides if the request is already cached (then returns it from the cache) or not - then it will redirect it to `Magento` and cache the response, before returning it to the client.
That's how most caches work. You can read in detail how `Varnish` cache mechanism works [here](http://devdocs.magento.com/guides/v2.0/config-guide/varnish/use-varnish-cache-how.html).
If you want to hit your cache, you always have to go through `Varnish` `(192.168.127.12:80)`.
Consider this diagram form the official [documentation](http://devdocs.magento.com/guides/v2.0/config-guide/varnish/config-varnish.html):
[](https://i.stack.imgur.com/tZ5z0l.png)
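To verify this from the command line, a minimal sketch (note: the `X-Magento-Cache-Debug` header is only set by this VCL when the backend sends `X-Magento-Debug`, so treat that header name as an assumption):
```
# the first request warms the cache; a repeat request should be a HIT
curl -sI http://192.168.127.12/ | grep -i 'x-magento-cache-debug'
curl -sI http://192.168.127.12/ | grep -i 'x-magento-cache-debug'
```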
Upvotes: 2 <issue_comment>username_3: This solution worked for me to configure Varnish and Magento on different servers.

Varnish server: xxx.xxx.xxx.xxx port 80
Magento server: yyy.yyy.yyy.yyy port 80

Changes that need to be made on the Varnish server:

1. Log in to the Varnish server.
2. Go to the file /etc/varnish/default.vcl.
3. Under "backend default" update:
```
.host = "yyy.yyy.yyy.yyy"; // use the Magento server IP for better network
.port = "80"; // the Magento web server port
```
4. Restart Varnish (systemctl restart varnish).

Note: kindly use the default VCL that is generated during the Varnish installation, and don't replace it with the Magento-generated VCL for Varnish (available from the Magento admin).

Changes that need to be made on the Magento server:

1. Log in to the Magento server.
2. Go to the env.php file located in the app/etc directory.
3. Update the values in:
```
'http_cache_hosts' => [
    [
        'host' => 'xxx.xxx.xxx.xxx', // the Varnish server public IP
        'port' => '80' // the Varnish server port
    ]
]
```

Now update your base URLs in the core_config_data table to the Varnish public IP (http://xxx.xxx.xxx.xxx/) and flush the Magento caches (bin/magento ca:fl).
Upvotes: 0
|
2018/03/16
| 377 | 1,306 |
<issue_start>username_0: I am doing a course on data structures and algorithms. There is this code written here with a function `mf()` that returns `int old`. My question is: how can it return `old`, since `int old` is a local variable which will be destroyed at the time of return?
```
class x{
public:
int m;
int mf(int v){int old = m; m = v; return old; }
};
```<issue_comment>username_1: The function returns the value of `old`, not the variable itself. It is returning a copy of the value.
Upvotes: 3 [selected_answer]<issue_comment>username_2: C/C++ will pass the value of the return variable to a temporary value:
```
class x{
public:
int m;
int mf(int v){
int old = m; // store member var m in old.
m = v; // change m to v.
return old; // return the old value of m.
// tmp = old;
}
};
x foo;
foo.m = 5;
int n = foo.mf(3); // n receives the temporary copy of old, so n is now 5
```
You can observe the copy from `old` to `tmp` and the destruction of `old` and `tmp` when you return a class and print a message in the constructor and destructor.
FYI, the compiler can use [return value optimization (RVO)](https://en.wikipedia.org/wiki/Copy_elision#Return_value_optimization) to store the return value directly in `n`.
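A minimal sketch of that experiment (hypothetical `Tracer` type; with RVO or move semantics the compiler may elide some of these messages):
```
#include <iostream>

struct Tracer {
    int value;
    Tracer(int v) : value(v) { std::cout << "ctor\n"; }
    Tracer(const Tracer& t) : value(t.value) { std::cout << "copy ctor\n"; }
    ~Tracer() { std::cout << "dtor\n"; }
};

Tracer make() {
    Tracer local(42); // local variable
    return local;     // its value is copied (or moved/elided) out
}

int main() {
    Tracer t = make(); // t still holds 42 after 'local' is destroyed
    std::cout << t.value << "\n";
}
```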
Upvotes: 0
|
2018/03/16
| 737 | 2,388 |
<issue_start>username_0: When converting the properties to JSON, extra backslashes are added to the escaped characters. How do I avoid this? See the code below.
Input File (sample.properties)
```
property.key.CHOOSE=\<KEY>
```
Code
```
import json
def convertPropertiesToJson(fileName, outputFileName, sep='=', comment_char='#'):
props = {}
with open(fileName, "r") as f:
for line in f:
l = line.strip()
if l and not l.startswith(comment_char):
innerProps = {}
keyValueList = l.split(sep)
key = keyValueList[0].strip()
keyList = key.split('.')
value = sep.join(keyValueList[1:]).strip()
if keyList[1] not in props:
props[keyList[1]] = {}
innerProps[keyList[2]] = value
props[keyList[1]].update(innerProps)
with open(outputFileName, 'w') as outfile:
json.dump(props, outfile)
convertPropertiesToJson("sample.properties", "sample.json")
```
Output: (sample.json)
```
{"key": {"CHOOSE": <KEY>"}}
```
Expected Result:
```
{"key": {"CHOOSE": "\<KEY>"}}
```<issue_comment>username_1: The problem seems to be that you have saved unicode characters which are represented as escaped strings. You should decode them at some point.
Changing
```
l = line.strip()
```
to (for Python 2.x)
```
l = line.strip().decode('unicode-escape')
```
to (for Python 3.x)
```
l = line.strip().encode('ascii').decode('unicode-escape')
```
gives the desired output:
```
{"key": {"CHOOSE": "\<KEY>"}}
```
Upvotes: 0 <issue_comment>username_2: I don't know the solution to your problem, but I found out where the problem occurs.
```
with open('sample.properties', encoding='utf-8') as f:
for line in f:
print(line)
print(repr(line))
d = {}
d['line'] = line
print(d)
out:
property.key.CHOOSE=\u9078\u629e
'property.key.CHOOSE=\\u9078\\u629e'
{'line': 'property.key.CHOOSE=\\u9078\\u629e'}
```
I don't know why adding it to a dictionary shows the repr of the string.
Upvotes: 0 <issue_comment>username_3: The problem is the input is read as-is, and `\u` is copied literally as two characters. The easiest fix is probably this:
```
with open(fileName, "r", encoding='unicode-escape') as f:
```
This will decode the escaped unicode characters.
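A quick demonstration of the decoding step on its own (the escaped characters match the debug output above):
```
s = 'property.key.CHOOSE=\\u9078\\u629e'  # what the file literally contains
print(s.encode('ascii').decode('unicode-escape'))
# property.key.CHOOSE=選択
```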
Upvotes: 2
|
2018/03/16
| 680 | 2,130 |
<issue_start>username_0: [](https://i.stack.imgur.com/XXm3W.png)
In the above screenshot console I have an object with 2 values, **users** and **tickers**. Each one is in turn an array of values.
How do I display these values in an Angular 5 HTML template as shown in the above screenshot?
I am trying to use ngFor but it is showing errors.<issue_comment>username_1: 1. You can use the json pipe for debugging purposes like this:
```
{{object |json}}
```
2. If you want it exactly as in the picture, look at this [solution](https://stackoverflow.com/a/49276529/5677886). In that case, you don't need to manually write the first-level property names of the object in the template when using **\*ngFor**
Upvotes: 0 <issue_comment>username_2: You can use it like this:
```
<h3>Users</h3>
<ul>
  <li *ngFor="let item of object.users; let i = index">{{ i }}. {{ item.first_name }}</li>
</ul>
<h3>Tickets</h3>
<ul>
  <li *ngFor="let item of object.tickers; let i = index">{{ i }}. {{ item.name }}</li>
</ul>
```
Upvotes: 0 <issue_comment>username_3: According to the [documentation](https://angular.io/api/common/NgForOf) it should be this:
```
Users:
<ol>
  <li *ngFor="let user of object.users">{{ user.first_name }}</li>
</ol>
Tickets:
<ol>
  <li *ngFor="let ticket of object.tickers">{{ ticket.name }}</li>
</ol>
```
Upvotes: 0 <issue_comment>username_4: Suppose this is your data:
```
public data = {
tickers:[
{id:"1",name:"Ticker Name 1", company:"Company 1"},
{id:"2",name:"Ticker Name 2", company:"comapny 2"}
],
users:[
{id:"1",first_name:"User1 ", last_name:"u1last", email:"<EMAIL>"},
{id:"2",first_name:"User2", last_name:"u2last", email:"<EMAIL>"},
{id:"3",first_name:"User3", last_name:"u3last", email:"<EMAIL>"},
{id:"4",first_name:"User4", last_name:"u4last", email:"<EMAIL>"}
]
};
public dataKeys; // This will hold the main Object Keys.
```
The constructor will look something like this:
```
constructor() {
this.dataKeys = Object.keys(this.data);
}
```
Here is the simple HTML that you need to write:
```
<div *ngFor="let key of dataKeys">
  <h3>{{ key }}</h3>
  <ul>
    <li *ngFor="let d of data[key]">{{ d.name || d.first_name }}</li>
  </ul>
</div>
```
Here is the complete working plunker for your case:
[Click here to view the Working Solution](https://embed.plnkr.co/QVATF7NiNKmdKKiG0DNm/)
Upvotes: 2 [selected_answer]
|
2018/03/16
| 873 | 2,838 |
<issue_start>username_0: I'm looking for a way to zoom in/out on my Phaser app depending on the screen size while keeping the ratio (and not altering the canvas size pixel-wise as shown on the sketch). I tried many snippets, but everybody seems to be looking for something else; this is what I'm after (also, the code below, where the screen gets "full screened", works on desktop but not on mobile):
[](https://i.stack.imgur.com/WnG0U.png)
```js
var game = new Phaser.Game(1100,600, Phaser.Canvas,"gameDiv");
var mainState = {
init: function () {
game.renderer.renderSession.roundPixels = true;
game.physics.startSystem(Phaser.Physics.ARCADE);
game.physics.arcade.gravity.y = 800;
game.physics.arcade.gravity.x = -10850;
},
preload: function(){
//Loads background imagery
game.load.image(`background`, "assets/background_1877_600.png");
},
create: function(){
// function to scale up the game to full screen
game.scale.pageAlignHorizontally = true;
game.scale.pageAlignVertically = true;
game.scale.fullScreenScaleMode = Phaser.ScaleManager.SHOW_ALL;
game.input.onDown.add(gofull, this);
function gofull() {
if (game.scale.isFullScreen)
{
//game.scale.stopFullScreen();
}
else
{
game.scale.startFullScreen(true);
}
}
background = game.add.tileSprite(0,0,1877,600,`background`);
},
update: function(){
background.tilePosition.x -= 0.25;
}
}
game.state.add(`mainState`,mainState);
game.state.start(`mainState`);
```
```html
<!DOCTYPE html>
<html>
<head>
  <title>Agame</title>
  <style>
    html, body {
      overflow: hidden;
    }
  </style>
</head>
<body>
  <div id="gameDiv"></div>
</body>
</html>
```
If you have any idea how I could achieve this, let me know.<issue_comment>username_1: Can you do something like this at the start?
```
var x = 1100;
var y = 600;
// window.onresize = function(event) {
// }
var w = window.screen.width;
if(w > 900){
//ok
}
else if(w > 600){
x = 600;
y = 400;
}
else {
x = 300;
y = 100;
}
var game = new Phaser.Game(x,y, Phaser.Canvas,"gameDiv");
```
This will only work when you load the page on a new device, but you can add a screen-resize event to resize while the screen changes; it's up to you (see the sketch below).
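A sketch of that resize idea (assuming Phaser 2's ScaleManager, where `setGameSize` is available):
```
window.onresize = function () {
    var w = window.innerWidth;
    var x = w > 900 ? 1100 : (w > 600 ? 600 : 300);
    var y = w > 900 ? 600 : (w > 600 ? 400 : 100);
    game.scale.setGameSize(x, y); // resizes the game without reloading
};
```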
Upvotes: 3 [selected_answer]<issue_comment>username_2: I guess the best idea would be like:
```
html, body{
width: 100vw;
width: 100%; /*FallBack Width */
height: 100vh;
height: 100%; /*Same Here*/
}
.required-element{
font-size: 10vh; /*Or required */
}
/* This will change according to the screen size.
   But be sure to add the viewport meta tag to the HTML. */
```
Upvotes: 0
|
2018/03/16
| 654 | 2,225 |
<issue_start>username_0: Consider I'm a hotel owner, and I have to know the bookings related to my hotel.
I need all the bookings associated with my hotel.
Are there any Eloquent functions to join these tables? How?
How do I get the bookings of a hotel for a given owner (user id) using Eloquent relationship functions?
<issue_comment>username_1: Define the relationship with user in model
```
class Booking extends Model{
/**
* Get the user that owns the booking.
*/
public function owner()
{
return $this->belongsTo(App\User::class, 'user_id', 'id');
}
}
```
Then in your controller
```
$bookings = Booking::with('owner')->get(); // get() returns the booking list with owners eager-loaded
foreach($bookings as $booking){
echo $booking->owner->name; // access fields through the relation name, owner
}
```
Upvotes: 0 <issue_comment>username_2: **Hotel.php**
```
class Hotel extends Model
{
public function userId(){
return $this->belongsTo(User::class,'user_id');
}
public function rooms(){
return $this->hasMany(Room::class,'hotel_id','id');
}
}
```
**Room.php**
```
class Room extends Model
{
public function hotelId(){
return $this->belongsTo(Hotel::class,'hotel_id');
}
public function bookings(){
return $this->hasMany(Booking::class,'room_id','id');
}
}
```
**User.php**
```
class User extends Model
{
public function hotels(){
return $this->hasMany(Hotel::class,'user_id','id');
}
}
```
**Add the code below in your respective controller.**
```
$user = \App\User::find(Auth::id());
$hotels = $user->hotels;
foreach ($hotels as $hotel) {
$rooms = $hotel->rooms;
foreach ($rooms as $room) {
print_r($room->bookings);
}
}
```
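An alternative sketch using Laravel's `hasManyThrough` (assuming the same `hotel_id`/`room_id` foreign keys as above), which avoids the nested loops:
```
// In Hotel.php: a hotel's bookings, reached through its rooms
public function bookings(){
    return $this->hasManyThrough(Booking::class, Room::class, 'hotel_id', 'room_id');
}

// In the controller
foreach (\App\User::find(Auth::id())->hotels as $hotel) {
    print_r($hotel->bookings);
}
```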
Upvotes: 1 <issue_comment>username_3: Finally I found this.
Booking model
```
public function room()
{
    return $this->belongsTo('App\Room');
}
public function hotel()
{
return $this->room()->with(['hotel'=> function($query){
$query->with('user');
}]);
}
```
BookingController
```
public function owner(Booking $booking) {
    return $booking->with('hotel')->get();
}
```
The next thing is to filter by owner.
Upvotes: 0
|
2018/03/16
| 472 | 1,517 |
<issue_start>username_0: I have a problem: when I click on the button, the table row is not deleted. The **tr id** and **button id** are both the same value. Please tell me where I am going wrong.
```
<tr id="1">
  <td></td>
  <td><button class="button_remove" id="1">X</button></td>
</tr>
```
*Jquery code*
```
$(document).ready(function() {
$('.button_remove').click(function() {
var btn_id = $(this).attr("id");
$('#' + btn_id + '').remove();
});
});
```
Thanks in advance<issue_comment>username_1: `id`s need to be unique per element if you are going to use them in `jQuery`, and hence your code is not working (as the `tr` and the `button` share the same value for `id`).
Simplify your code with [closest()](https://api.jquery.com/closest/):
```
$(document).ready(function(){
$('.button_remove').click(function(){
$(this).closest('tr').remove();
});
});
```
Note: remove the `id` from both the `tr` and the `button`, as it's not needed at all. It will make your HTML structure correct and cleaner.
Upvotes: 3 [selected_answer]<issue_comment>username_2: That's because you are using the same id for the `tr` and the `button`. Try using `data-attributes` here:
```
<tr id="1">
  <td><button class="button_remove" data-row="1">X</button></td>
</tr>
```
And then use jQuery like this
```
$(document).ready(function() {
$('.button_remove').click(function() {
var btn_id = $(this).data("row");
$('#' + btn_id + '').remove();
});
});
```
Upvotes: 0 <issue_comment>username_3: ```js
$(document).ready(function() {
$('.button_remove').click(function() {
$(this).parents().eq(1).remove();
});
});
```
```html
<table>
  <tr>
    <td></td>
    <td><button class="button_remove">X</button></td>
  </tr>
  <tr>
    <td></td>
    <td><button class="button_remove">X</button></td>
  </tr>
</table>
```
Upvotes: 0
|
2018/03/16
| 515 | 1,891 |
<issue_start>username_0: OK, so for the sake of simplicity let's say I have 3 components: a parent component and two other components (child A and child B).
I want the parent component to **have a URL prefix of '/parent' and contain one of the two other components**, component A by default, else B; these have their own URL prefixes of **'childA' and 'childB'** respectively. Also, I do not want the parent component to be viewable by itself: it must have either childA or childB visible, and **if '/parent' is called it will automatically re-route to '/parent/childA'.**
So far this is what I have in my router, and it is not working. I get an invalid path error on the console when I route to '/parent', and when I route to '/parent/child[AorB]' my browser lags forever and never routes:
```
{
path: 'parent', component: ParentComponent,
children : [
{
path: '',
redirectTo: 'childA'
},
{
path: 'childA',
component: ChildAComponent
}, {
path: 'childB',
component: childBComponent
}
]
}
```
parent's template is just a router-outlet like so:
```
<router-outlet></router-outlet>
```<issue_comment>username_1: Needed to add **pathMatch: 'full'** like so:
```
{
path: 'parent', component: ParentComponent,
children : [
{
path: '',
redirectTo: 'childA',
pathMatch: 'full' //<--------- Need this here
},
{
path: 'childA',
component: ChildAComponent
}, {
path: 'childB',
component: childBComponent
}
]
}
```
Upvotes: 3 [selected_answer]<issue_comment>username_2: Make sure you have 2 **router-outlet** elements defined in your HTML, so Angular knows which is the parent and which is the child.
What does the code for your links look like?
For the children, use './', like so:
```
[routerLink]="['./childA']"
```
For parent, you can go directly...
```
[routerLink]="['parent']"
```
Upvotes: 0
|
2018/03/16
| 303 | 1,123 |
<issue_start>username_0: I don't know whether Xcode 8/9 can support old Objective-C functions and projects.
My project is almost complete, but some functions such as `UIDocumentBrowserViewController` are not supported in iOS 9.0. Should I upgrade only the iOS version, or Xcode as well? Please help me understand this part of iOS development.
thanks in advance
|
2018/03/16
| 456 | 1,699 |
<issue_start>username_0: RPA-Blueprism - Global Send Keys - Input text which has '(' is not working. Only the text gets entered without brackets, please assist.
Ex: 'Paste (Text)' is the text I want to send, however the text is sent to the field as 'Paste Text'<issue_comment>username_1: You can use "Paste {(}Text{)}".
Upvotes: 1 <issue_comment>username_2: **Global Send Keys** internally uses **System.Windows.Forms.SendKeys** so characters like braces or parentheses must be escaped.
One way of doing it is using a **Calculation** stage with several **Replace** instructions, replacing the problematic characters with the escaped version.
For more information about **System.Windows.Forms.SendKeys** check the following MSDN link
<https://msdn.microsoft.com/en-us/library/system.windows.forms.sendkeys(v=vs.110).aspx>
Upvotes: 2 <issue_comment>username_3: RPA-Blueprism - Global Send Keys for '(' then
```
+{9} - key down the Shift button, press 9, then release the Shift key; indirectly it does SHIFT + 9 (the leading + is the SendKeys code for Shift)
```
Consider: if you need to type "(", the manual keystroke would be SHIFT + 9.
So if you have to send a Global Send Keys event for the same, use the code above.
You can refer to the link below for more understanding:
<https://seleniumsuite.blog/2017/11/07/global-send-keyglobal-send-key-events-in-blueprism/>
NOTE: before sending any Global Send Keys event, the application must be activated first.
Upvotes: 0 <issue_comment>username_4: You can use the "utility-string" VBO and call the "Escape special character string" action. For example, if you pass `"paste(text)"` as input, it will give `"paste{(}text{)}"` as output. You can then use the output data item for the Global Send Keys action.
Upvotes: 0
|
2018/03/16
| 464 | 1,685 |
<issue_start>username_0: We have a git history that looks like this. This is already pushed to remote but can I possibly merge the branches at certain commits? Or am I limited to only merging the commit at the very top?
[](https://i.stack.imgur.com/i8pTR.png)
where blue is origin/master and magenta is a feature branch that has been physically copied into the master branch.<issue_comment>username_1: Being in the master branch, run:
```
git merge "commit-id"
```
This should do it.
Upvotes: 2 <issue_comment>username_2: If possible, a merge between the two branch HEADs is preferable, as it will add a new commit (a merge commit) to the master branch, allowing for an easy (fast-forward) push.
Anything else might change the history, making you do a `git push --force` (which can be problematic if you have other collaborators using the repo)
Upvotes: 2 <issue_comment>username_3: Assume the commit history as below, and you want to merge `feature` branch into commit `B`:
```
...---A---B---C---D master
...---E---F---G---H feature
```
Then you can execute below commands:
```
git checkout commitB
git merge feature --allow-unrelated-histories
```
Assume the merge commit id commit `M` as below commit history:
```
C---D master
/
...---A---B---M
/
...---E---F---G---H feature
```
Then you can execute the commands:
```
git rebase --onto commitM commitB master
git push origin master -f
```
And now the commit history will be:
```
...---A---B---M---C'---D' master
/
...---E---F---G---H feature
```
Upvotes: 4 [selected_answer]
|
2018/03/16
| 700 | 2,512 |
<issue_start>username_0: > Exception 'yii\base\InvalidArgumentException' with message 'Response content must not be an array.'
> in C:\xampp1\htdocs\advanced\vendor\yiisoft\yii2\web\Response.php:1054

Stack trace:

```
#0 C:\xampp1\htdocs\advanced\vendor\yiisoft\yii2\web\Response.php(337): yii\web\Response->prepare()
#1 C:\xampp1\htdocs\advanced\vendor\yiisoft\yii2\base\Application.php(392): yii\web\Response->send()
#2 C:\xampp1\htdocs\advanced\frontend\web\index.php(17): yii\base\Application->run()
#3 {main}
```
```
SiteController.php
public function actionGetuser()
{
$model = new UsersData();
if(Yii::$app->request->isAjax){
$id = Yii::$app->request->post();
return $model->get($id);
}
}
model:-
function get($id)
{
$model = Yii::$app->db->createCommand("SELECT * FROM user where id=$id");
return $user = $model->queryOne();
}
```<issue_comment>username_1: You need to change the format of your response:
**You can modify its configuration by adding an array to your application config under components as it is shown in the following example:**
```
'response' => [
'format' => yii\web\Response::FORMAT_JSON,
'charset' => 'UTF-8',
// ...
]
```
[Check This Link For More info](http://www.yiiframework.com/doc-2.0/yii-web-response.html)
**OR**
```
function get($id) {
$result = user::find()->where(['id' => $id])->all();
return Json::encode($result);
}
```
Upvotes: 0 <issue_comment>username_2: I got the solution:
```
model:-
function get($id)
{
$userid = json_decode($id);
$uid = $userid->id;
$model = Yii::$app->db->createCommand("SELECT * FROM user where id = $uid");
$user = $model->queryOne();
//return $user;
return json_encode($user);
}
controller:-
public function actionGetuser()
{
$model = new UsersData();
//return "Dfasdafsd";
if(Yii::$app->request->isAjax){
$data = Yii::$app->request->post();
$id = json_encode($data);
return $model->get($id);
}
}
```
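As a side note, interpolating `$uid` directly into the SQL string is open to SQL injection; a safer sketch using Yii's parameter binding would be:
```
$user = Yii::$app->db->createCommand('SELECT * FROM user WHERE id = :id')
    ->bindValue(':id', $uid)
    ->queryOne();
return json_encode($user);
```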
Upvotes: 1
|
2018/03/16
| 569 | 2,204 |
<issue_start>username_0: I am getting the following error while using the following command
```
mvn clean install package
```
Error
```
No compiler is provided in this environment. Perhaps you are running
on a JRE rather than a JDK?
```
I have tried the solutions already present online but none of them worked, I have my JDK path selected in the installed JRE's on my eclipse.
I am using Ubuntu 16.04. Can someone help?
I have already tried the solution given in [No compiler is provided in this environment. Perhaps you are running on a JRE rather than a JDK?](https://stackoverflow.com/questions/19655184/no-compiler-is-provided-in-this-environment-perhaps-you-are-running-on-a-jre-ra)
but that didn't help. I already have my JDK path selected correctly.<issue_comment>username_1: Check that the JAVA\_HOME environment variable is set and points to the JDK. Refer to the Maven installation steps for more information.
<https://maven.apache.org/install.html>
Upvotes: 1 <issue_comment>username_2: Set the JDK path in /etc/environment,
e.g. JAVA\_HOME="/usr/lib/jvm/java-8-oracle",
then
```
source /etc/environment
```
also add in bashrc
```
vi ~/.bashrc
export JAVA_HOME="/usr/lib/jvm/java-1.8.0-openjdk-amd64"
source ~/.bashrc
```
Upvotes: 1 <issue_comment>username_3: I had this same exact error.
There are a lot of possible solutions posted for this problem, even more duplicates. However, after checking many of them I did not find the solution. After some tries and investigations among my environment settings I have found out that the issue was caused by having multiple java paths pointing to different versions that were set up by different IDE's. Also, it seems that Maven was trying to use the first version it found in the PATH settings which actually appeared to be a JRE version.
The solution for me was to put the path pointing to Java JDK as the first path in the `PATH` environment settings.
Also, make sure that the new environment settings are in effect before you try to use Maven again.
A very simple way to see if `mvn` actually tries to use the JDK is to execute the following command:
`mvn -version`
It will print out the Java paths along with other info.
Upvotes: 0
|
2018/03/16
| 426 | 1,863 |
<issue_start>username_0: In Dart, you can tell the VM to wait for a `Future` by calling `await`.
The thing is that you can only call `await` in an async function, which returns a ... `Future`.
So if I have a function which doesn't take a long time to run, and has to be run in a function whose type is *not* async, how do I break out of the async chain?<issue_comment>username_1: There is no such thing as breaking out of the async cycle. It's possible for sync functions to call async code, but the result of the async code won't be available yet when the sync function returns.
Upvotes: 3 [selected_answer]<issue_comment>username_2: The difference between a synchronous function and an asynchronous function is that the former is done when it returns, and the latter is still working in the background when it returns, which is why it returns a `Future` which will complete when it's *really* done.
That is the distinction - an asynchronous function is one that returns a `Future`. The `async` marker is not what makes the function asynchronous, it's just *one* way of implementing an asynchronous function. You can also have functions without the `async` marker which return a `Future`.
You can call an asynchronous function from a synchronous function. However, there is no way for the synchronous function to delay its return, so it must return before the future completes. It can set up a listener on the future, `future.then((value) { doSomethingWith(value); })`, but that listener will certainly only be called after the synchronous function has returned. That `then`-call also returns a future, so the synchronous function has to ignore some `Future`. That's fine. You are allowed to ignore a future when you don't need the result.
Whatever you do, you can't get the *result* of a `Future` in a synchronous function before it returns.
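A minimal sketch of that pattern (hypothetical `fetchValue`):
```
import 'dart:async';

Future<int> fetchValue() async => 42;

void syncCaller() {
  // Registers a listener; the Future returned by then() is deliberately ignored.
  fetchValue().then((value) {
    print('got $value'); // runs only after syncCaller has already returned
  });
  print('syncCaller is done');
}
```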
Upvotes: 2
|
2018/03/16
| 1,440 | 5,518 |
<issue_start>username_0: my code opens 2 child windows. Once I perform my operations on each I need to close the windows and switch back to the Parent window.
There is no option such as driver.SwitchToParentWindow.
There is only driver.SwitchToPreviousWindow.
Eg: I close the 2nd child window --> then driver.switchToPreviousWindow switches control back to 1st child window but when I close this window and do driver.SwitchToPreviousWindow, it searches for the recently closed 2nd child window whereas I want it to switch control to the parent window.
I have tried looking everywhere for a solution, but I can't seem to find one using Selenium VBA to switch back to Parent window.
Following is my code:
```
For a = 9 To LastRow
If Wb.Sheets(DestName).Cells(a, 3).Text = "Report Name" Then
'Checking if cell has 'Report Name'
StoreFile = Wb.Sheets(DestName).Cells(a, 4).Text
Debug.Print StoreFile
'Click on Report
Set myelement = driver.FindElementByLinkText(StoreFile) 'Click on report by name
Debug.Print myelement.Text
If myelement Is Nothing Then
GoTo endTry
ElseIf StoreFile = "CBD_Yoplait" Then
StoreFile = "CBD_Yoplait" & ".Category Buyer Dynamic"
Debug.Print StoreFile
Set myelement = driver.FindElementByLinkText(StoreFile)
myelement.Click
Else
myelement.Click
End If
'1st child window opens
driver.SwitchToNextWindow
Application.Wait (Now + TimeValue("0:0:07"))
'Click on 'Report Home'
Set myelement = driver.FindElementByXPath("//*[@id=""ribbonToolbarTabsListContainer""]/div[1]/table/tbody/tr/td[3]")
If myelement Is Nothing Then
MsgBox ("no element found")
Else
myelement.Click
End If
'Click on 'Export'
Set myelement = driver.FindElementByXPath("//*[@id=""RptHomeMenu_""]/tbody/tr/td/div/div[16]/a/div[2]")
If myelement Is Nothing Then
MsgBox ("no element found")
Else
myelement.Click
End If
'Click on 'Excel with Formatting'
Set myelement = driver.FindElementByXPath("//*[@id=""RptHomeExportMenu_WEB-INFxmllayoutsblocksHomeExportMenuLayoutxml""]/tbody/tr/td/div/div[8]/a/div[2]")
If myelement Is Nothing Then
MsgBox ("no element found")
Else
myelement.Click
End If
'Opend 2nd child window
driver.SwitchToNextWindow
Application.Wait (Now + TimeValue("0:0:05"))
'Click on 'Export filter details'
Set myelement = driver.FindElementById("exportFilterDetails")
If myelement Is Nothing Then
MsgBox ("no element found")
Else
myelement.Click
End If
'Click on Export button
Set myelement = driver.FindElementById("3131")
If myelement Is Nothing Then
MsgBox ("no element found")
Else
myelement.Click
End If
Application.Wait (Now + TimeValue("0:0:08"))
FileSpec = StoreFile & ".xls*"
Debug.Print FileSpec
FileName = Dir(MyDir & FileSpec)
Debug.Print FileName
If FileName <> "" Then
MostRecentFile = FileName
MostRecentDate = FileDateTime(MyDir & FileName)
Do While FileName <> ""
If FileDateTime(MyDir & FileName) > MostRecentDate Then
MostRecentFile = FileName
MostRecentDate = FileDateTime(MyDir & FileName)
End If
FileName = Dir
Loop
End If
MyFile = MostRecentFile
Debug.Print MyFile
ChDir MyDir
Set SrcWb = Workbooks.Open(MyDir + MyFile, UpdateLinks:=0)
'Saving as xls workbook
SrcWb.SaveAs DestFolder & MyFile, XlFileFormat.xlExcel8
Application.Wait (Now + TimeValue("0:0:04"))
Application.DisplayAlerts = True
SrcWb.Close
driver.Close
driver.SwitchToPreviousWindow
driver.Close
driver.SwitchToPreviousWindow ' want to switch back to the parent window here
Application.Wait (Now + TimeValue("0:0:08"))
endTry:
End If
Next a
```<issue_comment>username_1: After closing the *1st child window* to switch control to the *parent window* instead of `driver.SwitchToPreviousWindow` a better solution would be to use either of the methods :
* **SwitchToWindowByName()** :
```
/// <summary>
/// Switch focus to the specified window by name.
/// </summary>
/// <param name="name">The name of the window to activate</param>
/// <param name="timeout">Optional timeout in milliseconds</param>
/// <param name="raise">Optional - Raise an exception after the timeout when true</param>
/// <returns>Current web driver</returns>
public Window SwitchToWindowByName(string name, int timeout = -1, bool raise = true) {
try {
return session.windows.SwitchToWindowByName(name, timeout);
} catch (Errors.NoSuchWindowError) {
if (raise)
throw new Errors.NoSuchWindowError(name);
return null;
}
}
```
* **SwitchToWindowByTitle()** :
```
/// <summary>
/// Switch focus to the specified window by title.
/// </summary>
/// <param name="title">The title of the window to activate</param>
/// <param name="timeout">Optional timeout in milliseconds</param>
/// <param name="raise">Optional - Raise an exception after the timeout when true</param>
/// <returns>Current web driver</returns>
public Window SwitchToWindowByTitle(string title, int timeout = -1, bool raise = true) {
try {
return session.windows.SwitchToWindowByTitle(title, timeout);
} catch (Errors.NoSuchWindowError) {
if (raise)
throw new Errors.NoSuchWindowError(title);
return null;
}
}
```
Upvotes: 3 [selected_answer]<issue_comment>username_2: I took the idea from @DebanjanB's solution, it was just a simple one-line change 'driver.SwitchToWindowByTitle("WindowTitle")' and it worked.
Upvotes: 0
|
2018/03/16
| 539 | 2,063 |
<issue_start>username_0: I'm a bit new to the JavaScript side of things.
I am trying to include a conditional statement in my code so that it only works on screen sizes above 768px
```
$(window).on('scroll', function () {
if ($(window).scrollTop() > 75) {
$('#anim-nav').addClass('bg-fill').removeClass('white').addClass('black');
}
else {
$('#anim-nav').removeClass('bg-fill').removeClass('black').addClass('white');
}
});
```
and include
```
if ($(window).width() > 768)
```
|
2018/03/16
| 551 | 1,707 |
<issue_start>username_0: I have used it like this:
```
DateTime dueDate;
DateTime.TryParse(dataRead["Date required"].ToString(), out dueDate);
list.add(new list { DueDate = dueDate.ToShortDateString() });
```
I have also tried like this,
```
DateTime dueDate;
DateTime.TryParse(dataRead["Date required"].ToString("dd/MM/yyyy"), out dueDate);
list.add(new list { DueDate = dueDate.ToShortDateString() });
```
but it is not reflected. I also changed the Access database format to Short Date, but that also did not give the correct answer.<issue_comment>username_1: You can use `dueDate.Date`
[DateTime.Date Property](https://msdn.microsoft.com/en-us/library/system.datetime.date(v=vs.110).aspx)
>
> Gets the date component of this instance.
>
>
>
---
**Example**
```
DateTime date1 = new DateTime(2008, 6, 1, 7, 47, 0);
Console.WriteLine(date1.ToString());
// Get date-only portion of date, without its time.
DateTime dateOnly = date1.Date;
// Display date using short date string.
Console.WriteLine(dateOnly.ToString("d"));
// Display date using 24-hour clock.
Console.WriteLine(dateOnly.ToString("g"));
Console.WriteLine(dateOnly.ToString("MM/dd/yyyy HH:mm"));
```
**Results**
```
// The example displays output like the following output:
// 6/1/2008 7:47:00 AM
// 6/1/2008
// 6/1/2008 12:00 AM
//
```
---
[**You can play with a demo here**](https://dotnetfiddle.net/h1rAZD)
Upvotes: 3 [selected_answer]<issue_comment>username_2: **Try this**
```
DateTime dueDateTime = Convert.ToDateTime(row["Date required"].ToString());
DateTime dueDate= dueDateTime.Date;
```
First convert it into a String and then to a DateTime,
and then get the Date from the DateTime.
Upvotes: 0
|
2018/03/16
| 1,590 | 5,721 |
<issue_start>username_0: Here is my code.
```
import UIKit
var str = "Hello, playground"
//There are two sorted arrays nums1 and nums2 of size m and n respectively.
//Find the median of the two sorted arrays. The overall run time complexity should be O(log (m+n)).
//Example 1:
//nums1 = [1, 3]
//nums2 = [2]
//
//The median is 2.0
//Example 2:
//nums1 = [1, 2]
//nums2 = [3, 4]
//
//The median is (2 + 3)/2 = 2.5
var num1 = [1,2,2,5]
var num2 = [2,3,9,9]
class Solution {
func findMedianSortedArrays(_ nums1: [Int], _ nums2: [Int]) -> Double {
var A = nums1
var B = nums2
var m = nums1.count
var n = nums2.count
var max_of_left : Int = 0
var min_of_right = 0
if n < m {
var temp : [Int]
var tempt : Int
temp = nums1
tempt = m
A = nums2
B = temp
m = n
n = tempt
}
if n == 0{
fatalError("Arrays must be fulfilled")
}
var imin = 0
var imax = m
let half_len = Int((m+n+1)/2)
while imin <= imax {
let i = Int((imin + imax) / 2)
let j = half_len - i
if i > 0 && A[i-1] > B[j]{
imax = i - 1
}
else if i < m && A[i] < B[j-1]{
imin = i + 1
}
else
{
if i == 0{
max_of_left = B[j-1]
}
else if j == 0{
max_of_left = A[i-1]
}
else
{
max_of_left = max(A[i-1], B[j-1])
}
if m+n % 2 == 1{
return Double(max_of_left)
}
if i==m{
min_of_right = B[j]
}
else if j == n{
min_of_right = A[i]
}
else{
min_of_right = min(A[i], B[j])
//editor indicates error here
}
return Double((Double(max_of_left+min_of_right) / 2.0))
}
}
}
}
var a = Solution()
print(a.findMedianSortedArrays(num1, num2))
```
>
> error: day4\_Median\_of\_Two\_Sorted\_Arrays.playground:86:13: error: missing return in a function expected to return 'Double'
>
>
>
Since I put my `return` outside the `if` statement, I thought it would be okay, because the while loop stops when it reaches a return.
But the editor says it's not. I want to know why. Please explain why.<issue_comment>username_1: Every code path through your findMedianSortedArrays() must return a Double.
I want to know why. Please explain me why.<issue_comment>username_1: Every code path through your findMedianSortedArrays() must return a Double.
So you need a return of a Double placed outside of your while loop. Even if you had every code path within the while loop have a return double, if imin > imax you wouldn't even enter the while loop, and so would need a return of a double outside it.
Upvotes: 2 [selected_answer]<issue_comment>username_2: I fixed it by putting another return out of while loop.
```
//: Playground - noun: a place where people can play
import UIKit
var str = "Hello, playground"
//There are two sorted arrays nums1 and nums2 of size m and n respectively.
//Find the median of the two sorted arrays. The overall run time complexity should be O(log (m+n)).
//Example 1:
//nums1 = [1, 3]
//nums2 = [2]
//
//The median is 2.0
//Example 2:
//nums1 = [1, 2]
//nums2 = [3, 4]
//
//The median is (2 + 3)/2 = 2.5
var num1 = [1,2,2,5]
var num2 = [2,3,9,9]
class Solution {
func findMedianSortedArrays(_ nums1: [Int], _ nums2: [Int]) -> Double {
var A = nums1
var B = nums2
var m = nums1.count
var n = nums2.count
var max_of_left : Int = 0
var min_of_right = 0
if n < m {
var temp : [Int]
var tempt : Int
temp = nums1
tempt = m
A = nums2
B = temp
m = n
n = tempt
}
if n == 0{
fatalError("Arrays must be fulfilled")
}
var imin = 0
var imax = m
let half_len = Int((m+n+1)/2)
while imin <= imax {
let i = Int((imin + imax) / 2)
let j = half_len - i
if i > 0 && A[i-1] > B[j]{
imax = i - 1
}
else if i < m && A[i] < B[j-1]{
imin = i + 1
}
else
{
if i == 0{
max_of_left = B[j-1]
}
else if j == 0{
max_of_left = A[i-1]
}
else
{
max_of_left = max(A[i-1], B[j-1])
}
if (m + n) % 2 == 1 { // note: without parentheses this parses as m + (n % 2)
return Double(max_of_left)
}
if i==m{
min_of_right = B[j]
}
else if j == n{
min_of_right = A[i]
}
else{
min_of_right = min(A[i], B[j])
}
return Double((Double(max_of_left+min_of_right) / 2.0))
}
}
return Double((Double(max_of_left+min_of_right) / 2.0))
}
}
var a = Solution()
print(a.findMedianSortedArrays(num1, num2))
```
Upvotes: 0
|
2018/03/16
| 842 | 2,873 |
<issue_start>username_0: I'm new to angular. I couldn't figure how to access a variable inside a function(). this is my code
```
mergeImages() {
var imgurl;
var canvas: HTMLCanvasElement = this.canvas.nativeElement;
var context = canvas.getContext('2d');
let img1 = new Image();
let img2 = new Image();
img1.onload = function() {
canvas.width = img1.width;
canvas.height = img1.height;
context.globalAlpha = 1.0;
img2.src = '../assets/sun.jpg';
};
img2.onload = function() {
context.globalAlpha = 1;
context.drawImage(img1, 0, 0);
context.globalAlpha = 0.5; //Remove if pngs have alpha
context.drawImage(img2, 0, 0);
imgurl = canvas.toDataURL("image/jpg");
//console.log(imgurl)
};
img1.src = '../assets/moon.jpg';
}
```
now I need to access "imgurl" from another method
```
printvalue(){
need to access imgurl
}
```
edit 1 - problem is angular cannot find **var a** on **printvalue()** it's only working inside **function something()**<issue_comment>username_1: Convert your code like this
```
let imgurl;
mergeImages() {
var canvas: HTMLCanvasElement = this.canvas.nativeElement;
var context = canvas.getContext('2d');
let img1 = new Image();
let img2 = new Image();
img1.onload = () => {
canvas.width = img1.width;
canvas.height = img1.height;
context.globalAlpha = 1.0;
img2.src = '../assets/sun.jpg';
};
img2.onload = () => {
context.globalAlpha = 1;
context.drawImage(img1, 0, 0);
context.globalAlpha = 0.5; //Remove if pngs have alpha
context.drawImage(img2, 0, 0);
this.imgurl = canvas.toDataURL("image/jpg");
//console.log(imgurl)
};
img1.src = '../assets/moon.jpg'; // img1 is a local variable, so no `this.` here
}
```
Upvotes: 1 <issue_comment>username_2: Here you want a component-level property that is accessible throughout the component.
In AngularJS there was the `$scope` object for this; in the latest Angular versions you use a class property and access it through `this`.
So you need to try:
```
imgurl : string;
mergeImages() {
var canvas: HTMLCanvasElement = this.canvas.nativeElement;
var context = canvas.getContext('2d');
let img1 = new Image();
let img2 = new Image();
img1.onload = () => {
canvas.width = img1.width;
canvas.height = img1.height;
context.globalAlpha = 1.0;
img2.src = '../assets/sun.jpg';
};
img2.onload = () => {
context.globalAlpha = 1;
context.drawImage(img1, 0, 0);
context.globalAlpha = 0.5; //Remove if pngs have alpha
context.drawImage(img2, 0, 0);
this.imgurl = canvas.toDataURL("image/jpg");
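// stored on the component, so other methods (e.g. printvalue) can read this.imgurl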
//console.log(imgurl)
};
img1.src = '../assets/moon.jpg'; // img1 is a local variable, so no `this.` here
}
```
Upvotes: 3 [selected_answer]
|
2018/03/16
| 1,595 | 5,006 |
<issue_start>username_0: Q: Why is this code not working properly?
I have implemented file handling in C++.
I have created a complete file of car records. Through this code, we can store new data, delete old data, and search for particular data about a car. The problem is that this code compiles successfully, but at runtime its execution stops and displays an error message. Kindly help me remove the runtime error from this code.
CODE:
```
#include <iostream>
#include <fstream>
#include <string.h>
#include <stdlib.h>
using namespace std;
struct car
{
char name[20];
int model;
string color;
int car_id;
string size;
float weight;
int price;
};
void getdata(car&); //function declaration
void showdata(car&); //function declaration
void searchdata(); //function declaration
void deleterecord(); //function declaration
void modify(); //function declaration
void readdata(); //function declaration
void writedata(); //function declaration
int main()
{
char ch;
cout<<"\nEnter w to write record ";
cout<<"\nEnter r to read record";
cout<<"\nEnter m to modify record";
cout<<"\nEnter s to search record";
cout<<"\nEnter d to delete record";
cout<<"\n\nEnter your choice :";
cin>>ch;
switch(ch)
{
case 'w':
{
writedata();
break;
}
case 'r':
{
readdata();
break;
}
case 's':
{
searchdata();
break;
}
case 'd':
{
deleterecord();
break;
}
case 'm':
{
modify();
break;
}
default:
{
cout<<"\nWrong choice";
}
}
return 0;
}
void getdata(car &ccc)
{
cout<<"Please enter name of car :";
cin>>ccc.name;
cout<<"Please enter model number of car:";
cin>>ccc.model;
cout<<"Enter color of car:";
cin>>ccc.color;
cout<<"Enter id number of car:";
cin>>ccc.car_id;
cout<<"Enter size of car :";
cin>>ccc.size;
cout<<"Enter weight of a car :";
cin>>ccc.weight;
cout<<"Enter price of a car :";
cin>>ccc.price;
}
void showdata(car &ccc)
{
cout<<"\nName of car is :";
puts(ccc.name);
cout<<"\nModel number of car is :"<>ch;
}
file.close();
}
void readdata()
{
int count=0;
ifstream file;
car ccc;
file.open("carinformation.dat",ios::binary | ios::in);
if(!file)
{
cout<<"File not found";
exit(0);
}
else
{
file.read((char*)&ccc,sizeof(ccc));
while(!file.eof())
{
showdata(ccc);
file.read((char*)&ccc,sizeof(ccc));
count++;
}
cout<<"Number of records are :"<>n_c;
file.open("carinformation.dat",ios::binary | ios::in);
if(!file)
{
cout<<"File nnot found";
exit(0);
}
else
{
file.read((char*)&ccc, sizeof(ccc));
while(!file.eof())
{
if(strcmp(n_c,ccc.name)==0)
{
showdata(ccc);
}
file.read((char*)&ccc, sizeof(ccc));
}
}
file.close();
}
void modify()
{
car ccc;
fstream file;
char n_c[20];
file.open("carinformation.dat",ios::binary | ios::in | ios::out);
cout<<"\nEnter name of car that should be searched:";
cin>>n_c;
if(!file)
{
cout<<"File not found";
exit(0);
}
else
{
file.read((char*)&ccc,sizeof(ccc));
while(!file.eof())
{
if(strcmp(n_c,ccc.name)==0)
{
file.seekg(0, ios::cur);
cout<<"Enter new record :\n";
getdata(ccc);
int i=file.tellg();
int j=sizeof(ccc);
int k=i-j;
file.seekp(k);
file.write((char*)&ccc, sizeof(ccc));
}
}
}
file.read((char*)&ccc, sizeof(ccc));
file.close();
}
void deleterecord()
{
int count=0;
car ccc;
int c_id;
cout<<"Please enter car id :";
cin>>c_id;
ifstream file;
file.open("carinformation.dat" ,ios::binary| ios::in);
ofstream file2;
file2.open("New carinformation.dat", ios::binary| ios::out);
while(file.read((char*)&ccc,sizeof(ccc)))
{
if(ccc.car_id!=c_id)
{
file2.write((char*)&ccc ,sizeof(ccc));
count++;
}
}
cout<<"Number of records are :"<
```<issue_comment>username_1: Class `car` has `std::string` fields which hold pointers to heap-allocated memory. So saving it to file by writing raw bytes `file.write((char*)&ccc,sizeof(ccc));` is not going to work. And, what is more important, reading them later `file.read((char*)&ccc, sizeof(ccc));` will fill string objects with invalid pointer values. You need to store fields one by one and read them one by one carefully validating input data.
Upvotes: 3 [selected_answer]<issue_comment>username_2: There is a difference between "string" and "string.h". Delete two characters, ".h" and see if your code works.
```
<string.h> contains functions like strcpy, strlen for C style null-terminated strings.
<string> contains std::string, std::wstring plus other classes.
```
You need to initialize your struct. Here's an example for your readdata(); look at the changes and figure out how to fix the rest of the program:
```
void readdata()
{
int count=0;
ifstream file;
car *ccc = new car();
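// value-initialized on the heap: scalar members start at zero; note ccc is never deleted here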
file.open("carinformation.dat",ios::binary | ios::in);
if(!file)
{
cout<<"File not found";
exit(0);
}
else
{
file.read((char*)ccc,sizeof(*ccc));
while(!file.eof())
{
showdata(*ccc);
file.read((char*)ccc,sizeof(*ccc));
count++;
}
cout<<"Number of records are :"<
```
Upvotes: 0
|
2018/03/16
| 637 | 2,821 |
<issue_start>username_0: I was trying to check the file first before I upload it using 'fileuploader',
now on my beforeSend function:
```
beforeSend: function(item, listEl, parentEl, newInputEl, inputEl) {
var file = item.file;
let readfile = function(file){
return new Promise(function(resolve, reject){
console.log(file);
var reader = new FileReader();
var duration = 0;
reader.onload = function() {
var aud = new Audio(reader.result);
aud.onloadedmetadata = function(){
resolve(aud.duration);
};
};
reader.readAsDataURL(file);
});
}
return readfile(file).then(function(res){
if(res>60){
console.log('more than 60');
return false;
}
else{
console.log('uploaded');
return true;
}
});
},
```
My readfile function actually waits for the promises to finish, but the beforeSend function is not pausing before the readfile function is done.
How do I do this please?
I know that it's not returning false, since an HTTP request is still made even if I return false from my promises.
|
2018/03/16
| 507 | 2,021 |
<issue_start>username_0: I am working with the help of the following project:
<https://github.com/spring-cloud/spring-cloud-stream-samples/tree/master/kinesis-samples/kinesis-produce-consume>
The following is the application.yml code snippet :
```
spring:
cloud:
stream:
bindings:
ordersOut:
destination: test_stream
content-type: application/json
producer:
partitionKeyExpression: "1"
ordersIn:
destination: test_stream
content-type: application/json
```
If I want to include the `autoAddShards` and `minShardCount` properties, where do I add them?
|
2018/03/16
| 514 | 2,025 |
<issue_start>username_0: I have a canvas containing art on a transparent background. I desaturate it like this:
```
boardCtx.fillStyle = "rgba(0, 0, 0, 1.0)";
boardCtx.globalCompositeOperation = 'saturation';
boardCtx.fillRect(0, 0, boardCanvas.width, boardCanvas.height);
```
and find that the transparent background has turned opaque black. I wouldn't expect the saturation blend mode to change the alpha channel... am I doing something wrong? My current solution is to copy the canvas before desaturation and use it to mask the black background away from the desaturated copy, but that involves another canvas and a big draw... not ideal.
|
2018/03/16
| 687 | 2,477 |
<issue_start>username_0: In the given images, there are two tables. In the table STUD_MEMBER, Dept_ID is a foreign key referring to Dept_ID in the table DEPARTMENT:
[](https://i.stack.imgur.com/xcs2A.png)
So when I add foreign key constraints in mysql at phpmyadmin like
```
CREATE TABLE DEPARTMENT(
Dept_ID INT,
Dept_Name VARCHAR(25));
INSERT INTO DEPARTMENT VALUES(1,"Information Technology");
INSERT INTO DEPARTMENT VALUES(2,"Electrical");
INSERT INTO DEPARTMENT VALUES(3,"Civil");
INSERT INTO DEPARTMENT VALUES(4,"Mechanical");
INSERT INTO DEPARTMENT VALUES(5,"Chemical");
CREATE TABLE STUD_MEMBER(
Roll_No INT NOT NULL PRIMARY KEY,
FName VARCHAR(20),
MName VARCHAR(20),
SName VARCHAR(20),
Dept_ID INT,
FOREIGN KEY (Dept_ID) REFERENCES DEPARTMENT(Dept_ID),
Semester INT,
Contact_No INT,
Gender VARCHAR(6));
```
It is showing the error 1215 - cannot add foreign key constraint. As far as I know, this is the correct way to add a foreign key, so I am confused about why it is wrong. Please help in solving this.
|
2018/03/16
| 1,384 | 4,545 |
<issue_start>username_0: ```
select table_name,
to_number(extractvalue(xmltype(dbms_xmlgen.getxml('select
count(*) c from '||owner||'.'||table_name)),'/ROWSET/ROW/C')) as count
from all_tables
```
I would like to get those with one record in the table.<issue_comment>username_1: If your query is giving the data you want, but you just want to see the results with `count` equal to 1, you can wrap the query like this:
```
select * from (
select table_name,
to_number(extractvalue(xmltype(dbms_xmlgen.getxml('select
count(*) c from '||owner||'.'||table_name)),'/ROWSET/ROW/C')) as count
from all_tables
) where count = 1
```
Upvotes: 1 <issue_comment>username_2: ```
SELECT * FROM
(
SELECT table_name,
TO_NUMBER(EXTRACTVALUE(XMLTYPE(dbms_xmlgen.getxml('select count(*) c from '||OWNER||'.'||table_name)),'/ROWSET/ROW/C')) AS cnt
FROM all_tables
)
WHERE cnt = 1;
```
or
```
SELECT * FROM
(
SELECT table_name,
TO_NUMBER(EXTRACTVALUE(XMLTYPE(dbms_xmlgen.getxml('select count(*) c from '||OWNER||'.'||table_name)),'/ROWSET/ROW/C')) AS cnt
FROM all_tables
) slct
GROUP BY slct.table_name
HAVING slct.cnt = 1;
```
Upvotes: 0 <issue_comment>username_3: The query crashes only when the filter predicate is applied. It's probably a bug in some query rewrite optimization. If you wrap the query in a block with the `materialize` hint, it seems to bypass this behavior.
```
with workaround as(
select /*+ materialize */
owner
,table_name
,to_number(extractvalue(xmltype(dbms_xmlgen.getxml('select count(*) c from ' || owner || '.' || table_name || ' where rownum <= 2')),'/ROWSET/ROW/C')) as row_count
from all_tables
where owner = ''
)
select owner, table_name, row_count
from workaround
where row_count = 1;
```
I also found some potential to improve the performance of this query. If you only want tables with exactly one record, there is really no need to count every single record in the table. If you add the predicate `rownum <= 2` Oracle will stop scanning as soon as it has found two records. So the count will be either:
* 0, meaning empty table
* 1, meaning exactly one record
* 2, meaning more than 1 record
Edit to show how the optimization works:
```
-- Creating tables
create table t0(c number);
create table t1(c number);
create table t2(c number);
create table t3(c number);
insert into t1 values(1);
insert into t2 values(1);
insert into t2 values(2);
insert into t3 values(1);
insert into t3 values(2);
insert into t3 values(3);
commit;
```
SQL:
```
/*
|| Without rownum you can filter on any rowcount you want
*/
select *
from (select 'T0' as t, count(*) as row_count from t0 union all
select 'T1' as t, count(*) as row_count from t1 union all
select 'T2' as t, count(*) as row_count from t2 union all
select 'T3' as t, count(*) as row_count from t3
)
where row_count = 1 -- Return tables having exactly 1 record.
;
/*
|| With rownum <= 1 Oracle will stop counting after it found one row.
|| So the rowcount will be either 0 or 1.
|| row_count = 0 means that the table is empty
|| row_count = 1 means that the table is NOT empty.
||
|| The Rownum predicate prevents us from knowing if there are 2,3,4 or 5 million records.
*/
select *
from (select 'T0' as t, count(*) as row_count from t0 where rownum <= 1 union all
select 'T1' as t, count(*) as row_count from t1 where rownum <= 1 union all
select 'T2' as t, count(*) as row_count from t2 where rownum <= 1 union all
select 'T3' as t, count(*) as row_count from t3 where rownum <= 1
)
where row_count = 1 -- Return tables having at least one record
;
/*
|| With rownum <= 2 Oracle will stop counting after it found two rows.
|| So the rowcount will be either 0, 1 or 2.
|| row_count = 0 means that the table is empty
|| row_count = 1 means that the table has exactly 1 record
|| row_count = 2 means that the table has more than 1 record
||
|| The Rownum predicate prevents us from knowing if there are exactly two records, or 3,4,5 etcetera
*/
select *
from (select 'T0' as t, count(*) as row_count from t0 where rownum <= 2 union all
select 'T1' as t, count(*) as row_count from t1 where rownum <= 2 union all
select 'T2' as t, count(*) as row_count from t2 where rownum <= 2 union all
select 'T3' as t, count(*) as row_count from t3 where rownum <= 2
)
where row_count = 1 -- Return tables having exactly one record
;
```
Upvotes: 2 [selected_answer]
|
2018/03/16
| 846 | 2,139 |
<issue_start>username_0: I can sum the first 310 rows in a 5 column pandas dataframe and get a tidy summary by using:
```
df[0:310].sum()
```
Is there an easy way whereby I can sum the first 310 rows in a *certain column* of my choosing? I just can't figure out how to combine a column selection and row slice selection in the expression. It would be ideal to specify the column by column name, but column index is fine too.
In an attempt to sum the 1st 310 rows of the 5th column, I tried
```
df.iloc[0:310, 4].sum
```
but just got a printout of 310 rows from that column. Thank you.<issue_comment>username_1: You need something like this:
```
import pandas as pd
data = {'x':[1,2,3,4,5], 'y':[2,5,7,9,11], 'z':[2,6,7,3,4]}
df=pd.DataFrame(data)
```
Use list of columns along with rows:
```
df.loc[0:310][['x','z']].sum()
```
output:
```
x 15
z 22
dtype: int64
```
Upvotes: 2 <issue_comment>username_2: I think you need [`DataFrame.iloc`](http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.iloc.html) to select rows by position, together with [`get_indexer`](http://pandas.pydata.org/pandas-docs/stable/generated/pandas.Index.get_indexer.html) to get the positions of columns by name:
```
#data borrowed from the other answer, but changed index values
data = {'x':[1,2,3,4,5],
'y':[2,5,7,9,11],
'z':[2,6,7,3,4]}
df=pd.DataFrame(data, index=list('abcde'))
print (df)
x y z
a 1 2 2
b 2 5 6
c 3 7 7
d 4 9 3
e 5 11 4
a = df.iloc[:3, df.columns.get_indexer(['x','z'])].sum()
```
Which is the same as:
```
a = df.iloc[:3, [0,2]].sum()
print (a)
x 6
z 15
dtype: int64
```
**Detail**:
```
print (df.iloc[:3, df.columns.get_indexer(['x','z'])])
x z
a 1 2
b 2 6
c 3 7
```
If you want only one column, use [`get_loc`](http://pandas.pydata.org/pandas-docs/stable/generated/pandas.Index.get_loc.html) for its position:
```
b = df.iloc[:3, df.columns.get_loc('x')].sum()
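# get_loc returns the single integer position of the column named 'x'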
```
Which is the same as:
```
b = df.iloc[:3, 0].sum()
print (b)
6
```
**Detail**:
```
print (df.iloc[:3, df.columns.get_loc('x')])
a 1
b 2
c 3
Name: x, dtype: int64
```
Upvotes: 2
|
2018/03/16
| 724 | 1,812 |
<issue_start>username_0: I'm working on a Monte-Carlo simulation type problem and need to generate a vector of repeated random numbers, with the matching numbers grouped together, but in random order.
It's easier to explain with an example. If I had:
1, 3, 7, 12, 1, 3, 7, 12, 1, 3, 7, 12
I would like it sorted as:
7, 7, 7, 3, 3, 3, 12, 12, 12, 1, 1, 1 (or with the groups of matching numbers in any order but ascending/descending).
The reason I need the random order is because my MC simulation is for 2 variables, so if both are in order they won't vary independently.
I've got as far as:
```
sort(rep(runif(50,1,10),10), decreasing = FALSE)
```
Which generates 50 random numbers between 1 and 10, repeats each 10 times, then sorts the 50 groups of 10 matching random numbers in ascending order (or it could easily be descending order if I changed "FALSE" to "TRUE"). I just can't figure out the last step of getting 50 groups of 10 matching numbers in random order. Can anyone help?<issue_comment>username_1: Here is one option with `split`
```
unlist(sample(split(v1, v1)), use.names = FALSE)
#[1] 3 3 3 1 1 1 12 12 12 7 7 7
```
---
Or another option is `match` with `unique`
```
v1[order(match(v1, sample(unique(v1))))]
```
### data
```
v1 <- c(1, 3, 7, 12, 1, 3, 7, 12, 1, 3, 7, 12)
```
Upvotes: 3 [selected_answer]<issue_comment>username_2: An option could be as:
```
v <- c(1, 3, 7, 12, 1, 3, 7, 12, 1, 3, 7, 12)
lst <- split(v, unique(v))
sapply(sample(seq(length(lst)),length(lst)), function(i)lst[[i]])
# [,1] [,2] [,3] [,4]
#[1,] 3 12 7 1
#[2,] 3 12 7 1
#[3,] 3 12 7 1
#OR for having just a vector
as.vector(sapply(sample(seq(length(lst)),length(lst)), function(i)lst[[i]]))
#[1] 3 3 3 12 12 12 7 7 7 1 1 1
```
Upvotes: 2
|
2018/03/16
| 760 | 2,357 |
<issue_start>username_0: I am writing a socket server using java.nio. As I need my server to avoid using extra threads, I am using `java.nio.channels.Selector`. My code looks as follows.
```
while (iterator.hasNext()) {
SelectionKey key = (SelectionKey) iterator.next();
iterator.remove();
if (!key.isValid()) {
continue;
}
if (key.isAcceptable()) { // Accept client connections
this.acceptClient(key);
} else if (key.isReadable()) { // Read from client
this.read(key);
} else if (key.isWritable()) {
this.write(key);
}
}
private void acceptClient(SelectionKey key) throws IOException {
ServerSocketChannel serverChannel = (ServerSocketChannel) key.channel();
SocketChannel channel = serverChannel.accept();
channel.configureBlocking(false);
SocketAddress clientAddress= channel.getRemoteAddress();
//clients is a Hashmap
clients.put(clientAddress, new Client());
clientConnected(clientAddress.toString());
System.out.println("Connected to: " + clientAddress);
channel.register(this.selector, SelectionKey.OP_READ);
}
```
As you can see, I am creating a new Client object per accepted client. What I need is for each Client object to handle its own reading and writing.
My approach is to uniquely identify clients by their address and forward events to the relevant Client object.
I think using the client address to uniquely identify clients is not a good approach. What is the best way to handle this issue?
|
2018/03/16
| 508 | 1,678 |
<issue_start>username_0: Unable to parse date 02-Mar-00.
Format is -> dd-MMM-yyyy
```
DateFormat df = new SimpleDateFormat(dateFormat);
df.setLenient(false);
date = df.parse(dateString);
```
The error message -
```
Unparseable date: "02-Mar-00"
java.text.ParseException: Unparseable date: "02-Mar-00"
at java.base/java.text.DateFormat.parse(DateFormat.java:388)
at MyClass.main(MyClass.java:9)
```<issue_comment>username_1: As mentioned in the comments above, this won't work until you match your request to the format that you want to parse to!
```
String dateFormat = "dd-MMM-yy";
String dateString = "02-Mar-00";
DateFormat df = new SimpleDateFormat(dateFormat);
df.setLenient(false);
try {
Date date = df.parse(dateString);
System.out.println(date.toString());
} catch (ParseException e) {
e.printStackTrace();
}
```
This parses the input string to the following output!
```
Input: 02-Mar-00
Output: Thu Mar 02 00:00:00 IST 2000
```
Hope this answers your question well!
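If you can use Java 8 or newer, the same parse can also be done with the `java.time` API. A minimal sketch (assuming English month names and the default 2000-based two-digit year):
```
import java.time.LocalDate;
import java.time.format.DateTimeFormatter;
import java.util.Locale;

public class ParseDemo {
    public static void main(String[] args) {
        // the pattern must match the input: two-digit day, abbreviated month, two-digit year
        DateTimeFormatter fmt = DateTimeFormatter.ofPattern("dd-MMM-yy", Locale.ENGLISH);
        LocalDate date = LocalDate.parse("02-Mar-00", fmt);
        System.out.println(date); // prints 2000-03-02
    }
}
```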
Upvotes: 3 [selected_answer]<issue_comment>username_2: First you have to parse the date as @username_1 posted,
and then format it again in the desired format. I have made some modifications to it; just check it out:
```
String dateFormat = "dd-MMM-yy";
String dateString = "02-Mar-00";
String newFormat = "dd-MMM-yyyy";
DateFormat df = new SimpleDateFormat(dateFormat);
DateFormat displayFormat = new SimpleDateFormat(newFormat); // use the desired output format here
df.setLenient(false);
try {
Date date = df.parse(dateString);
System.out.println(displayFormat.format(date));
} catch (ParseException e) {
e.printStackTrace();
}
```
Upvotes: 1
|
2018/03/16
| 708 | 2,132 |
<issue_start>username_0: I have implemented the following code to copy a specific file from zip to a certain target directory.
But this copies the entire directory structure into the target directory.
The code is:
```
import os
import zipfile
zip_filepath='/home/sundeep/Desktop/SCHEMA AUTOMATION/SOURCE/DSP8010_2017.1.zip'
target_dir='/home/sundeep/Desktop/SCHEMA AUTOMATION/SCHEMA'
with zipfile.ZipFile(zip_filepath) as zf:
    dirname = target_dir
    zf.extract('DSP8010_2017.1/json-schema/AccountService.json',path=dirname)
```
My question is how can I copy only AccountService.json file to target directory but not the entire structure. Any possibility by implementing shutil?<issue_comment>username_1: You can add file name into existing directory like this:-
```
a = 'DSP8010_2017.1/json-schema/AccountService.json'
dirname = target_dir+"/"+(a.split('/')[-1])
```
As you said having issue you can try like this:-
```
import zipfile
zip_filepath='/home/sundeep/Desktop/SCHEMA AUTOMATION/SOURCE/DSP8010_2017.1.zip'
target_dir='/home/sundeep/Desktop/SCHEMA AUTOMATION/SCHEMA'
fantasy_zip = zipfile.ZipFile(zip_filepath)
file = fantasy_zip.extract('AccountService.json', zip_filepath)
target_dir = target_dir+"/"+file
fantasy_zip.close()
```
Upvotes: -1 <issue_comment>username_2: Try this:-
```
import zipfile
zip_filepath='/home/sundeep/Desktop/SCHEMA AUTOMATION/SOURCE/DSP8010_2017.1.zip'
target_dir='/home/sundeep/Desktop/SCHEMA AUTOMATION/SCHEMA'
with zipfile.ZipFile(zip_filepath) as zf:
    for file in zf.namelist():
        if file.endswith("AccountService.json"):
            zf.extract(file,target_dir)
```
Upvotes: 2 <issue_comment>username_3: ```
import os
import shutil
import zipfile
zip_filepath='/home/sundeep/Desktop/SCHEMA AUTOMATION/SOURCE/DSP8010_2017.1.zip'
target_dir='/home/sundeep/Desktop/SCHEMA AUTOMATION/SCHEMA'
with zipfile.ZipFile(zip_filepath) as z:
    with z.open('DSP8010_2017.1/json-schema/AccountService.json') as zf, open(os.path.join(target_dir, os.path.basename('AccountService.json')), 'wb') as f:
        shutil.copyfileobj(zf, f)
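        # copyfileobj streams the member in chunks instead of loading it fully into memory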
```
Upvotes: 4 [selected_answer]
|
2018/03/16
| 811 | 2,827 |
<issue_start>username_0: I want to set the height and width of a FrameLayout. If I set the values in Java code using setLayoutParams it shows a very small box, but in XML it shows properly.
This is my java code
```
FrameLayout frameLayout2= (FrameLayout) light.findViewById(R.id.frameLayout2);
ConstraintLayout.LayoutParams lp = new ConstraintLayout.LayoutParams(300, 300);
frameLayout2.setLayoutParams(lp);
```
*this is my xml:*
Note:Constraint layout is my parent
```
```
Values given in XML, so it shows properly:
[Result using XML](https://i.stack.imgur.com/DQNj5.png)
Values given in Java code, so it shows a very small box:
[Result using Java](https://i.stack.imgur.com/fQiTR.png)
How can I change the FrameLayout width and height programmatically?
Please help me.<issue_comment>username_1: Try this
```
FrameLayout frameLayout2= (FrameLayout) light.findViewById(R.id.frameLayout2);
ConstraintLayout.LayoutParams oldParams = (ConstraintLayout.LayoutParams) frameLayout2.getLayoutParams();
oldParams.width=400;
oldParams.height=500;
frameLayout2.setLayoutParams(oldParams);
```
OR this
```
FrameLayout frameLayout2= (FrameLayout) light.findViewById(R.id.frameLayout2);
ConstraintLayout.LayoutParams oldParams= new ConstraintLayout.LayoutParams(300,300);
frameLayout2.setLayoutParams(oldParams);
```
Upvotes: 1 <issue_comment>username_2: Use this
```
FrameLayout frameLayout2= (FrameLayout) light.findViewById(R.id.frameLayout2);
LinearLayout.LayoutParams lp = new LinearLayout.LayoutParams(300, 300);
frameLayout2.setLayoutParams(lp);
```
Upvotes: 0 <issue_comment>username_3: Happened with me as well.
After spending so much time, I got to know that when you set height and width programmatically, the values are taken in `px` (pixels), not the `dp` we set in `XML`.
So you need to convert the dimension from dp to px first, using the display density factor like this.
```
val scale: Float = resources.displayMetrics.density
val resizedInPx = (widthInDp * scale + 0.5f).toInt() // dp to px
```
Where `widthInDp` is the desired dp you want to set like 300
**Usage**
```
val params: ViewGroup.LayoutParams = yourlayout.layoutParams
params.width = (widthInDp * scale + 0.5f).toInt() // dp to px
params.height = (heightInDp * scale + 0.5f).toInt() // dp to px
layout!!.setLayoutParams(params)
```
Upvotes: 1 <issue_comment>username_4: (KOTLIN) If you need to update the width and height only, you can use the below method:
```
private fun resizeView(view: View) {
val size = resources.getDimension(R.dimen.view_size).toInt()
view.layoutParams.width = size
view.layoutParams.height = size
//or (but be careful because this will change all the constraints you added in the xml file)
val newLayoutParams = ConstraintLayout.LayoutParams(size, size)
view.layoutParams = newLayoutParams
}
```
Upvotes: 0
|
2018/03/16
| 594 | 2,073 |
<issue_start>username_0: In order not to violate the Google AdMob ad policy, I add my physical phone as a test device using Code A.
I get the device string "Samsung SM-J5008" of the physical phone from the following UI when I select a device to run the app.
I don't know whether the string "Samsung SM-J5008" is correct; moreover, the Google ad is still displayed even though I have added the code `.addTestDevice("Samsung SM-J5008")`
**And More**
I don't need to remove addTestDevice() when I publish my app: all physical phones will display ads except those whose name strings are listed in addTestDevice(). Is that right?
[](https://i.stack.imgur.com/0BQ2w.png)
**Code A**
```
val adRequest = AdRequest.Builder()
.addTestDevice(AdRequest.DEVICE_ID_EMULATOR)
.addTestDevice("Samsung GT-I9268")
.addTestDevice("Samsung SM-J5008")
.build()
adView.loadAd(adRequest)
```<issue_comment>username_1: **If you are running AdMob ads on an emulator then there is no ID; just use the AdManager method and set it to TEST_EMULATOR like the logcat says. If you run on an actual device with USB debugging and watch the logcat, the ID will appear in there. You need to add the device ID as:**
```
String android_id = Settings.Secure.getString(this.getContentResolver(), Settings.Secure.ANDROID_ID);
String deviceId = md5(android_id).toUpperCase();
Log.i("device id=",deviceId);
```
Upvotes: 0 <issue_comment>username_2: When you run the project, if you monitor the LogCat, you can find a similar line **Use AdRequest.Builder.addTestDevice(“C04B1BFFB0774708339BC273F8A43708”) to get test ads on this device**. Copy the device id and add it to AdRequest.
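For example, a minimal sketch in Java (the hex string below is the sample ID from the LogCat line above; your real ID will differ):
```
AdRequest adRequest = new AdRequest.Builder()
        .addTestDevice(AdRequest.DEVICE_ID_EMULATOR)
        .addTestDevice("C04B1BFFB0774708339BC273F8A43708") // hashed device ID from LogCat, not the model name
        .build();
adView.loadAd(adRequest);
```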
**Note** that this ID varies from device to device. By doing this, the test ads will be loaded instead of live ads.
In **production** you need to make sure that you removed `addTestDevice()` methods in order to render the live ads and start monetization.
Upvotes: 2 [selected_answer]
|
2018/03/16
| 526 | 1,557 |
<issue_start>username_0: This is my function. I want to change my date format
to **m/d/y**:
```
foreach ($totals as $item => $val) {
$result['totals_labels'][] = $item;
$result['totals_values'][] = $val;
$result['details_labels'][] = $item;
$totals['date'] = date("m/d/Y",strtotime($totals['date'])).'
';
print_r($totals['date']);
}
$result['details'] = $data;
print_r($result);
exit();
```
**print_r($totals['date']); Response**
07/19/5801
12/31/1969
12/31/1969
12/31/1969
12/31/1969
12/31/1969
12/31/1969
Array
**print_r($result['details']) Response**
```
[total] => 500
[good] => 401
[bad] => 99
[duration] => 4.67320
[percentGood] => 80.2
[date] => 1521086400
[time] => 1521086400
[peak] => 401
```
but the date format is not changing in the **$result['details']** response.<issue_comment>username_1: Use [**strtotime**](http://php.net/manual/en/function.strtotime.php).
It parses almost any English textual datetime description into a Unix timestamp.
Change From
```
date("m/d/Y",$totals['date']);
```
To
```
date("m/d/Y",strtotime($totals['date']));
```
Upvotes: 2 <issue_comment>username_2: ```
<?php
$date=date_create("2018-03-15");
echo date_format($date,"m/d/y H:i:s");
?>
```
date_format() method returns a date formatted according to the specified format.
Upvotes: 0
|
2018/03/16
| 217 | 709 |
<issue_start>username_0: I'm using AudioUnit to play and record. When I set `kAudioUnitSubType_VoiceProcessingIO`, the sound is quieter than with RemoteIO. Why, and how can I change this?
|
2018/03/16
| 625 | 1,750 |
<issue_start>username_0: I am new to C. I want to read strings into a 2-D array, but it doesn't print the strings. What's wrong in this code, and how can I fix this problem? Thanks in advance.
```
#include <stdio.h>
#include <string.h>
int main()
{
char col[100][100];
int i,j;
for(i=0; i<2; i++)
{
for(j=0; j<2; j++)
{
scanf("%s",
&col[i][j]);
}
}
for(i=0; i<2; i++)
{
for(j=0; j<2; j++)
{
printf("%s\t",col[i][j]);
}
printf("\n");
}
return 0;
}
```<issue_comment>username_1: You're printing a char with `%s`. Change instead to `%c`.
```
printf("%c\t", col[i][j]);
```
With `%s` `printf` will print all characters until a `\0` is found and in your case there's none which will lead to unexpected behavior.
Upvotes: 1 <issue_comment>username_2: In a 2D character array, each element `col[i][j]` is a single character. But you're reading and printing strings with your `scanf` and `printf` statements. You need to change the following:
```
scanf("%c",&col[i][j]);
```
and
```
printf("%c\t",col[i][j]);
```
Upvotes: 1 <issue_comment>username_3: Use `%c` instead of `%s`, because `%s` is the format specifier for strings. From your code it looks like you want strings as input, but you are reading them character by character. I suggest using pointers to take strings as input into an array.
**Taking String as a input in 2D char Array**
```
#include <stdio.h>
#include <string.h>
int main() {
char *s[100];
char s1[100];
int i;
for(i=0;i<5;i++){
scanf("%s", s1);
s[i]=strdup(s1);
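/* strdup allocates a copy of s1 on the heap; free(s[i]) when done */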
}
for(i=0;i<5;i++){
printf("%s\n", s[i]);
}
return 0;
}
```
What I am doing is taking the input from the user into s1 and putting a copy of it into the array of string pointers. With this method, each string goes into its own row.
Hopefully that helps.
Upvotes: 1 [selected_answer]
|
2018/03/16
| 662 | 2,095 |
<issue_start>username_0: I am trying to pull end-of-day share prices going back in time. The code below provides exactly what I need, but it appears that the year/month/day parameters do not work.
```
import requests
params={'q': 'NASDAQ:AAPL', 'expd': 10, 'expm': 3, 'expy': 2018, 'output': 'json'}
response = requests.get('https://finance.google.com/finance', params=params, allow_redirects=False, timeout=10.0)
print(response.content)
```
The closing price for this is `"l" : "178.65"`, which is the most recent closing price (15 March) and not for 10 March as specified. I am assuming I cannot rely on this service as it is no longer supported by Google, but it would be good if someone could confirm whether I am correct about the dates not working, or whether I am missing something.
|
2018/03/16
| 373 | 1,387 |
<issue_start>username_0: Does anyone know how to write code to delete duplicate product data in Hybris, either using an ImpEx script or by modifying the code?<issue_comment>username_1: >
> Question: **How to delete Hybris Products having a duplicate name?**
>
>
>
You can run a **SQL / flexible search query** to find duplicate products, and then delete the resulting list of PKs using a SQL query.
List of pks to be removed
-------------------------
```
select MIN({p.pk}) as pks
from {Product! as p
JOIN CatalogVersion as CV on {p.catalogversion}={CV:PK} and {CV:version} = 'Online'
JOIN Catalog as C on {CV:catalog}={C:PK} and {C:id}='myProductCatalog'
}
group by {p:name}
having
count(*) > 1
```
---
Run the remove query
--------------------
Remove all PKs returned by the above query. Repeat this for the `Online` version as well.
---
You can find [detail steps here](https://hybrisdeveloper.blogspot.in/2018/03/delete-hybris-products-having-duplicate.html)
--------------------------------------------------------------------------------------------------------------------------
Upvotes: 2 <issue_comment>username_2: First of all, find the duplicated CMSItems using the following flexible search
```
SELECT {UID} as uid,{CatalogVersion} as CatalogVersion , count(1) as cnt from {CMSItem} GROUP BY {UID},{CatalogVersion} HAVING COUNT(1) > 1
```
then delete the duplicated ones.
Upvotes: 0
|
2018/03/16
| 1,075 | 4,377 |
<issue_start>username_0: I am trying to read values from BLE device. Steps I followed:
1. I am able to discover the BLE device and connect to it.
2. I am able to find the required characteristic by parsing through the
services and get the GattCharacteristic.
3. I am able to write a value to the BLE device characteristic and that's
verified.
4. The properties for that characteristic that I am trying to read the
value from are: READ, WRITE and INDICATE/NOTIFY.
The functions I am using for read and write are as follows:
a) Read function:
```
public void readCharacteristic(BluetoothGattCharacteristic characteristic)
{
if (mBluetoothAdapter == null || mBluetoothGatt == null) {
Log.w(TAG, "BluetoothAdapter not initialized");
return;
}
boolean status;
Log.w(TAG, "SCBABLe: Writing BLuetooth");
status=mBluetoothGatt.readCharacteristic(characteristic);
if(status) {
Log.w(TAG, "Read Successfully");
}
else
{
Log.w(TAG, "Read Unsuccessfully");
}
}
```
b) Write function
```
public void writeCharacteristic(BluetoothGattCharacteristic characteristic)
{
if (mBluetoothAdapter == null || mBluetoothGatt == null)
{
Log.w(TAG, "BluetoothAdapter not initialized");
return;
}
boolean status;
Log.w(TAG, "SCBABLe: Writing BLuetooth");
status=mBluetoothGatt.writeCharacteristic(characteristic);
if(status) {
Log.w(TAG, "Write Successfully");
}
else
{
Log.w(TAG, "Write Unuccessfully");
}
}
```
I am calling the above two from a different activity, after I get the required characteristic(by matching the UUIDs).
```
//Current Activity Name: DeviceControlActivity
//BluetoothLeService is the other activity where Read and write are defined
private static final DeviceControlActivity holder1 = new DeviceControlActivity();
public BluetoothGattCharacteristic reqChar;
public BluetoothLeService reqService;
private BluetoothLeService mBluetoothLeService;
private void displayGattServices(List gattServices)
{
if (gattServices == null) return;
String uuid = null;
for (BluetoothGattService gattService : gattServices)
{
uuid = gattService.getUuid().toString();
// Loops through available Characteristics.
for (BluetoothGattCharacteristic gattCharacteristic : gattCharacteristics)
{
uuid = gattCharacteristic.getUuid().toString();
//SampleGattAttributes.UNKNOWN_CHARACTERISTIC is the hardcoded uuid against which i am checking
if((uuid.equals((SampleGattAttributes.UNKNOWN_CHARACTERISTIC))))
{
holder1.reqChar = gattCharacteristic;
holder1.reqService = mBluetoothLeService;
//Call for write
byte [] byte1= {0x01, 0x10};
holder1.reqChar.setValue(byte1);
holder1.reqService.writeCharacteristic(holder1.reqChar);
//Call for read
holder1.reqService.readCharacteristic(holder1.reqChar);
```
**Result:** The read function is returning false and the write function is returning true, so the value is getting written successfully for the required characteristic (verified it).
**Please, could anyone help and tell why the read is not getting executed? Why is it still returning false when it has Read as property and proper value defined?**<issue_comment>username_1: The problem is in these few lines.
```
holder1.reqService.writeCharacteristic(holder1.reqChar);
//Call for read
holder1.reqService.readCharacteristic(holder1.reqChar);
```
After calling writeCharacteristic, if you issue any extra read/write/etc., the call will not be executed and will return false. You have to wait for BluetoothGattCallback.onCharacteristicWrite before doing further operations.
Upvotes: 2 <issue_comment>username_2: In the Android BLE implementation, the gatt operation calls need to be queued so that only one operation (read, write, etc.) is in effect at a time. So for example, after `gatt.readCharacteristic(characteristicX)` is called, you need to wait for the gatt callback `BluetoothGattCallback.onCharacteristicRead()` to indicate the read is finished. If you initiate a second gatt.readCharacteristic() operation before the previous one completes, the second one will fail (by returning false). This goes for all of the gatt.XXX() operations.
It's a little work, but I think the best solution is to create a command queue for all the gatt operations and run them one at a time. You can use the command pattern to accomplish this.
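A minimal sketch of such a queue, assuming it lives in the class that owns the `BluetoothGatt` (all names here are illustrative, not a fixed API):
```
import java.util.ArrayDeque;
import java.util.Queue;

// fields in the class that owns the BluetoothGatt
private final Queue<Runnable> gattQueue = new ArrayDeque<>();
private boolean gattBusy = false;

// e.g. enqueue(() -> mBluetoothGatt.readCharacteristic(myCharacteristic));
private synchronized void enqueue(Runnable operation) {
    gattQueue.add(operation);
    if (!gattBusy) {
        runNext();
    }
}

// call this from onCharacteristicRead()/onCharacteristicWrite() when an operation completes
private synchronized void runNext() {
    Runnable next = gattQueue.poll();
    gattBusy = (next != null);
    if (next != null) {
        next.run();
    }
}
```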
Upvotes: 0
|
2018/03/16
| 519 | 1,612 |
<issue_start>username_0: ```
bool isAnagram(string s1, string s2){
if(s1.length() != s2.length())
return false;
for(int i =0; i
```
I wrote this code to check for anagrams, but it ignores the requirement to treat upper-case and lower-case letters as equal. I have no idea how to do that. Can you help me? Thanks!<issue_comment>username_1: Convert both strings to lowercase before comparing:
```
std::transform(s1.begin(), s1.end(), s1.begin(), ::tolower);
std::transform(s2.begin(), s2.end(), s2.begin(), ::tolower);
```
Upvotes: 2 [selected_answer]<issue_comment>username_2: The easy way is to set both strings to lower case (or upper case) and then sort both of them. If `s1` and `s2` are anagrams of each other, the results should be the same.
Upvotes: 0 <issue_comment>username_3: You can use the algorithm library to convert uppercase to lowercase.
Before looping, add the lines below to convert the strings to lower case.
```
#include <algorithm>
#include <cctype>
std::transform(s1.begin(), s1.end(), s1.begin(), ::tolower);
std::transform(s2.begin(), s2.end(), s2.begin(), ::tolower);
```
Upvotes: 0 <issue_comment>username_4: You can simply use the power of the STL `algorithm`:
```
#include <iostream>
#include <string>
#include <algorithm>
#include <cctype>
using namespace std;
bool isSameChar(int a, int b)
{
return tolower(a) == tolower(b);
}
bool isAnagram(const string& a, const string& b)
{
return a.size() == b.size()
&& is_permutation(a.begin(), a.end(), b.begin(), isSameChar);
}
int main()
{
cout << isAnagram("AbcDE", "ebCda") << endl; // true
cout << isAnagram("AbcDE", "ebCdaa") << endl; // false
cout << isAnagram("AbcDE", "ebCde") << endl; // false
return 0;
}
```
Upvotes: 0
|
2018/03/16
| 882 | 2,812 |
<issue_start>username_0: I made a little program that finds all prime numbers from 0 to 5000, but the program crashes after a certain number of recursions and gives me this error:
at main.LargePrimeNumber.recursion(LargePrimeNumber.java:37)
What is causing this? My first guess is that my RAM fills up and crashes, but that doesn't make too much sense seeing as everything else is still running. My second idea is that my IDE (eclipse neon) has a max limit or that recursion overloads my IDE so it throws errors.
Below is my code
```
import java.util.ArrayList;
public class LargePrimeNumber {
static ArrayList PrimeDataBase = new ArrayList();
static int cd = 12; //current digit that is being compared to see if it's prime
public static void main(String[] args){
PrimeDataBase.add(2);
PrimeDataBase.add(3);
PrimeDataBase.add(5);
PrimeDataBase.add(7);
PrimeDataBase.add(11);
recursion(PrimeDataBase.size()-1);
}
public static void recursion(int dbsize){
boolean prime = true;
for(int i = 0; i
```
Here are the outputs:
```
13 is prime
17 is prime
19 is prime
23 is prime
29 is prime
31 is prime
37 is prime
41 is prime
43 is prime
47 is prime
53 is prime
59 is prime
...
4271 is prime
4273 is prime
4283 is prime
Exception in thread "main" java.lang.StackOverflowError
at java.lang.Integer.toString(Unknown Source)
at java.lang.String.valueOf(Unknown Source)
at main.LargePrimeNumber.recursion(LargePrimeNumber.java:32)
... // it gives me like 50 of ^ these errors
```
|
2018/03/16
| 622 | 1,987 |
<issue_start>username_0: I want to set image and title both on rightBarButton. Title will come first. I searched a lot but all the articles are related to leftBarButton. So can anyone please tell me, how to set both of them?
```
func methodForSettingButtonClicked() {
let buttonSetting = UIButton(type: .custom)
buttonSetting.frame = CGRect(x: 0, y: 5, width: 25, height: 25)
let imageReg = UIImage(named: "setting")
buttonSetting.setTitle("Settings", for: .normal)
buttonSetting.setImage(imageReg, for: .normal)
let leftBarButton = UIBarButtonItem()
leftBarButton.customView = buttonSetting
self.navigationItem.rightBarButtonItem = leftBarButton;
}
```
|
2018/03/16
| 186 | 603 |
<issue_start>username_0: ```
import random
import timeit
data = [random.randint(-10,10) for i in range(10)]
timeit.timeit('filter(lambda x : x>=0 ,"data")')
```
As shown in the code,
if I try to remove the "" from "data", it throws an error.
Why?
There are still single quotes containing the whole filter line.
Thank you for helping!!<issue_comment>username_1: The statement passed to `timeit` is evaluated in its own namespace, so the bare name `data` is not defined there; it must stay under quotes "" unless you define it yourself, like
```
data="data"
```
Upvotes: -1 <issue_comment>username_2: ```
timeit.timeit('filter(lambda x : x >= 0, {})'.format(data))
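# .format() interpolates the list's repr into the statement string before timing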
```
Upvotes: 0
|
2018/03/16
| 778 | 2,745 |
<issue_start>username_0: I have 3 tables:
* **Emp** (Id(PK), Name)
* **Address** (AddressId(PK), AddressType)
* **EmpAddress** (EmpId(FK), AddresId(FK))
One employee may have multiple address.
Sample data:
**Emp**
```
1 abc
2 pqr
```
**Address**
```
1 a
2 b
3 c
```
**EmpAddress**
```
1 1
1 2
1 3
```
Here empid 1 has all 3 addresses.
I want only one address at a time, based on availability:
* If adresstype a is available then display only a
* If adresstype c is available then display only c
* If adresstype b is available then display only b
Priority is a->c->b.
If only one is available, then display that one without any priority.
I wrote this query, but it is not working:
```
select *
from Emp
inner join EmpAddress on Emp.Id = EmpAddress.Emp
inner join Address on Address.Id = EmpAddress.Address_Id
where AddressType is NOT NULL
and AddressType = case
when AddressType = 'a' then 'a'
when AddressType = 'c' then 'c'
when AddressType = 'b' then 'b'
end
```<issue_comment>username_1: ```
select *
from Emp e
cross apply
(
select top 1 ea.AddresId, a.AddressType
from EmpAddress ea
inner join Address a on ea.AddresId = a.AddresId
where ea.EmpId = e.Id
order by case a.AddressType
when 'a' then 1
when 'c' then 2
when 'b' then 3
end
) a
```
Upvotes: 0 <issue_comment>username_2: One approach would be to use `ROW_NUMBER()` to assign a numerical priority to each address type based on the ordering `a > c > b`. Then, subquery to retain only the highest ranking address for each employee.
```
SELECT Id, Emp, AddressType
FROM
(
SELECT e.Id, ea.Emp, a.AddressType,
ROW_NUMBER() OVER (PARTITION BY e.Id
ORDER BY CASE WHEN a.AddressType = 'a' THEN 1
                                 WHEN a.AddressType = 'c' THEN 2
                                 WHEN a.AddressType = 'b' THEN 3 END) rn
FROM Emp e
INNER JOIN EmpAddress ea
ON e.Id = ea.Emp
INNER JOIN Address a
ON a.Id = ea.Address_Id
) t
WHERE t.rn = 1;
```
Upvotes: 2 [selected_answer]<issue_comment>username_3: You could also retrieve the records based on Priority via *TOP(1)* with `TIES`
```
SELECT
TOP(1) with TIES e.Id, *
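    -- WITH TIES returns every row tied on the ORDER BY value, i.e. the rank-1 row for each employee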
FROM Emp e
INNER JOIN EmpAddress ea ON e.Id = ea.Emp
INNER JOIN Address a ON a.Id = ea.Address_Id
ORDER BY ROW_NUMBER() OVER (PARTITION BY e.Id ORDER BY
CASE (a.AddressType) WHEN 'a' THEN 1
WHEN 'c' THEN 2 WHEN 'b' THEN 3 END)
```
Upvotes: 0
|
2018/03/16
| 650 | 2,409 |
<issue_start>username_0: I use AutoCompleteTextView for suggesting data based on what I entered.
I want to change the character color of text entered and show suggestions in a drop-down list.
[](https://i.stack.imgur.com/YE6ip.png)
How can I change the suggested word's text color or change its background to match the edit text?
```
```
and
```
autocomplete_country = (AutoCompleteTextView) findViewById(R.id.autocomplete_country);
ArrayList options = new ArrayList();
options.add("america");
options.add("england");
options.add("australia");
ArrayAdapter adapter = new ArrayAdapter(this, R.layout.spinner_tv, options);
autocomplete_country.setAdapter(adapter);
```
|
2018/03/16
| 2,520 | 8,473 |
<issue_start>username_0: I have a hash which has many hashes inside, and a value may be an array that itself consists of many hashes. I want to print every key/value pair. If a value is an array, then it has to print
```
"pageOfResults": "Array" # I don't want actual array here, I want the string "Array" to be printed.
```
Or, if a hash follows, then it needs to print
```
"PolicyPayment": "Hash"
```
otherwise it needs to print the key and value
```
Key -> "CurrencyCode":
Value-> "SGD",
```
*Hash follows*
```
a={
"PageOfResults": [
{
"CurrencyCode": "SGD",
"IpAddress": nil,
"InsuranceApplicationId": 6314,
"PolicyNumber": "SL10032268",
"PolicyPayment": {
"PolicyPaymentId": 2188,
"PaymentMethod": "GIRO"
},
"InsuranceProductDiscountDetail": nil,
"ProductDetails": {
"Results": [
{
"Id": 8113,
"InsuranceProductId": 382,
"ApplicationProductSelectedId": 62043,
},
"InsuranceProduct": {
"InsuranceProductId": 382,
"ProductCode": "TL70-90",
},
],
},
}
]
}
```
Program I have written to print these values
```
a.each do |key,value|
puts "key->" + key.to_s
if value.is_a?Array
value.each do |v|
v.each do |key,value|
puts "key->"+key.to_s
puts "value->"+value.to_s
end
end
else
puts "value->"+value.to_s
end
end
```
This program prints the first level of the hash and its values. I could make recursive calls to print all values as well,
but my question is: is there a Ruby-style way to write code that accomplishes this easily, or any better way?<issue_comment>username_1: I used recursion:
```
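# Mutual recursion: the array walker hands hash elements to the hash walker,
# and the hash walker hands array values back to the array walker.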
def traverse_multid_array(arr)
arr.each do |ele|
if ele.class == Array
traverse_multid_array(ele)
elsif ele.class == Hash
traverse_nested_hash(ele)
end
end
end
def traverse_nested_hash(hash)
hash.each do |key,value|
if value.class == Array
puts "#{key} => Array"
traverse_multid_array(value)
elsif value.class == Hash
puts "#{key} => Hash"
traverse_nested_hash(value)
else
puts "Key => #{key}"
puts "Value => #{value}"
end
end
end
traverse_nested_hash(a)
```
Output:
```
PageOfResults => Array
Key => CurrencyCode
Value => SGD
Key => IpAddress
Value =>
Key => InsuranceApplicationId
Value => 6314
Key => PolicyNumber
Value => SL10032268
PolicyPayment => Hash
Key => PolicyPaymentId
Value => 2188
Key => PaymentMethod
Value => GIRO
Key => InsuranceProductDiscountDetail
Value =>
ProductDetails => Hash
Results => Array
Key => Id
Value => 8113
Key => InsuranceProductId
Value => 382
Key => ApplicationProductSelectedId
Value => 62043
InsuranceProduct => Hash
Key => InsuranceProductId
Value => 382
Key => ProductCode
Value => TL70-90
```
Upvotes: 1 <issue_comment>username_2: You can refine `Hash#each` as described [here](https://stackoverflow.com/a/49167393), i.e. add this code:
```
module HashRecursive
refine Hash do
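# When recursive is true, yield [key_path, value] for every nesting level;
# key_path is the array of keys leading down to the value.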
def each(recursive=false, &block)
if recursive
Enumerator.new do |yielder|
self.map do |key, value|
value.each(recursive=true).map do |key_next, value_next|
yielder << [[key, key_next].flatten, value_next]
end if value.is_a?(Hash)
yielder << [[key], value]
end
end.entries.each(&block)
else
super(&block)
end
end
alias_method(:each_pair, :each)
end
end
using HashRecursive
```
After that, this code will do *exactly* what you've asked for:
```
def whatsInside(hashOrArray)
hashOrArray.each(recursive=true) do |key, value|
type = value.class.to_s
case type
when "Array", "Hash"
puts key.pop.to_s.inspect+": "+type.inspect
value.each do |valueInArray|
whatsInside(valueInArray)
end if value.is_a?(Array)
else
puts "Key -> "+key.pop.to_s.inspect+":"
puts "Value-> "+value.inspect+","
end
end
end
whatsInside(a)
```
The output is following:
```
Key -> "CurrencyCode":
Value-> "SGD",
Key -> "IpAddress":
Value-> nil,
Key -> "InsuranceApplicationId":
Value-> 6314,
Key -> "PolicyNumber":
Value-> "SL10032268",
Key -> "PolicyPaymentId":
Value-> 2188,
Key -> "PaymentMethod":
Value-> "GIRO",
"PolicyPayment": "Hash"
Key -> "InsuranceProductDiscountDetail":
Value-> nil,
"Results": "Array"
Key -> "Id":
Value-> 8113,
Key -> "InsuranceProductId":
Value-> 382,
Key -> "ApplicationProductSelectedId":
Value-> 62043,
Key -> "InsuranceProductId":
Value-> 382,
Key -> "ProductCode":
Value-> "TL70-90",
"InsuranceProduct": "Hash"
"ProductDetails": "Hash"
```
**However**, I assume this is what you want:
```
def whatsInside(hashOrArray)
hashOrArray.each(recursive=true) do |key, value|
if value.is_a?(Array)
puts "Entering array #{key}"
value.each { |valueInArray| whatsInside(valueInArray) }
else
puts "#{key} => #{value}" unless value.is_a?(Hash)
end
end
end
whatsInside(a)
```
Will return this:
```
Entering array [:PageOfResults]
[:CurrencyCode] => SGD
[:IpAddress] =>
[:InsuranceApplicationId] => 6314
[:PolicyNumber] => SL10032268
[:PolicyPayment, :PolicyPaymentId] => 2188
[:PolicyPayment, :PaymentMethod] => GIRO
[:InsuranceProductDiscountDetail] =>
Entering array [:ProductDetails, :Results]
[:Id] => 8113
[:InsuranceProductId] => 382
[:ApplicationProductSelectedId] => 62043
[:InsuranceProduct, :InsuranceProductId] => 382
[:InsuranceProduct, :ProductCode] => TL70-90
```
Upvotes: 1 <issue_comment>username_3: A simple **`λ`** would do the trick:
```
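# Whichever of the key/value pair is itself Enumerable is recursed into
# (reusing the `enum` parameter as a scratch variable); plain pairs are printed.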
printer = ->(enum) do
enum.each do |k, v|
enum = [k, v].detect(&Enumerable.method(:===))
if enum.nil?
puts("Key -> #{k}\nValue -> #{v}")
else
puts ("#{k} -> 'Array'") if v.is_a?(Array)
printer.(enum)
end
end
end
printer.(a)
```
Producing:
```
PageOfResults -> 'Array'
Key -> CurrencyCode
Value -> SGD
Key -> IpAddress
Value ->
Key -> InsuranceApplicationId
Value -> 6314
Key -> PolicyNumber
Value -> SL10032268
Key -> PolicyPaymentId
Value -> 2188
Key -> PaymentMethod
Value -> GIRO
Key -> InsuranceProductDiscountDetail
Value ->
Results -> 'Array'
Key -> Id
Value -> 8113
Key -> InsuranceProductId
Value -> 382
Key -> ApplicationProductSelectedId
Value -> 62043
Key -> InsuranceProductId
Value -> 382
Key -> ProductCode
Value -> TL70-90
```
Upvotes: 4 [selected_answer]<issue_comment>username_4: ```
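# Indentation helpers: @pos tracks the current print column in steps of @inc.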
@pos = 0
@inc = 2
def indent() @pos += @inc end
def undent() @pos -= @inc end
def prs(str); print ' '*@pos; puts str; end
def pr(k,v); prs "key->#{k}"; prs "value->#{v}"; end
```
```
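# Walk the hash: announce Array/Hash values, then recurse one indent level
# deeper using the helpers above.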
def print_values(h)
h.each do |k,v|
case v
when Array
pr(k, "Array")
indent
v.each do |h|
prs "Hash"
indent
print_values(h)
undent
end
undent
when Hash
pr(k, "Hash")
indent
print_values(v)
undent
else
pr(k,v)
end
end
end
```
```
print_values a
key->PageOfResults
value->Array
Hash
key->CurrencyCode
value->SGD
key->IpAddress
value->
key->InsuranceApplicationId
value->6314
key->PolicyNumber
value->SL10032268
key->PolicyPayment
value->Hash
key->PolicyPaymentId
value->2188
key->PaymentMethod
value->GIRO
key->InsuranceProductDiscountDetail
value->
key->ProductDetails
value->Hash
key->Results
value->Array
Hash
key->Id
value->8113
key->InsuranceProductId
value->382
key->ApplicationProductSelectedId
value->62043
Hash
key->InsuranceProduct
value->Hash
key->InsuranceProductId
value->382
key->ProductCode
value->TL70-90
```
Upvotes: 1
|
2018/03/16
| 382 | 1,537 |
<issue_start>username_0: We are considering using the Yocto build system to create our own distro for our new project.
We have one query: how does an OS update work in the Yocto build system?
Suppose I want to upgrade the following:
1. Bootloader ( u-boot )
2. Kernel Image ( zImage/bzImage )
3. Adding files to the root file system (e.g. scripts or executables)
4. Adding/upgrading packages (e.g. updating dropbear to the latest version)
How can we achieve this requirement? What should we develop to support this feature?
Are the updates monolithic, i.e. do we have to reflash everything as with Buildroot, or can partial updates add only what is necessary?
I see many people suggest including the swupdate layer for this, and I believe there are many such tools. Can you suggest which is the best one to use?
Thanks for your time and response<issue_comment>username_1: Which of the existing solutions is "the best" depends on your requirements, so this is something that you will need to decide yourself.
There is an overview page of solutions that work in combination with Yocto. It has links to further information:
<https://wiki.yoctoproject.org/wiki/System_Update>
Upvotes: 3 [selected_answer]<issue_comment>username_2: You have two main solutions for system updates with Yocto:
1. swupdate
2. RAUC
As you see in most blogs and manuals, the only disadvantage of RAUC is that it updates only the kernel and filesystem, not U-Boot. So which one you need depends on your application.
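For context, a minimal sketch of wiring swupdate into a build, assuming its public layer (the URL, paths, and variable below are assumptions, not something stated in this answer):
```
# Fetch the meta-swupdate layer next to your other layers (path assumed)
git clone https://github.com/sbabic/meta-swupdate.git
# Register it with the current build directory
bitbake-layers add-layer ./meta-swupdate
# Then pull the updater into your image, e.g. in conf/local.conf:
#   IMAGE_INSTALL_append = " swupdate"
```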
Upvotes: 0
|
2018/03/16
| 358 | 1,532 |
<issue_start>username_0: Suppose you are creating a service with the often-used example of `department` and `employee` resources. The endpoints for retrieving these might look like
```
/api/departments
/api/employees?department=21
```
Now suppose an iPhone client app wanted to use this service to show a big table of employees and the departments they belong to. It seems a developer would first need to make a request to get all the departments, and then, for each department found, make a request to get that department's employees. Finally, on the client side, the data would need to be stitched together and the joined result presented to the user.
Is there a different REST principle to follow here that would lead to fewer requests? Should one create an `/api/employeesByDepartment` endpoint that returns everything in one shot?
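One common pattern (purely an illustration; the parameter name, response shape, and data are assumptions) is an expand/embed query parameter, so a single request returns departments with their employees:
```
GET /api/departments?expand=employees

{
  "departments": [
    {
      "id": 21,
      "name": "Engineering",
      "employees": [
        { "id": 7, "name": "Ada" },
        { "id": 9, "name": "Grace" }
      ]
    }
  ]
}
```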
|
2018/03/16
| 262 | 1,099 |
<issue_start>username_0: I have created a stored procedure to import a text file into a SQL database from a specific path. My problem is that the file I am going to import keeps changing regularly, so I have to change the file name in the procedure each time to import the newly added file. I want the procedure to import the new file without my changing the file name for the given path.
Can you help me overcome this problem?
Thank you in advance.
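A hedged sketch of one common approach: make the file name a parameter and build the `BULK INSERT` dynamically (the table name, folder, and format options below are assumptions; validate `@FileName` before concatenating to avoid SQL injection):
```
CREATE PROCEDURE dbo.ImportLatestFile
    @FileName NVARCHAR(260)
AS
BEGIN
    -- Build the statement dynamically because BULK INSERT cannot take a variable path
    DECLARE @sql NVARCHAR(MAX) =
        N'BULK INSERT dbo.StagingTable ' +
        N'FROM ''C:\Import\' + @FileName + N''' ' +
        N'WITH (FIELDTERMINATOR = '','', ROWTERMINATOR = ''\n'', FIRSTROW = 2);';
    EXEC sys.sp_executesql @sql;
END
```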
|