2018/03/14
<issue_start>username_0: I would like to set the text of the label. Unfortunately the number 36 will change on each page refresh. ``` TEXT HERE ``` I can get the ID by using: ``` var id = document.getElementsByName('Incident.CustomFields.c.other_action_taken')[0].getAttribute('id'); ``` How can I then use this to set the label text, i.e. target the label by the value of the `for` attribute in JavaScript (not jQuery)? As mentioned, the number will change each time, so I can't use the ID value of `rn_TextInput_36_Label` that the label currently has<issue_comment>username_1: Your element has the class `rn_Label`, so you can select by that. ``` var el = document.querySelector(".rn_Label"); ``` If you need to get more specific, you can include the tag name and part of the `for` attribute. ``` var el = document.querySelector("label.rn_Label[for^=rn_TextInput_][for$=_Incident.CustomFields.c.other_action_taken]"); ``` So this last selector selects the first element that: * has tag name `label` * has class name `rn_Label` * has a `for` attribute that starts with `rn_TextInput_` * has a `for` attribute that ends with `_Incident.CustomFields.c.other_action_taken` And of course you can use `querySelectorAll` to select all elements on the page that meet those criteria. 
Upvotes: 0 <issue_comment>username_2: ``` var id = document.getElementsByName('Incident.CustomFields.c.other_action_taken')[0].getAttribute('id'); document.querySelector("label[for='"+id+"']").innerText = 'new text'; ``` It gets the ID and then uses querySelector to get the related label and sets the innerText Upvotes: -1 [selected_answer]<issue_comment>username_3: You can do the same as in this code: ``` var input = document.getElementsByName('Incident.CustomFields.c.other_action_taken')[0]; if(input.labels.length > 0) { var labelID = input.labels[0].id; document.getElementById(labelID).innerHTML = 'New text for this label'; } else { alert('there is no label for this input'); } ``` <https://jsfiddle.net/SETha/75/> Upvotes: 0
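The two answers above only differ in how the selector string is built. A standalone sketch of that string-building logic (the helper names are hypothetical, not from any library), which can be checked without a DOM:

```javascript
// Hypothetical helpers sketching the selector strings used in the answers above.

// username_1's approach: match the label by class plus the stable parts of `for`.
function buildLabelSelector(fieldName) {
  return 'label.rn_Label[for^=rn_TextInput_][for$=_' + fieldName + ']';
}

// username_2's approach: look up the input's id first, then match the label exactly.
function labelForId(id) {
  return "label[for='" + id + "']";
}

console.log(buildLabelSelector('Incident.CustomFields.c.other_action_taken'));
console.log(labelForId('rn_TextInput_36_Label'));
```

Either string can then be passed to `document.querySelector`, and the matched label's `innerText` set as in the accepted answer.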
2018/03/14
<issue_start>username_0: First, new to creating Bots so please be patient. I created a KB using QnAMaker. Created a new Bot in Azure Bot Service. The new bot was created using the Template for QnA. Followed the steps below per documentation: > > In Azure portal, open the newly created Web App Bot resource. > Click on "Application Settings" and in the App Settings list, add QnASubscriptionKey and QnAKnowledgebaseId. The corresponding values can be obtained from the KB Settings page in <http://qnamaker.ai>. > The QnAMaker-enabled Azure bot service app is now ready to use. To try it out click on "Test in Web Chat" to chat with your QnA bot. > > > **Test in Web Chat does not respond** Also created a new bot using the Basic template, made no other updates, and its Test in Web Chat does send a response. Again, new to the process but I have read a great deal of documentation and nothing speaks to this issue specifically. Any help would be greatly appreciated.<issue_comment>username_1: I created a QnAMaker bot this weekend with Bot Service. The documentation is a little confusing at the moment, although Microsoft generally refines it over time until it's quite good. Here's what I did to get this going. 1. Provisioned a QnAMaker service at qnamaker.ai. I created a knowledge base, saved and retrained, and published. To make sure everything is good on the QnAMaker service, go to the Test tab (<https://qnamaker.ai/Edit/Test?kbId=:your-service-id>) to make sure you can chat with it and it responds as expected. 2. Created a new Web App bot by going to the portal, clicking "Create a resource", choosing "AI + Cognitive Services", then "Web App Bot". 3. When entering the Web App Bot settings, I made sure to choose a Basic C# bot, and chose the "Question and Answer" template. 4. Once you provision the Web App Bot service, you'll also have a Web App provisioned as well. You'll need to create a web application that will answer requests from the web, hand them to your QnAMaker service, and return the results. 
Navigate to your Web App Bot service, then choose the Build menu option under Bot Management. Then Download the zip file containing your starter code. 5. Open the starter code. You'll need to add some keys to your web.config file. Make sure that you have keys for the following, and that they're populated: MicrosoftAppId, MicrosoftAppPassword, QnaSubscriptionKey, QnAKnowledgebaseId, and AzureWebJobsStorage. If memory serves, these values are read within the code, but there are no empty stubs in the web.config that prompt you to enter them. This was a little frustrating. 6. After updating web.config, publish the web app to your Azure Web App instance associated with your bot. 7. Now go back to your Web App Bot in the portal. Under Bot Management, go to the Settings page. You're going to need to enter the Messaging endpoint so that your bot service knows where to send HTTP requests to your web app, which will in turn talk to your QnAMaker service. In this example project, your messaging endpoint should be `https://[web app name].azurewebsites.net/api/messages`. NOW you're ready to Test in Web Chat. Everything should link up then. Upvotes: 1 <issue_comment>username_2: I had this issue just now. It was caused by having extraneous data at the end of my QNA service key, something like (format=json) which somehow ended up after the key. I suggest you re-copy and paste the knowledgebase id and key into the fields and make sure they are the correct length with no garbage. Apart from not returning responses it gave no other clue as to what might be wrong. Upvotes: 0
2018/03/14
<issue_start>username_0: I've installed Yii2 advanced app, I customized the signup according to my user database. After signing up I tried to log in and it says `"Incorrect username or password"`, my password is <PASSWORD> also I've checked it many times and it still does not work. [![User Table](https://i.stack.imgur.com/XtgvF.png)](https://i.stack.imgur.com/XtgvF.png) [![As you can see here, the user information after signing up it records the data to the user table.](https://i.stack.imgur.com/2zmIR.png)](https://i.stack.imgur.com/2zmIR.png) The signup model ``` class SignupForm extends Model { public $username; public $email; public $password; public $first_name; public $middle_name; public $last_name; public $contact; public $birth_date; public $type; public $external_type; public $status; public $region_id; public $barangay_id; public $province_id; public $city_municipal_id; /** * {@inheritdoc} */ public function rules() { return [ ['username', 'trim'], ['username', 'required'], ['username', 'unique', 'targetClass' => '\common\models\User', 'message' => 'This username has already been taken.'], ['username', 'string', 'min' => 2, 'max' => 255], ['email', 'trim'], ['email', 'required'], ['email', 'email'], ['email', 'string', 'max' => 255], ['email', 'unique', 'targetClass' => '\common\models\User', 'message' => 'This email address has already been taken.'], ['password', 'required'], ['password', 'string', 'min' => 6], ['first_name', 'required'], ['first_name', 'string', 'max' => 45], ['middle_name', 'string', 'max' => 45], ['last_name', 'required'], ['last_name', 'string', 'max' => 45], ['contact', 'required'], ['contact', 'string', 'max' => 11], ['birth_date', 'required'], ['type', 'required'], ['type', 'string', 'max' => 45], ['external_type', 'string', 'max' => 45], ['status', 'string', 'max' => 45], ['region_id', 'required'], ['barangay_id', 'required'], ['province_id', 'required'], ['city_municipal_id', 'required'], ]; } /** * Signs user up. 
* * @return User|null the saved model or null if saving fails */ public function signup() { if (!$this->validate()) { return null; } $user = new User(); $user->username = $this->username; $user->email = $this->email; $user->setPassword($<PASSWORD>); $user->generateAuthKey(); $user->first_name = $this->first_name; $user->middle_name = $this->middle_name; $user->last_name = $this->last_name; $user->contact = $this->contact; $user->birth_date = $this->birth_date; $user->type = $this->type; $user->external_type = $this->external_type; $user->status = $this->status; $user->region_id = $this->region_id; $user->barangay_id = $this->barangay_id; $user->province_id = $this->province_id; $user->city_municipal_id = $this->city_municipal_id; return $user->save() ? $user : null; } } ``` Where did I go wrong here? It is confusing; I think there's nothing wrong in my code because I followed the proper installation setup of the Yii2 Advanced App. Login Model ``` public function login() { if ($this->validate()) { return Yii::$app->user->login($this->getUser(), $this->rememberMe ? 3600 * 24 * 30 : 0); } return false; } ```<issue_comment>username_1: Although you haven't added the action for the registration or signup, at the point where you are calling the `$model->signup()` function you must check its result inside an `if` statement and then add a call to `\Yii::$app->user->login($userModel);` inside; it will log you in after signup. Your `signup()` function returns the user model object after inserting the user in the table. See the below code sample ``` if(($userModel=$model->signUp())!==null){ \Yii::$app->user->login($userModel); } ``` Hope it helps you out Upvotes: 0 <issue_comment>username_2: Have you tried setting your user model class to implement the identity interface? 
First declare your class like so ``` class MyUser extends ActiveRecord implements IdentityInterface ``` now somewhere in your class you need to implement these methods ``` public static function findIdentity($id) { return self::findOne(['id'=>$id]); } public static function findIdentityByAccessToken($token, $type = null) { throw new NotSupportedException("Validation method not supported as of yet."); } public function getId() { return $this->id; } public function getAuthKey() { return $this->auth_key; } public function validateAuthKey($authKey) { return $this->auth_key === $authKey; } ``` next go to the components section of your /config/web.php and search for the `'user'` component ``` 'user' => [ 'identityClass' => 'app\models\MyUser', //<---Your model class 'enableAutoLogin' => true, ], ``` I think with that you're good to go. Let me know if this doesn't work. Upvotes: 0 <issue_comment>username_3: Check your user table in phpMyAdmin and look at the status column; if this value is not equal to 10, change it to 10. Upvotes: 2
2018/03/14
<issue_start>username_0: I would like to display both of my buttons next to each other horizontally. So far I can only display them from top to bottom. With the following code, what would I have to change? ``` new Container( child: new Column( children: [ new RaisedButton( child: new Text("LogIn"), color: Colors.blueAccent[600], onPressed: null, ), new RaisedButton( child: new Text("SignUp"), color: Colors.blueAccent[600], onPressed: null, ), ], ), ), ```<issue_comment>username_1: `Column` is for items vertically arranged (hence a column), you are looking for `Row`. Just replace `Column` with `Row`, the rest of the code is fine. You can also use an `Expanded` if you want to fill all the available space. Upvotes: 6 [selected_answer]<issue_comment>username_2: Do something like this ``` new Row(children: [ new Button( ... ... ), new Button( ... ... ) ]) ``` Upvotes: 1 <issue_comment>username_3: If you have a column with text in it, and you want two buttons below that text right next to each other, you can use ButtonTheme.bar Below is some of Flutter's starting code with it. You could plug it into the starter to see it in action: Just paste this after the second new text (the one with $counter in it) ``` new ButtonTheme.bar( child: new ButtonBar( alignment: MainAxisAlignment.center, children: [ new RaisedButton( onPressed: _incrementCounter, child: new Icon(Icons.add), color: Colors.green, ), new RaisedButton( onPressed: _decrementCounter, child: new Icon(Icons.remove), color: Colors.red, ), ], ), ), ``` Upvotes: 2 <issue_comment>username_4: ``` Wrap( children: [ RaisedButton( ... ), RaisedButton( ... ), RaisedButton( ... ), ], ), ``` Upvotes: 4 <issue_comment>username_5: Just replace Column with Row in your code and it will work. I want to add some extra here: suppose you have a Column wrapped inside a Row; how will you display the buttons side by side then? 
Simply change Column to Row and Row to Column as follows: ``` child: Column( children: [ Row( mainAxisAlignment: MainAxisAlignment.spaceBetween, children: [ ... ], ), ... ], ), ``` Upvotes: -1 <issue_comment>username_6: ``` child: Column( mainAxisAlignment: MainAxisAlignment.center, children: [ Text( "username_6", ), Text( "Varanasi, India", ), Row( mainAxisAlignment: MainAxisAlignment.center, children: [ FlatButton( onPressed: () { var player = AudioCache(); player.play('abc.mp3'); }, child: Text('Play')), FlatButton( onPressed: () { var player = AudioCache(); player.play('abc.mp3'); }, child: Text('Pause')), ], ) ], ), ``` Upvotes: 2
2018/03/14
<issue_start>username_0: I wan't to have multiple AdminSite on my Django project and I don't want to give every user the superuser role just to see and edit the models of the application. Here is the layout of my project: ``` > djangoApp > djangoApp - settings.py - etc... > AAA - admin.py - urls.py - etc.. > BBB - admin.py - urls.py - etc.. > CCC - admin.py - urls.py - etc.. > ressources - models.py - etc.. > core - admin.py - auth.py - models.py - views.py ``` Here is the admin.py of BBB (I use a different database for each AdminSite): ``` from django.contrib import admin from django.contrib.admin import AdminSite, site from core.admin import UserAuthenticationForm from ressources.models import Adresse class BBBAdminSite(AdminSite): site_header = 'BBB admin' login_form = UserAuthenticationForm login_template = "core/login.html" def has_permission(self,request): user = request.user return user.is_active and user.is_staff and (user.account_id == 100 or user.account_id == 0) class AdminModel(admin.ModelAdmin): using = 'DATABASE_NAME' def has_add_permission(self, request): user = request.user return user.is_active and user.is_staff and (user.account_id == 100 or user.account_id == 0) def has_change_permission(self, request, obj=None): user = request.user return user.is_active and user.is_staff and (user.account_id == 100 or user.account_id == 0) def has_delete_permission(self, request, obj=None): user = request.user return user.is_active and user.is_staff and (user.account_id == 100 or user.account_id == 0) def save_model(self, request, obj, form, change): user = request.user obj.account_id = user.account_id obj.save(using=self.using) def delete_model(self, request, obj): obj.delete(using=self.using) def get_queryset(self, request): return super(AdminModel, self).get_queryset(request).using(self.using) def formfield_for_foreignkey(self, db_field, request=None, **kwargs): return super(AdminModel, self).formfield_for_foreignkey(db_field, request=request, 
using=self.using, **kwargs) def formfield_for_manytomany(self, db_field, request=None, **kwargs): return super(AdminModel, self).formfield_for_manytomany(db_field, request=request, using=self.using, **kwargs) bbbAdmin = BBBAdminSite(name='bbbAdmin') bbbAdmin.register(Adresse, AdminModel) ### The user can see this in the admin dashboard only if he is superuser ``` Here is the urls.py of the same app: ``` from django.contrib import admin from django.urls import include, path, re_path from BBB.admin import bbbAdmin urlpatterns = [ path('', bbbAdmin.urls), ] ``` I use a different AuthBackend to authenticate my users: core/auth.py: ``` from django.conf import settings from django.contrib.auth.hashers import check_password from .models import User class AuthBackend(object): def has_perm(self, user_obj, perm, obj=None): if(obj != None): return (obj.account_id == user_obj.account_id) return False def get_user(self, user_id): try: return User.objects.get(pk=user_id) except User.DoesNotExist: return None def authenticate(username=None, password=<PASSWORD>, account_id=None): try: user = User.objects.get(username=username, account_id=account_id) except User.DoesNotExist: return None pwd_valid = check_password(password, user.password) if pwd_valid: return user else: return None ``` I have registered the AuthBackend in my settings.py: ``` AUTHENTICATION_BACKENDS = ('core.auth.AuthBackend',) ``` I tried to change it to: ``` AUTHENTICATION_BACKENDS = ('core.auth.AuthBackend', 'django.contrib.auth.backends.ModelBackend',) ``` But it gives me the error > > You have multiple authentication backends configured and therefore must provide the `backend` argument or set the `backend` attribute on the user. > > > I have tried to put the user in a group which had all the permissions, but it still displays "you don't have permission to edit anything" when he logins. The user has is\_staff set to True<issue_comment>username_1: I have found the solution. 
I forgot to add the method `has_module_permission` to my AdminModel BBB/admin.py ``` class AdminModel(admin.ModelAdmin): [...] def has_module_permission(self,request): return True [...] ``` Upvotes: 4 [selected_answer]<issue_comment>username_2: For me, I created a custom admin site and made the mistake of initializing it in multiple places. To resolve this, initialize your admin site only once in a variable like so: ``` custom_admin_site = CustomAdminSite() ``` preferably in the same file it was declared in. Then import the variable holding that instance anywhere it needs to be used, like in urls.py for registering `custom_admin_site.urls` and wherever you register your models across apps. Initializing it somewhere else creates a whole new admin site, not the same one you initialized earlier, hence the no-permission error. Upvotes: 0
2018/03/14
<issue_start>username_0: After upgrading my REST service from Spring Boot 1.5.10 to 2.0.0 I encountered tests failing which passed before. Following scenario: ``` import org.mockito.internal.matchers.Null; import static org.springframework.test.web.servlet.result.MockMvcResultMatchers.jsonPath; ... .andExpect(jsonPath("img").value(Null.NULL)) ``` Fails now in Spring MVC 5 with the following message: > > java.lang.AssertionError: JSON path "img" > > Expected :isNull() > Actual :null > > > What is the correct way in Spring MVC 5 to assert that the value of the `jsonPath` is `null`?<issue_comment>username_1: Answering my own question as I found the solution by myself. You have to use the correct Matcher, in my case `org.hamcrest.core.IsNull` So I had to change to ``` import org.hamcrest.core.IsNull; ... andExpect(jsonPath("img").value(IsNull.nullValue())) ``` Upvotes: 6 [selected_answer]<issue_comment>username_2: You can use content().string(Matcher matcher) and then use the IsEmptyString matcher ``` result.andDo(print()) .andExpect(status().isNoContent()) .andExpect(content().string(IsEmptyString.isEmptyOrNullString())); ``` Upvotes: 1 <issue_comment>username_3: **April 2022**, **Hamcrest 2.2** ================================ [nullValue()](http://hamcrest.org/JavaHamcrest/javadoc/2.2/org/hamcrest/CoreMatchers.html#nullValue--) is a standalone static method importable by `org.hamcrest.CoreMatchers.nullValue`. So, the updated solution resolves to ```java import static org.hamcrest.CoreMatchers.nullValue; ... andExpect(jsonPath("img").value(nullValue())) ``` Upvotes: 2
2018/03/14
<issue_start>username_0: I am trying to add Google Analytics to a React Web Application. I know how to do it in HTML/CSS/JS sites and I have integrated it in an AngularJS app too. But, I'm not quite sure how to go about it when it comes to react. With HTML/CSS/JS, I had just added it to every single page. What I had done with AngularJS was adding GTM and GA script to index.html and added UA-labels to the HTML divs (and buttons) to get clicks. **How can I do that with React?** Please help!<issue_comment>username_1: **Update: August 2023** The old Package `react-ga` is now archived since it doesn't support Google Analytics version 4. There's a new package named [`react-ga4`](https://www.npmjs.com/package/react-ga4). Add it by running: `npm i react-ga4` **Initialization** ```js import ReactGA from "react-ga4"; ReactGA.initialize("your GA measurement id"); ``` **To report page view:** ```js ReactGA.send({ hitType: "pageview", page: "/my-path", title: "Custom Title" }); ``` **To report custom event:** ```js ReactGA.event({ category: "your category", action: "your action", label: "your label", // optional value: 99, // optional, must be a number nonInteraction: true, // optional, true/false transport: "xhr", // optional, beacon/xhr/image }); ``` **Update: Feb 2019** As I saw that this question is being searched a lot, I decided to expand my explanation. To add Google Analytics to React, I recommend using React-GA. Add by running: `npm install react-ga --save` **Initialization:** In a root component, initialize by running: ``` import ReactGA from 'react-ga'; ReactGA.initialize('Your Unique ID'); ``` **To report page view:** ``` ReactGA.pageview(window.location.pathname + window.location.search); ``` **To report custom event:** ``` ReactGA.event({ category: 'User', action: 'Sent message' }); ``` More instructions can be found in the [github repo](https://github.com/react-ga/react-ga) --- The best practice for this IMO is using react-ga. 
Have a look at the [github rep](https://github.com/react-ga/react-ga) Upvotes: 8 [selected_answer]<issue_comment>username_2: One other great library that you can check is [redux-beacon](https://github.com/rangle/redux-beacon). It gets integrated very easily with react/redux application and has a great documentation for it. ReactGA is good too but with redux-beacon, you won't clutter your app code with google analytics code as it works via its own middleware. Upvotes: 2 <issue_comment>username_3: Without using a package this is how I would do it: In your `index.js` (in the `render` method): ``` {/* Global site tag (gtag.js) - Google Analytics */} {injectGA()} ``` And outside the class: ``` const injectGA = () => { if (typeof window == 'undefined') { return; } window.dataLayer = window.dataLayer || []; function gtag() { window.dataLayer.push(arguments); } gtag('js', new Date()); gtag('config', 'YOUR_TRACKING_ID'); }; ``` Upvotes: 5 <issue_comment>username_4: I suggest embedding the [Segment script](https://github.com/segmentio/analytics-react) into your `index.html`, use the analytics library that is accessible on the `window` object, and add tracking calls onto React’s event handlers: ``` export default class SignupButton extends Component { trackEvent() { window.analytics.track('User Signup'); } render() { return ( Signup with Segment today! ); } } ``` I’m the maintainer of <https://github.com/segmentio/analytics-react>. I recommend checking it out if you want to solve this problem by using one singular API to manage your customer data, and be able to integrate into any other analytics tool (we support over 250+ destinations) without writing any additional code. Upvotes: 0 <issue_comment>username_5: If you prefer not to use a package this is how it can work in a react application. 
Add the "gtag" in index.html ```html window.dataLayer = window.dataLayer || []; function gtag() { dataLayer.push(arguments); } gtag("js", new Date()); gtag("config", "<GA-PROPERTYID>"); ``` In the submit action of the login form, fire off the event ``` window.gtag("event", "login", { event_category: "access", event_label: "login" }); ``` Upvotes: 5 <issue_comment>username_6: Looking at google's site <https://developers.google.com/analytics/devguides/collection/analyticsjs>, you could also add Google Analytics using this function: ```js const enableGA = () => { !function(A,n,g,u,l,a,r){A.GoogleAnalyticsObject=l,A[l]=A[l]||function(){ (A[l].q=A[l].q||[]).push(arguments)},A[l].l=+new Date,a=n.createElement(g), r=n.getElementsByTagName(g)[0],a.src=u,r.parentNode.insertBefore(a,r) }(window,document,'script','https://www.google-analytics.com/analytics.js','ga'); ga('create', 'UA-XXXXXXXX-X'); ga('send', 'pageview'); } ``` This way you don't need an external library, and it's pretty quick to setup. Upvotes: 0 <issue_comment>username_7: **Escape the analytics code with `dangerouslySetInnerHTML`** First you have of course to share the header code to all pages, e.g. as asked at: [React js do common header](https://stackoverflow.com/questions/38510111/react-js-do-common-header) Then, this Next.js answer <https://stackoverflow.com/a/24588369/895245> gives a good working code that should also work outside of Next.js. It escapes the analytics code with `dangerouslySetInnerHTML`: ``` ``` where you should replace `UA-47867706-3` with your own code. This code is exactly the code that Google gives, but with the following modification: we added the: ``` { page_path: window.location.pathname } ``` to `gtag('config'` for it to be able to get the visited path, since this is a JavaScript SPA. 
This generates the desired output on the browser: ``` window.dataLayer = window.dataLayer || []; function gtag(){dataLayer.push(arguments);} gtag('js', new Date()); gtag('config', 'UA-47867706-3', { page_path: window.location.pathname }); ``` The only other divergence from the exact code given by Google is the `async=""` vs `async`, but both of those are equivalent in HTML since it is a boolean attribute, see also: [What's the proper value for a checked attribute of an HTML checkbox?](https://stackoverflow.com/questions/7851868/whats-the-proper-value-for-a-checked-attribute-of-an-html-checkbox/24588369#24588369) Escaping with `dangerouslySetInnerHTML` is necessary because otherwise React interprets the code inside `script` as JSX and that fails with: ``` Syntax error: Unexpected token, expected "}" 21 | ``` Upvotes: 2 <issue_comment>username_8: There are 2 types of Google Analytics properties: Universal Analytics (`UA-xxxxxxxxx-x`) which is [deprecated with the end of life on 2023.07.01](https://support.google.com/analytics/answer/11583528?hl=en) and Google Analytics 4 property (`G-xxxxxxxxxx`) which is the replacement. [`react-ga`](https://github.com/react-ga/react-ga) was popular for Universal Analytics but [the maintainer doesn't plan to update it](https://github.com/react-ga/react-ga/issues/460#issuecomment-820369771) (related issues: [1](https://github.com/react-ga/react-ga/issues/460), [2](https://github.com/react-ga/react-ga/issues/493), [3](https://github.com/react-ga/react-ga/issues/520)) and it had maintenance issues ([1](https://github.com/react-ga/react-ga/issues/523)). [`react-ga4`](https://github.com/PriceRunner/react-ga4) and [`ga-4-react`](https://github.com/unrealmanu/ga-4-react) popped up as replacements but since these are similar wrappers you're at the mercy of the maintainers to implement and support all functionality. 
The simplest way to get started is to follow [Google's guide](https://developers.google.com/tag-platform/gtagjs/install): include `gtag` on the page and use it as `window.gtag`. This method works for both old and new tags and there's even TypeScript support via `@types/gtag.js`. The script can be loaded async [as recommended](https://developers.google.com/analytics/devguides/collection/analyticsjs#alternative_async_tag). `index.html` ```html window.dataLayer = window.dataLayer || []; function gtag(){dataLayer.push(arguments);} gtag('js', new Date()); gtag('config', 'G-xxxxxxxxxx') ``` Keep in mind that Google Analytics does automatic page tracking, but this will not work for every use case. For example, `hash` and `search` parameter changes are [not tracked](https://stackoverflow.com/a/73362255/2771889). This can lead to a lot of confusion. For example, when using [`HashRouter`](https://reactrouter.com/en/main/router-components/hash-router) or anchor links the navigation will not be tracked. To have full control over page view tracking you can [disable automatic tracking](https://developers.google.com/analytics/devguides/collection/gtagjs/pages#disable_pageview_measurement). See for a detailed explanation: [The Ultimate Guide to Google Analytics (UA & GA4) on React (Or Anything Else](https://csaba-apagyi.medium.com/the-ultimate-guide-to-google-analytics-ua-ga4-on-react-or-anything-else-ec9dc84c0211) Manual page tracking: <https://stackoverflow.com/a/63249329/2771889> You can see this working in [cra-typescript-starter](https://github.com/username_8/cra-typescript-starter) where I'm also setting the tag from an env var. Upvotes: 3
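All of the `gtag` snippets quoted in this thread rely on the same mechanism: `gtag` is only a thin wrapper that pushes its `arguments` onto `window.dataLayer`, which the async analytics script drains once it loads. A minimal sketch of that queue (with a stand-in `window` object so it runs outside a browser):

```javascript
// Stand-in for the browser's window object; in a real page this already exists
// and analytics.js later processes whatever has been queued on dataLayer.
const window = {};

window.dataLayer = window.dataLayer || [];
function gtag() {
  window.dataLayer.push(arguments);
}

gtag('js', new Date());
gtag('config', 'G-xxxxxxxxxx', { page_path: '/my-path' });

// Two calls queued, each stored as an arguments object.
console.log(window.dataLayer.length); // 2
```

This is why the order of operations in the snippets matters: calls made before the script loads are not lost, they simply wait in the array.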
2018/03/14
<issue_start>username_0: I'm using ag-Grid on my application and I have it running fine with the default theme (ag-theme-balham). On one specific component I want to change the header background color, but when I add the CSS to my component.scss file nothing happens. I added the ag-Grid CSS to my angular-cli.json file ``` "styles": [ "../node_modules/font-awesome/scss/font-awesome.scss", "../node_modules/ag-grid/dist/styles/ag-grid.css", "../node_modules/ag-grid/dist/styles/ag-theme-balham.css", "styles.scss" ], ``` In the component.scss file I have the following CSS ``` .ag-theme-balham .ag-header { background-color: #e0e0e0; } ``` But nothing happens, and the color does not get applied to the header.<issue_comment>username_1: Try using the ::ng-deep combinator <https://angular.io/guide/component-styles#deprecated-deep--and-ng-deep> ``` ::ng-deep .ag-theme-balham .ag-header { background-color: #e0e0e0; } ``` If that does not work, put your css in the global stylesheet and check if the styles are overridden correctly Upvotes: 3 <issue_comment>username_2: Override the header-cell class instead ``` .ag-theme-balham .ag-header-cell{ background-color: #e0e0e0; } ``` and if you have a header-group then ``` .ag-theme-balham .ag-header-cell, .ag-theme-balham .ag-header-group-cell{ background-color: #e0e0e0; } ``` Upvotes: 3 [selected_answer]<issue_comment>username_3: A bit of an old question this is, but to anyone coming into it now, this solution worked for me. The tags you need to write in are: ``` ::ng-deep .ag-theme-balham .ag-header-cell-label{ /* your css here*/} ``` The `!important` flag could also work Upvotes: 1 <issue_comment>username_4: Override `background-color` with `!important`. That is: ``` :host ::ng-deep .ag-header { background-color: #e0e0e0 !important; } ``` Upvotes: 0
2018/03/14
<issue_start>username_0: I have nested query where in i am filtering the present days data and later aggregating the data using date-histogram aggregation with hourly interval, but in date-histogram output it is returning the previous day's data also. is filter is not working? Here is my Query: ``` POST finalalertbrowser/_search?size=0 { "query": { "bool": { "must": [{ "match_phrase": { "projectId.keyword": "******************************88" } }], "filter": { "nested": { "path": "errors", "query": { "bool": { "filter": { "range": { "errors.time": { "gte": "now/d", "lte": "now" } } } } } } } } }, "aggs": { "errorData": { "nested": { "path": "errors" }, "aggs": { "errorMsg": { "filter": { "term": { "errors.errMsg.keyword": "Uncaught TypeError: $.snapUpdate is not a function" } }, "aggs": { "hourlyData": { "date_histogram": { "field": "errors.time", "interval": "hour", "time_zone": "+05:30" } } } } } } } } ``` and the output of the query is: ``` "aggregations": { "errorData": { "doc_count": 89644, "errorMsg": { "doc_count": 1861, "hourlyData": { "buckets": [ { "key_as_string": "2018-03-13T11:00:00.000+05:30", "key": 1520919000000, "doc_count": 3 }, { "key_as_string": "2018-03-13T12:00:00.000+05:30", "key": 1520922600000, "doc_count": 2 }, { "key_as_string": "2018-03-13T13:00:00.000+05:30", "key": 1520926200000, "doc_count": 2 }, { "key_as_string": "2018-03-13T14:00:00.000+05:30", "key": 1520929800000, "doc_count": 2 }, { "key_as_string": "2018-03-13T15:00:00.000+05:30", "key": 1520933400000, "doc_count": 4 }, { "key_as_string": "2018-03-13T16:00:00.000+05:30", "key": 1520937000000, "doc_count": 8 }, { "key_as_string": "2018-03-13T17:00:00.000+05:30", "key": 1520940600000, "doc_count": 6 }, { "key_as_string": "2018-03-13T18:00:00.000+05:30", "key": 1520944200000, "doc_count": 3 }, { "key_as_string": "2018-03-13T19:00:00.000+05:30", "key": 1520947800000, "doc_count": 1 }, { "key_as_string": "2018-03-13T20:00:00.000+05:30", "key": 1520951400000, "doc_count": 2 }, { 
"key_as_string": "2018-03-13T21:00:00.000+05:30", "key": 1520955000000, "doc_count": 4 }, { "key_as_string": "2018-03-13T22:00:00.000+05:30", "key": 1520958600000, "doc_count": 3 }, { "key_as_string": "2018-03-13T23:00:00.000+05:30", "key": 1520962200000, "doc_count": 2 }, { "key_as_string": "2018-03-14T00:00:00.000+05:30", "key": 1520965800000, "doc_count": 1 }, { "key_as_string": "2018-03-14T01:00:00.000+05:30", "key": 1520969400000, "doc_count": 2 }, { "key_as_string": "2018-03-14T02:00:00.000+05:30", "key": 1520973000000, "doc_count": 1 }, { "key_as_string": "2018-03-14T03:00:00.000+05:30", "key": 1520976600000, "doc_count": 1 }, { "key_as_string": "2018-03-14T04:00:00.000+05:30", "key": 1520980200000, "doc_count": 2 }, { "key_as_string": "2018-03-14T05:00:00.000+05:30", "key": 1520983800000, "doc_count": 2 }, { "key_as_string": "2018-03-14T11:00:00.000+05:30", "key": 1521005400000, "doc_count": 349 }, { "key_as_string": "2018-03-14T12:00:00.000+05:30", "key": 1521009000000, "doc_count": 300 }, { "key_as_string": "2018-03-14T13:00:00.000+05:30", "key": 1521012600000, "doc_count": 258 }, { "key_as_string": "2018-03-14T14:00:00.000+05:30", "key": 1521016200000, "doc_count": 247 }, { "key_as_string": "2018-03-14T15:00:00.000+05:30", "key": 1521019800000, "doc_count": 144 }, { "key_as_string": "2018-03-14T16:00:00.000+05:30", "key": 1521023400000, "doc_count": 63 }, { "key_as_string": "2018-03-14T17:00:00.000+05:30", "key": 1521027000000, "doc_count": 30 } ] } } } } ``` i have executed the query on 14th march 2018 , but query is giving output from 13 March 2018. 
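As a quick sanity check (a Python sketch, not part of the original question or any answer): the numeric `key` values in the buckets are epoch milliseconds, so converting one with the standard library and the query's `+05:30` time zone reproduces the `key_as_string` shown above — confirming the early buckets really do fall on 13 March local time. The `key_to_string` name is just an illustrative choice.

```python
from datetime import datetime, timedelta, timezone

# The "+05:30" time zone requested in the date_histogram aggregation
ist = timezone(timedelta(hours=5, minutes=30))

def key_to_string(key_ms):
    """Convert an epoch-millisecond bucket key to its local ISO-8601 form."""
    return datetime.fromtimestamp(key_ms / 1000, tz=ist).isoformat()

print(key_to_string(1520919000000))  # first bucket: 2018-03-13T11:00:00+05:30
print(key_to_string(1521005400000))  # a 14 March bucket: 2018-03-14T11:00:00+05:30
```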
Below is the mapping command : ``` PUT myIndexName { "mappings": { "webbrowsererror": { "properties": { "errors": { "type": "nested" , "properties": { "time":{"type":"date"} } } } } } } ``` **and below are the sample records in the index:** ``` _source": { "projectId": "******************", "sId": "bt82x3g8v1505001600027", "pId": "bt82x3g8v1505001600027.1", "pageURL": "***************************", "startTime": 1505001600027, "country": "unknown", "size": 2, "errors": [ { "sid": "bt82x3g8v1505001600027", "pid": "bt82x3g8v1505001600027.1", "browser": "Googlebot", "time": 1505001600028, "errMsg": "Uncaught SyntaxError: Invalid regular expression: missing /", "url": "********************************", "lineNo": 161, "colNo": 54 }, { "sid": "bt82x3g8v1505001600027", "pid": "bt82x3g8v1505001600027.1", "browser": "Googlebot", "time": 1505001600058, "errMsg": "Uncaught Error: Syntax error, unrecognized expression: #!", "url": "************************************************************", "lineNo": 3, "colNo": 69 } ] } "_source": { "projectId": "shaan-shaanstack-1-1517388493060", "sId": "bt82x3g8v1502496000027", "pId": "bt82x3g8v1502496000027.1", "startTime": 1502496000027, "country": "US", "size": 1, "errors": [ { "sid": "bt82x3g8v1502496000027", "pid": "bt82x3g8v1502496000027.1", "browser": "Chrome Mobile", "time": 1502496000128, "errMsg": "Uncaught Error: Syntax error, unrecognized expression: #!", "url": "**************************************************", "lineNo": 2, "colNo": 69 } ] } "_source": { "projectId": null, "sId": "888888888888888", "pId": "bt82x3g8v1505001600027.1", "pageURL": "******************", "startTime": 1505001600027, "country": "unknown", "size": 2, "errors": [ { "sid": "bt82x3g8v1505001600027", "pid": "bt82x3g8v1505001600027.1", "browser": "Googlebot", "time": 1505001600028, "errMsg": "Uncaught SyntaxError: Invalid regular expression: missing /", "url": "***********************************", "lineNo": 170, "colNo": 54 }, { "sid": 
"bt82x3g8v1505001600027", "pid": "bt82x3g8v1505001600027.1", "browser": "Googlebot", "time": 1505001600082, "errMsg": "Uncaught Error: Syntax error, unrecognized expression: #!", "url": "***********************************", "lineNo": 3, "colNo": 69 } ] } ```<issue_comment>username_1: Try using ::ng-deep combinator <https://angular.io/guide/component-styles#deprecated-deep--and-ng-deep> ``` ::ng-deep .ag-theme-balham .ag-header { background-color: #e0e0e0; } ``` If that does not work, put your css in the global stylesheet and check if the styles are overriden correctly Upvotes: 3 <issue_comment>username_2: Override the header-cell class instead ``` .ag-theme-balham .ag-header-cell{ background-color: #e0e0e0; } ``` and if you have header-group then ``` .ag-theme-balham .ag-header-cell, .ag-theme-balham .ag-header-group-cell{ background-color: #e0e0e0; } ``` Upvotes: 3 [selected_answer]<issue_comment>username_3: A bit of an old question this is but to anyone coming into it now, this solution worked for me. The tags you need to write in are : ``` ::ng-deep .ag-theme-balham .ag-header-cell-label{ /* your css here*/} ``` also the tag `!important` could also work Upvotes: 1 <issue_comment>username_4: Override `background-color` with `!important`. That is: ``` :host { ::ng-deep { background-color: } ``` Upvotes: 0
2018/03/14
507
1,911
<issue_start>username_0: I am trying to build an interactive shell-like terminal program in Python 3 for a school project. It should be easily expandable and not rely on modules outside the standard library. For this matter, I made a module, which is imported and contains something like this: ``` commandDictionary={ "command":''' Information for my program on how to handle command In multiple lines.''', } helpDictionary={ "command":''' Short Text for the help-command to display Also in multiple lines.''' } ``` What I want to do is list all keys from helpDictionary in string form when help is input. The output should look like this: ``` Help List of available commands: command1, command2, command3, command4 #Newline after 4 commands. command5, command6, commandWithALongName, command8 ``` My problem is that helpDictionary.keys() returns something like this: ``` ['command1', 'command2'] ``` and I don't want the brackets or the quotes. Is this possible?<issue_comment>username_1: So, your problem is how to print a list without brackets. There are several solutions. 1. Traverse the keys: `for k in helpDictionary.keys(): print(k)` 2. Or convert the list to a string, then print everything between the brackets: `li = list(helpDictionary.keys()) print(str(li)[1:-1])` Upvotes: -1 <issue_comment>username_2: If you don't want to retain the contents in memory, you can print any iterable you want with an arbitrary separator like this: ``` print(*helpDictionary.keys(), sep=', ') ``` If you do want the string for something, use `str.join` on the separator you want: ``` s = ', '.join(helpDictionary.keys()) print(s) ``` Both cases shown above will output the result in essentially arbitrary order because dictionaries use hash tables under the hood. If you want to sort commands lexicographically, replace `helpDictionary.keys()` with `sorted(helpDictionary.keys())`. Upvotes: 3 [selected_answer]
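A runnable sketch combining the `', '.join(...)` idea from the accepted answer with the four-per-line wrapping the question's desired output shows. The dictionary contents and the `format_help` helper are hypothetical placeholders, not taken from the question's actual module:

```python
helpDictionary = {  # hypothetical command names matching the question's example
    "command1": "help 1", "command2": "help 2", "command3": "help 3",
    "command4": "help 4", "command5": "help 5", "command6": "help 6",
    "commandWithALongName": "help 7", "command8": "help 8",
}

def format_help(commands, per_line=4):
    """Join command names with ', ', starting a new line every per_line entries."""
    names = list(commands)
    lines = [", ".join(names[i:i + per_line])
             for i in range(0, len(names), per_line)]
    return "\n".join(lines)

print(format_help(helpDictionary))
```

On Python 3.7+ the dict preserves insertion order, so the commands print in the order they were defined; wrap the argument in `sorted(...)` for lexicographic order.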
2018/03/14
250
845
<issue_start>username_0: I need to disable scrolling on ion-content because I'm using ion-scroll in a custom component. I already tried setting no-bounce and ion-fixed on ion-content. I also tried setting this CSS: ``` .content .scroll-content { overflow: hidden; } ``` but it doesn't work. I just need to disable scrolling on the content while preserving the scroll of the custom component. Thanks in advance.<issue_comment>username_1: Add this to your .scss file: ``` .no-scroll .scroll-content{ overflow: hidden; } ``` Then add the no-scroll class to your ion-content like this ``` .. ``` Upvotes: 1 <issue_comment>username_2: ``` ion-content { --overflow: hidden; } ``` Upvotes: 2 <issue_comment>username_3: You can do this in two different ways: In SCSS: ``` ion-content {--overflow: hidden} ``` OR in HTML: ``` ``` Upvotes: 3
2018/03/14
300
946
<issue_start>username_0: I'm completely new to three.js. I'd like the smoke canvas that sits on top of the image to have a transparent background. The canvas sits behind the image at the moment; that's why I have the CSS below. ``` canvas{ position: absolute; z-index: 2; top: 0; left: 0; } ``` I don't understand how to remove the black background of the canvas or make it transparent. Please have a look at my demo below: <https://codepen.io/davide77/pen/GxZgZB?editors=1010><issue_comment>username_1: Add this to your .scss file ``` .no-scroll .scroll-content{ overflow: hidden; } ``` Then add the no-scroll class to your ion-content like this ``` .. ``` Upvotes: 1 <issue_comment>username_2: ``` ion-content { --overflow: hidden; } ``` Upvotes: 2 <issue_comment>username_3: You can do this by two different approaches : In SCSS:- ``` ion-content {--overflow: hidden} ``` OR In Html:- ``` ``` Upvotes: 3
2018/03/14
1,500
4,551
<issue_start>username_0: I have a dataframe, where each row corresponds to a string. I want to remove certain words from these strings - here is how I do it using a lambda-function: ``` def remove_words(s): return s.apply(lambda x: [x for x in x if x not in ["name", "is", "m"]]) s = pd.DataFrame({"s":["Hi my name is Joe", "Hi my name is Hannah", "Hi my name is Brian"]}) remove_words(s.s) ``` This doesn't produce the correct result as it removes the `m`'s from all the words: ``` 0 [H, i, , y, , n, a, e, , i, s, , J, o, e] 1 [H, i, , y, , n, a, e, , i, s, , H, a, n, ... 2 [H, i, , y, , n, a, e, , i, s, , B, r, i, ... Name: s, dtype: object ``` The desired result I want is, however, ``` 0 ["Hi <NAME>"] 1 ["Hi my Hannah"] 2 ["Hi my Brian"] Name: s, dtype: object ``` * so it should only remove an `m` in the case where it is a separate letter in the string. Is it possible to do this with lambdas? --- Please see this counterexample when using regex: ``` regex = '|'.join((' %s ' % word for word in ["in", "out", "between"])) test = pd.DataFrame({"s": ["hello in out between inner in"]}) test.s.str.replace(regex, " ") ``` This doesn't weed out all `out` nor `in` ``` 0 hello out inner in Name: s, dtype: object ```<issue_comment>username_1: This probably? ``` def remove_words(s): return s.apply(lambda x: ["".join([x for x in x if x not in ["name", "is"]])]) ``` Upvotes: 1 <issue_comment>username_2: Using `.apply()` and a lambda is a bit inefficient for this case. Pandas [string methods](https://pandas.pydata.org/pandas-docs/stable/text.html) are built specifically for this: ``` >>> s1 = pd.Series(["Hi my name is Joe", "Hi my name is Hannah", "Hi my name isn't Brian"]) >>> words = ['name', 'is'] >>> regex = r' *\b(?:{})\b'.format('|'.join(words)) >>> s.str.replace(regex, '').str.strip() 0 Hi my Joe 1 Hi my Hannah 2 Hi my isn't Brian dtype: object ``` Why not use `.apply()` here? 
This method is a way of mapping a (native Python) callable to each element of a Pandas object one-by-one. In general, it can lead to doing more work at a slower pace than necessary. For example, in `["".join([x for x in x if x not in ["name", "is"]])]`, you have (1) a list comprehension, (2) `.split()`, and (3) a join operation for each individual "cell". With your updated case: ``` >>> s2 = pd.Series(["hello in out between inner in"]) >>> words = ['in', 'out', 'between'] >>> regex = r' *\b(?:{})\b'.format('|'.join(words)) >>> s2.str.replace(regex, '').str.strip() 0 hello inner dtype: object ``` The `str.strip()` is built to deal with cases where the result would otherwise be padded with whitespace on either or both sides. Using `.str` methods should be pretty quick even if they are method-chained. Upvotes: 2 <issue_comment>username_3: using regex [re.sub](https://docs.python.org/3.6/library/re.html#re.sub) ``` import re # construct the regex pattern words = ['name', 'is'] pattern = re.compile(r'\b({})\b'.format('|'.join(words))) # apply the function on the series s.s.apply(lambda x: re.sub('\s+', ' ', re.sub(pattern, '', x))) ``` re.sub is used twice: first to remove the words, next to replace extra spaces. Outputs: ``` 0 <NAME> Joe 1 <NAME> 2 <NAME> Brian ``` `\b` is the regex pattern for a word boundary. `\b(name|is|a)\b` will match the following, matches indicated by strike-through: ~~a~~ rose by any other ~~name~~. As you can see, even single-letter words are properly matched. However, there is one more issue that the above solution hasn't addressed yet. A match at the end of the sentence will leave a single space which isn't cleaned up by `re.sub(r'\s+', ' ', ...)`, so it may be necessary to `strip` spaces as well.
This would depend on your data. With that note, the final solution becomes: ``` s.s.apply(lambda x: re.sub('\s+', ' ', re.sub(pattern, '', x)).strip()) ``` Upvotes: 2 <issue_comment>username_4: Your problem is the list comprehension over the string (it will return each character). Do a split on the sentence first, like: ``` def remove_words(s, stop_words): return s.apply(lambda x: ' '.join(w for w in x.split() if w not in stop_words)) ``` If you want to return the sentence inside a list: ``` def remove_words_return_list(s, stop_words): return s.apply(lambda x: [' '.join(w for w in x.split() if w not in stop_words)]) ``` Call it like: ``` remove_words(s, ['name', 'is', 'm']) remove_words_return_list(s, ['name', 'is', 'm']) ``` Upvotes: 1
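The word-boundary approach from the answers above can be sanity-checked without pandas; this sketch (standard library only — `remove_words` here is an illustrative helper operating on plain strings, not the question's Series-based function) applies the same regex idea and shows that `inner` survives while standalone words are removed:

```python
import re

def remove_words(text, words):
    """Remove whole-word occurrences of `words`, then collapse leftover spaces."""
    # \b ensures only whole words match, so "in" does not match inside "inner"
    pattern = re.compile(r"\b(?:{})\b".format("|".join(map(re.escape, words))))
    return re.sub(r"\s+", " ", pattern.sub("", text)).strip()

print(remove_words("hello in out between inner in", ["in", "out", "between"]))  # hello inner
print(remove_words("Hi my name is Joe", ["name", "is"]))  # Hi my Joe
```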
2018/03/14
1,178
3,788
<issue_start>username_0: Consider this simple makefile: ``` all: output.txt # The actual build command won't be this simple. # It'll be more like "some-compiler file1.txt", # which includes file2.txt automatically. output.txt: file1.txt cat file1.txt file2.txt > output.txt file2.txt: echo "heyo" > file2.txt file1.txt: file2.txt ``` On first run, Make recognizes that `file2.txt` is a dependency of `file1.txt`, and so it needs to be built for `output.txt` to be built. Thus, it runs `echo "heyo" > file2.txt` and then `cat file1.txt file2.txt > output.txt`. However, on subsequent runs, **if `file2.txt` is changed, Make doesn't rebuild**! If `file1.txt` is changed it does, but not for `file2.txt`. It just gives the dreaded `make: Nothing to be done for 'all'.` message. One hacky solution I've seen people suggest is to do the following: ``` all: output.txt output.txt: file1.txt file2.txt cat file1.txt file2.txt > output.txt ``` However, that's not possible in my case, as my secondary dependencies (the lines like `file1.txt: file2.txt`) are dynamically generated using `include`. How do I make sure Make checks for modifications **all the way up the tree** when I have **multiple levels of dependencies**?<issue_comment>username_1: Your makefile **neither generates nor updates** `file1.txt` at all (i.e.: `file1.txt` must exist at the moment of running `make`). It contains *no recipe* for generating `file1.txt` from `file2.txt`. It has just an *empty rule* (i.e.: a *rule without a recipe*): ``` file1.txt: file2.txt ``` Since `file1.txt` is a prerequisite of `output.txt`, this empty rule just implies that `file2.txt` must exist for `output.txt` to be built; it does not even update `file1.txt` when `file2.txt` is generated. Since `file1.txt` is the only prerequisite of `output.txt` and `file1.txt` is never updated by `make`, once `output.txt` is generated, it remains always up-to-date (provided `file1.txt` is not externally updated).
`file2.txt` being changed never causes `output.txt` to be rebuilt because: * it is not a prerequisite of `output.txt`. * it does not update `file1.txt` (which is the only prerequisite of `output.txt`). --- ### Solution Given your current `output.txt` rule: ``` output.txt: file1.txt cat file1.txt file2.txt > output.txt ``` If you want `output.txt` to be built every time `file2.txt` changes, then you need `file1.txt` to be built every time `file2.txt` changes. This can be achieved by means of a rule whose *recipe* actually updates `file1.txt` and has `file2.txt` as a prerequisite, e.g.: ``` file1.txt: file2.txt touch $@ ``` Upvotes: 2 <issue_comment>username_2: I think the problem here is that your makefile is slightly *too* simple. Let `a -> b` denote `a depends on b`. From your makefile you have... ``` output.txt -> file1.txt -> file2.txt ``` When `make` tries to update `output.txt` it sees that `output.txt` depends on `file1.txt`. It then notices that `file1.txt` depends on `file2.txt`. At that point the dependency chain stops. If make sees that `file2.txt` is newer than `file1.txt` it will run the command(s) associated with the `file1.txt: file2.txt` dependency. In this case, however, there aren't any commands -- just the dependency itself. That's fine as things go, but it does mean that even if `file2.txt` is updated `file1.txt` won't be. Hence, when make moves up the dependency chain to... ``` output.txt: file1.txt ``` it sees that `output.txt` is *still* newer than `file1.txt` so there is no need to run any command associated with that dependency. If you add the `touch` command... ``` file1.txt: file2.txt touch $@ ``` then `file1.txt` *will* be updated and so the dependency chain works as you expect. Upvotes: 2 [selected_answer]
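Putting both answers together, the complete fixed makefile for the original example could look like this sketch (recipe lines must be indented with a literal tab):

```make
all: output.txt

output.txt: file1.txt
	cat file1.txt file2.txt > output.txt

# Give the file1.txt rule a recipe so a newer file2.txt propagates up the chain.
file1.txt: file2.txt
	touch $@

file2.txt:
	echo "heyo" > file2.txt
```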
2018/03/14
285
1,096
<issue_start>username_0: [enter image description here](https://i.stack.imgur.com/OwF3H.png) After downloading webpack, the prompt says it has been installed, but the `webpack` command can't be used. Where is the problem?<issue_comment>username_1: Try to install webpack-cli. For that run: `npm install webpack-cli -g` Then you can run `webpack -v` in the command line. Upvotes: 1 <issue_comment>username_2: I see it has been a while since this question was asked; however, I will share my experience. Installing webpack globally makes it available from the terminal. By contrast, if you have installed it locally, the webpack command is not available, but you can use an npm script. You can find a more detailed explanation [here](https://medium.com/a-beginners-guide-for-webpack-2/installing-webpack-b2edf9943509) Hope this is helpful for everyone :) Upvotes: -1 <issue_comment>username_3: Use this command: `npx webpack`. The `npx` command runs binaries that are in your project's `node_modules` folder. Assuming you've already done `npm install webpack` or equivalent, this will work. Upvotes: 2
2018/03/14
1,389
4,135
<issue_start>username_0: I have recently updated ChromeDriver to 2.36 after chrome was auto updated to v65. But on running test now i am getting below exceptions ``` Starting ChromeDriver 2.36.540470 (e522d04694c7ebea4ba8821272dbef4f9b818c91) on port 10482 Only local connections are allowed. Exception in thread "main" org.openqa.selenium.WebDriverException: unknown error: failed to write automation extension zip (Driver info: chromedriver=2.36.540470 (e522d04694c7ebea4ba8821272dbef4f9b818c91),platform=Windows NT 6.1.7601 SP1 x86_64) (WARNING: The server did not provide any stacktrace information) Command duration or timeout: 218 milliseconds Build info: version: '3.11.0', revision: 'e59cfb3', time: '2018-03-11T20:26:55.152Z' System info: host: '172.16.27.222', ip: '172.16.123.114', os.name: 'Windows 7', os.arch: 'amd64', os.version: '6.1', java.version: '1.8.0_77' Driver info: driver.version: ChromeDriver ``` Have tried both chromeOptions & DesiredCapabilties ``` ChromeOptions o = new ChromeOptions(); o.addArguments("disable-extensions"); o.addArguments("--start-maximized"); ChromeDriver chromeDriver = new ChromeDriver(o); return chromeDriver; ``` Also tried below method : ``` ChromeDriver chromeDriver = new ChromeDriver(capabilities); chromeDriver.manage().window().maximize(); ```<issue_comment>username_1: Here are the arguments I use: ``` ArrayList args = new ArrayList(); args.add("enable-automation"); args.add("test-type=browser"); args.add("disable-plugins"); args.add("disable-infobars"); args.add("disable-extensions"); options.put("args", args); ``` I think you need to at least add the "enable-automation" argument. 
Upvotes: 0 <issue_comment>username_2: The error says it all : ``` Exception in thread "main" org.openqa.selenium.WebDriverException: unknown error: failed to write automation extension zip (Driver info: chromedriver=2.36.540470 (e522d04694c7ebea4ba8821272dbef4f9b818c91),platform=Windows NT 6.1.7601 SP1 x86_64) (WARNING: The server did not provide any stacktrace information) Command duration or timeout: 218 milliseconds Build info: version: '3.11.0', revision: 'e59cfb3', time: '2018-03-11T20:26:55.152Z' System info: host: '172.16.27.222', ip: '172.16.123.114', os.name: 'Windows 7', os.arch: 'amd64', os.version: '6.1', java.version: '1.8.0_77' Driver info: driver.version: ChromeDriver ``` It is clear from your error stack trace that the **driver.version** is not getting recognized. Your main issue is the **version compatibility** between the binaries you are using as follows : * You are using *chromedriver=2.36* * You are using *chrome=65.0* (as per your question) * You are using *Selenium Client v3.11.0* * Your *JDK version* is **1.8.0\_77** which is pretty ancient. So there is a clear mismatch between the *JDK v8u77* , *Selenium Client v3.11.0* , *ChromeDriver* version (*v2.36*) and the *Chrome Browser* version (*v65.0*) Solution -------- * Upgrade *JDK* to recent levels [**JDK 8u162**](http://www.oracle.com/technetwork/java/javase/8u162-relnotes-4021436.html). * *Clean* your *Project Workspace* through your *IDE* and *Rebuild* your project with required dependencies only. * Use [*CCleaner*](https://www.ccleaner.com/ccleaner) tool to wipe off all the OS chores before and after the execution of your *test Suite*. * If your base *Chrome* version is too old, then uninstall it through [*Revo Uninstaller*](https://www.revouninstaller.com/revo_uninstaller_free_download.html) and install a recent GA and released version of Chrome. * Execute your `@Test`. 
Additional Considerations ------------------------- * If you intend to use *ChromeDriver* and *Chrome Browser*, while initializing the *WebDriver* instance instead of using the *ChromeDriver* implementation use the *WebDriver* interface as follows : ``` WebDriver chromeDriver = new ChromeDriver(o); ``` * If you intend to maximize the browser window, instead of using `chromeDriver.manage().window().maximize();` use the `ChromeOptions` instance as follows : ``` ChromeOptions options = new ChromeOptions(); options.addArguments("start-maximized"); ``` Upvotes: 1
2018/03/14
954
3,054
<issue_start>username_0: In jxbrowser application, I am trying to zoom in or out to change the application scale with CTRL + wheel up/ down. Is there any option to allow CTRL + wheel up or down in order to change the application scale?<issue_comment>username_1: Here are the arguments I use: ``` ArrayList args = new ArrayList(); args.add("enable-automation"); args.add("test-type=browser"); args.add("disable-plugins"); args.add("disable-infobars"); args.add("disable-extensions"); options.put("args", args); ``` I think you need to at least add the "enable-automation" argument. Upvotes: 0 <issue_comment>username_2: The error says it all : ``` Exception in thread "main" org.openqa.selenium.WebDriverException: unknown error: failed to write automation extension zip (Driver info: chromedriver=2.36.540470 (e522d04694c7ebea4ba8821272dbef4f9b818c91),platform=Windows NT 6.1.7601 SP1 x86_64) (WARNING: The server did not provide any stacktrace information) Command duration or timeout: 218 milliseconds Build info: version: '3.11.0', revision: 'e59cfb3', time: '2018-03-11T20:26:55.152Z' System info: host: '172.16.27.222', ip: '172.16.123.114', os.name: 'Windows 7', os.arch: 'amd64', os.version: '6.1', java.version: '1.8.0_77' Driver info: driver.version: ChromeDriver ``` It is clear from your error stack trace that the **driver.version** is not getting recognized. Your main issue is the **version compatibility** between the binaries you are using as follows : * You are using *chromedriver=2.36* * You are using *chrome=65.0* (as per your question) * You are using *Selenium Client v3.11.0* * Your *JDK version* is **1.8.0\_77** which is pretty ancient. So there is a clear mismatch between the *JDK v8u77* , *Selenium Client v3.11.0* , *ChromeDriver* version (*v2.36*) and the *Chrome Browser* version (*v65.0*) Solution -------- * Upgrade *JDK* to recent levels [**JDK 8u162**](http://www.oracle.com/technetwork/java/javase/8u162-relnotes-4021436.html). 
* *Clean* your *Project Workspace* through your *IDE* and *Rebuild* your project with required dependencies only. * Use [*CCleaner*](https://www.ccleaner.com/ccleaner) tool to wipe off all the OS chores before and after the execution of your *test Suite*. * If your base *Chrome* version is too old, then uninstall it through [*Revo Uninstaller*](https://www.revouninstaller.com/revo_uninstaller_free_download.html) and install a recent GA and released version of Chrome. * Execute your `@Test`. Additional Considerations ------------------------- * If you intend to use *ChromeDriver* and *Chrome Browser*, while initializing the *WebDriver* instance instead of using the *ChromeDriver* implementation use the *WebDriver* interface as follows : ``` WebDriver chromeDriver = new ChromeDriver(o); ``` * If you intend to maximize the browser window, instead of using `chromeDriver.manage().window().maximize();` use the `ChromeOptions` instance as follows : ``` ChromeOptions options = new ChromeOptions(); options.addArguments("start-maximized"); ``` Upvotes: 1
2018/03/14
762
2,884
<issue_start>username_0: So this is what i had first: ``` $app->get('/object/{id:[0-9]+}', function ($request, $response, $args) { $id = (int)$args['id']; $this->logger->addInfo('Get Object', array('id' => $id)); $mapper = new ObjectMapper($this->db); $object = $mapper->getObjectById($id); return $response->withJson((array)$object); }); ``` It worked well and outputted the whole DB Object as a nice JSON String. Now i reorganized everything a little on MVC basis and this is whats left: ``` $app->get('/object/{id:[0-9]+}', ObjectController::class . ':show')->setName('object.show'); ``` It also works, but i don't get any Output. If i put a var\_dump before the DB Object is there, but how do i get a JSON String from that again? Here comes the Controller ``` php namespace Mycomp\Controllers\Object; use \Psr\Http\Message\ServerRequestInterface as Request; use \Psr\Http\Message\ResponseInterface as Response; use Interop\Container\ContainerInterface; use Mycomp\Models\Object; class ObjectController { protected $validator; protected $db; protected $auth; protected $fractal; public function __construct(ContainerInterface $container) { $this-db = $container->get('db'); $this->logger = $container->get('logger'); } public function show(Request $request, Response $response, array $args) { $id = (int)$args['id']; $this->logger->addInfo('Get Object', array('id' => $id)); $object = new Object($this->db); return $object->getObjectById($id); } } ```<issue_comment>username_1: As username_2 said in comment, you need to return `Response` object ``` public function show(Request $request, Response $response, array $args) ... return $response->withJson($object->getObjectById($id)); } ``` Upvotes: 3 [selected_answer]<issue_comment>username_2: In order for Slim to send HTTP response to client, route callback must return some data that Slim understands. That type of data, [according to Slim documentation](https://www.slimframework.com/docs/) is a `PSR 7 Response` object. 
This is important because what the route callback returns will not necessarily be sent to the client exactly as-is. It might be used by middleware to tweak the response before sending it to the client. The `$response` object, injected by Slim into your route callbacks, is used for that purpose. Slim also provides some helper methods like `withJson` to generate a proper (PSR-7) JSON response with proper HTTP headers. So as I said in the comment, you need to return the response object: ``` public function show(Request $request, Response $response, array $args) { // Prepare what you want to return, then // encode the output data as JSON and return a proper response using the withJson method return $response->withJson($object->getObjectById($id)); } ``` Upvotes: 0
2018/03/14
871
3,165
<issue_start>username_0: I have set status bar style to Light. When my MainPage opens, the status bar text becomes black. My MainPage is a TabbedPage. When I set MainPage to a page that has ContentPage, the status bar text becomes white as expected. Info.plist ``` UIStatusBarStyle UIStatusBarStyleLightContent UIViewControllerBasedStatusBarAppearance ``` AppDelegate.cs ``` app.SetStatusBarStyle(UIStatusBarStyle.LightContent, true); UIApplication.SharedApplication.StatusBarStyle = UIStatusBarStyle.LightContent; ``` App.xaml.cs ``` MainPage = new Views.MainPage(); ```<issue_comment>username_1: If you want to change the status bar color within the entire app to white you need to change the `View controller-based status bar appearance` property in your info.plist, which by default sets the status bar style at the view controller level and not the application level. [![info.plist](https://i.stack.imgur.com/2KH6f.png)](https://i.stack.imgur.com/2KH6f.png) so if this is what you had in your info.plist it should be correct, which you may need to delete the app and clean/rebuild before next deploy ``` UIStatusBarStyle UIStatusBarStyleLightContent UIViewControllerBasedStatusBarAppearance ``` However, if you need to change the status bar color per `Page` you're probably going to have to write a custom page renderer. Upvotes: 0 <issue_comment>username_2: First of all, the status bar is not the same as the navigation bar. [![definitions](https://i.stack.imgur.com/WmH8T.png)](https://i.stack.imgur.com/WmH8T.png) The `theme` (i.e. colours) of the application (for Android) can be set in `App1.Droid.Resources.values.style.xml`. Here you need to understand which property targets which value in the `styles.xml` file. [![file](https://i.stack.imgur.com/CiphQ.png)](https://i.stack.imgur.com/CiphQ.png) [This](https://material.io/guidelines/style/color.html#color-color-system) link in styles.xml contains a lot of information about best practices when styling application colour. 
This diagram shows which property will change which value: [![tags](https://i.stack.imgur.com/ZDBDY.png)](https://i.stack.imgur.com/ZDBDY.png) Upvotes: 1 <issue_comment>username_3: **The simplest solution**: get the `statusBar` via `keyValue`. 1. Delete the key values you added in `info.plist`. 2. Add the following code in `FinishedLaunching` ``` public override bool FinishedLaunching(UIApplication app, NSDictionary options) { global::Xamarin.Forms.Forms.Init(); LoadApplication(new App()); UIView statusBar = UIApplication.SharedApplication.ValueForKey(new NSString("statusBarWindow")).ValueForKey(new NSString("statusBar")) as UIView; statusBar.BackgroundColor = UIColor.White; return base.FinishedLaunching(app, options); } ``` [![enter image description here](https://i.stack.imgur.com/WIEX8.png)](https://i.stack.imgur.com/WIEX8.png) Upvotes: 0 <issue_comment>username_4: The problem was caused because I had enclosed child pages in NavigationPages. MainPage.xaml: ``` ``` When I removed the NavigationPages which in my case were unnecessary, the status bar text became white: ``` ``` Upvotes: 1 [selected_answer]
2018/03/14
391
1,355
<issue_start>username_0: Is there any way to run jobs from the same stage in sequential order? I've tried to do it with dependencies ``` job1: stage:deploy ... job2: stage:deploy dependencies: - job1 ``` but it gives me an error "dependency job1 is not defined in prior stages". Is there any workaround?<issue_comment>username_1: No. This is not possible by design. You will have to define more [stages](https://docs.gitlab.com/ce/ci/yaml/#stages). As the [stages](https://docs.gitlab.com/ce/ci/yaml/#stages) docs describe: > > 1. Jobs of the same stage are run in parallel. > > > Upvotes: 6 [selected_answer]<issue_comment>username_2: It might become possible at some point in the future (as of January 2021). [Progress is being tracked here](https://gitlab.com/gitlab-org/gitlab/-/issues/30632) Upvotes: 2 <issue_comment>username_3: You might have found the answer by now, but I'm still answering for future readers facing a similar issue. The error itself says "dependency job1 is not defined in prior stages" — in your example both jobs are in the same stage, i.e. "stage: deploy", so the `dependencies` rule can't apply. With newer GitLab versions, the `needs` keyword can be used instead. ``` job1: stage: A job2: stage: B needs: ["job1"] ``` This way, job2 depends on job1. Upvotes: 2
2018/03/14
1,036
3,518
<issue_start>username_0: I am currently trying to make a game app in Windows with XAudio2 and I cannot figure out how to make the application not block when playing a sound. I tried calling a new thread in the samples in [this repository.](https://github.com/walbourn/directx-sdk-samples/tree/master/XAudio2) But it will just cause an error. I tried passing a reference to the mastering voice in the function but then it just raises a "XAudio2: Must create a mastering voice first" error. Am I missing something? I am just trying to make it play two sounds at once and build from there. I went over the documentation but it's very vague.<issue_comment>username_1: XAudio2 is a non-blocking API. To play two sounds simultaneously, you need two 'source voices' and one 'mastering voice' at a minimum.

```
DX::ThrowIfFailed(
    CoInitializeEx( nullptr, COINIT_MULTITHREADED )
);

Microsoft::WRL::ComPtr<IXAudio2> pXAudio2;
// Note that only IXAudio2 (and APOs) are COM reference counted
DX::ThrowIfFailed(
    XAudio2Create( pXAudio2.GetAddressOf(), 0 )
);

IXAudio2MasteringVoice* pMasteringVoice = nullptr;
DX::ThrowIfFailed(
    pXAudio2->CreateMasteringVoice( &pMasteringVoice )
);

IXAudio2SourceVoice* pSourceVoice1 = nullptr;
DX::ThrowIfFailed(
    pXAudio2->CreateSourceVoice( &pSourceVoice1, &wfx )
    // The default 'pSendList' will be just to the pMasteringVoice
);

IXAudio2SourceVoice* pSourceVoice2 = nullptr;
DX::ThrowIfFailed(
    pXAudio2->CreateSourceVoice( &pSourceVoice2, &wfx )
    // Doesn't have to be same format as other source voice
    // And doesn't have to match the mastering voice either
);

DX::ThrowIfFailed(
    pSourceVoice1->SubmitSourceBuffer( &buffer )
);
DX::ThrowIfFailed(
    pSourceVoice2->SubmitSourceBuffer( &buffer /* could be different WAV data or not */ )
);

DX::ThrowIfFailed(
    pSourceVoice1->Start( 0 )
);
DX::ThrowIfFailed(
    pSourceVoice2->Start( 0 )
);
```

> You should take a look at the samples on [GitHub](https://github.com/walbourn/directx-sdk-samples/tree/master/XAudio2) as well as [DirectX Tool Kit for Audio](https://github.com/Microsoft/DirectXTK/wiki/Audio)

If you wanted to ensure both source voices started at precisely the same time, you'd use:

```
DX::ThrowIfFailed(
    pSourceVoice1->Start( 0, 1 )
);
DX::ThrowIfFailed(
    pSourceVoice2->Start( 0, 1 )
);
DX::ThrowIfFailed(
    pSourceVoice2->CommitChanges( 1 )
);
```

Upvotes: 2 <issue_comment>username_2: If you want to play multiple sounds simultaneously, have a `playSound` function and launch one thread per sound, each with its own source voice. XAudio2 will take care of mapping each sound to the available channels (or, if you have a more advanced system, you can specify the mapping yourself using `IXAudio2Voice::SetOutputMatrix`).

```
void playSound( IXAudio2SourceVoice* sourceVoice )
{
    BOOL isPlayingSound = TRUE;
    XAUDIO2_VOICE_STATE soundState = {0};
    HRESULT hres = sourceVoice->Start( 0u );
    while ( SUCCEEDED( hres ) && isPlayingSound )
    {   // loop till sound completion
        sourceVoice->GetState( &soundState );
        isPlayingSound = ( soundState.BuffersQueued > 0 ) != 0;
        Sleep( 100 );
    }
}
```

For example, to play two sounds simultaneously:

```
IXAudio2SourceVoice* pSourceVoice1 = nullptr;
IXAudio2SourceVoice* pSourceVoice2 = nullptr;
// setup the source voices, load the sounds etc..

std::thread thr1{ playSound, pSourceVoice1 };
std::thread thr2{ playSound, pSourceVoice2 };
thr1.join();
thr2.join();
```

Upvotes: 0
2018/03/14
359
1,130
<issue_start>username_0: My PhpStorm IDE does not recognize the Fluid view helpers. I am currently working with PhpStorm 2017.3.3. I installed the Fluid plugin from sgalinski and, under Schemas and DTDs, added the xsd file <https://fluidtypo3.org/schemas/fluid-master.xsd> and linked it to the URI <http://typo3.org/ns/TYPO3/CMS/Fluid/ViewHelpers>. Does anyone know how I can fix this? It would be great if autocompletion worked for Fluid HTML files. [![View helper can't be found](https://i.stack.imgur.com/BHcAJ.png)](https://i.stack.imgur.com/BHcAJ.png)<issue_comment>username_1: Have a look at these blog posts: * <http://insight.helhum.io/post/85031122475/xml-schema-auto-completion-in-phpstorm> * <http://insight.helhum.io/post/130270697975/updated-fluid-schema-urls> Upvotes: 1 <issue_comment>username_2: You may also check out the PhpStorm plugins — there is one available for Fluid. <https://www.sgalinski.de/typo3-produkte-webentwicklung/typo3-fluid-phpstorm-intellij/> Upvotes: 1 <issue_comment>username_3: Try adding the TYPO3 core as an external library in your PhpStorm project. Upvotes: 3 [selected_answer]
2018/03/14
1,035
3,831
<issue_start>username_0: I'm working on a new application based on templates, STL, namespaces, ... (my colleagues have taken all necessary steps to make a mess). Now I just want to add a property to a class, and this does not work. Let me show you. Within a header file:

```
namespace utils{
class Logging_Manager : public Singleton
{
    friend Singleton;
    Logging_Manager();
private:
    std::vector<...> logging_modules;
    LogLevelType loglevel; // 0 : no logging, 1 : info logging, (2-4 are not used), 5 : debug, 6 : everything
public:
    Timer getDebuggerTimer;
    Timer getFileTimer;
```

I've just added the last two entries, `getDebuggerTimer` and `getFileTimer`, myself, and this does not compile, due to the compiler errors C3646 and C4430. All I mean is that both are properties of the type `Timer`; I don't mean those things to be templates or methods which might be abstract, virtual, or whatsoever — no, they are just meant as properties, nothing more, nothing less. As I didn't find a way to add both timers to my header file, my boss solved the issue as follows:

```
class Timer; // this class is defined elsewhere, how can it be put here and make things work?

namespace utils{
class Logging_Manager : public Singleton
{
    friend Singleton;
    Logging_Manager();
private:
    std::shared_ptr<Timer> getDebuggerTimer;
    std::shared_ptr<Timer> getFileTimer;
```

In other words: he did not add an inclusion, but he added a declaration of something the header does not know: `class Timer`. On top of this, he used a shared pointer, which magically made things work. I'm completely lost here; it looks like STL has added programming rules like: in case you want something to work, don't add an inclusion, but add a declaration (but how does your code know what the declaration means?); and you can add shared pointers (or other STL inventions), which will make your code work. ????? Can anybody shed some light on this?<issue_comment>username_1: If you want to aggregate a `Timer` into your class definition, the compiler needs to know what a `Timer` is. So, you could use `#include "Timer.h"` (use the right header). What your boss did is two-fold:

1. use a pointer instead of a member object, thus allowing it to work with a forward declaration instead of an include;
2. use a smart pointer instead of a raw pointer, which is good practice.

Upvotes: 3 [selected_answer]<issue_comment>username_2: This has nothing to do with templates and the STL. You should look at what **forward declaration** is. What your boss did with that

```
class Timer;
```

is to say to the compiler "hey look, there's something called Timer which you don't know yet; trust me". When you declare a member of class type, though, the compiler has to know how much space to allocate for constructing that object. Since you use an object and do not include the header file containing its definition, the compiler doesn't know how much space it takes and gives you an error. If you use a pointer instead (or a smart pointer), the compiler only needs to reserve the space for a pointer, which it can do. Of course, you then have to include the header of `Timer` if you want to use it in the implementation. For the same reason, you cannot forward declare a class you're going to inherit from.

timer.h

```
class Timer
{
public:
    void time() { std::cout << "time!"; }
};
```

yourclass.h

```
class Timer; // forward declaration

class Manager
{
public:
    void doStuff() {}
private:
    Timer* t1;                  // this will work
    std::shared_ptr<Timer> t2;  // this will work
    //Timer t3;                 // this one doesn't
};
```

This case **does not work**:

```
class Timer;

class SpecialTimer : public Timer
{
    // ... do stuff
};
```

Some links about forward declaration: <https://en.wikipedia.org/wiki/Forward_declaration> [This post](https://stackoverflow.com/a/4757718/8805095) has a good explanation. Upvotes: 0
2018/03/14
2,275
8,223
<issue_start>username_0: I'm trying to create a dataset from a CSV file with rows of 784 values each. Here's my code:

```
import csv
import tensorflow as tf

f = open("test.csv", "r")
csvreader = csv.reader(f)
gen = (row for row in csvreader)
ds = tf.data.Dataset()
ds.from_generator(gen, [tf.uint8]*28**2)
```

I get the following error:

```
---------------------------------------------------------------------------
TypeError                                 Traceback (most recent call last)
 in ()
     12 gen = (row for row in csvreader_pat_trn)
     13 ds = tf.data.Dataset()
---> 14 ds.from_generator(gen, [tf.uint8]*28**2)

~/Documents/Programming/ANN/labs/lib/python3.6/site-packages/tensorflow/python/data/ops/dataset_ops.py in from_generator(generator, output_types, output_shapes)
    317     """
    318     if not callable(generator):
--> 319       raise TypeError("`generator` must be callable.")
    320     if output_shapes is None:
    321       output_shapes = nest.map_structure(

TypeError: `generator` must be callable.
```

The [docs](https://www.tensorflow.org/api_docs/python/tf/data/Dataset#from_generator) said that I should pass a generator to `from_generator()`, so that's what I did — `gen` is a generator. But now it's complaining that my generator isn't **callable**. How can I make the generator callable so I can get this to work?

**EDIT:** I'd like to add that I'm using Python 3.6.4. Is this the reason for the error?<issue_comment>username_1: [From the docs](https://www.tensorflow.org/api_docs/python/tf/data/Dataset#from_generator), which you linked:

> The `generator` argument must be a callable object that returns an object that supports the `iter()` protocol (e.g. a generator function)

This means you should be able to do something like this:

```
import tensorflow as tf
import csv

with open("test.csv", "r") as f:
    csvreader = csv.reader(f)
    gen = lambda: (row for row in csvreader)
    ds = tf.data.Dataset()
    ds.from_generator(gen, [tf.uint8]*28**2)
```

In other words, the function you pass must produce a generator when called. This is easy to achieve by making it an anonymous function (a `lambda`). Alternatively try this, which is closer to how it is done in the docs:

```
import tensorflow as tf
import csv

def read_csv(file_name="test.csv"):
    with open(file_name) as f:
        reader = csv.reader(f)
        for row in reader:
            yield row

ds = tf.data.Dataset.from_generator(read_csv, [tf.uint8]*28**2)
```

(If you need a different file name than whatever default you set, you can use `functools.partial(read_csv, file_name="whatever.csv")`.)

The difference is that the `read_csv` function returns the generator object when called, whereas what you constructed is already the generator object, equivalent to doing:

```
gen = read_csv()
ds = tf.data.Dataset.from_generator(gen, [tf.uint8]*28**2)  # does not work
```

Upvotes: 2 <issue_comment>username_2: The `generator` argument (perhaps confusingly) should not actually be a generator, but a callable returning an iterable (for example, a generator function). Probably the easiest option here is to use a `lambda`. Also, a couple of errors: 1) [`tf.data.Dataset.from_generator`](https://www.tensorflow.org/api_docs/python/tf/data/Dataset#from_generator) is meant to be called as a class factory method, not from an instance; 2) the function (like a few others in TensorFlow) is weirdly picky about parameters, and it wants you to give the sequence of dtypes and each data row as `tuple`s (instead of the `list`s returned by the CSV reader) — you can use, for example, `map` for that:

```
import csv
import tensorflow as tf

with open("test.csv", "r") as f:
    csvreader = csv.reader(f)
    ds = tf.data.Dataset.from_generator(lambda: map(tuple, csvreader),
                                        (tf.uint8,) * (28 ** 2))
```

Upvotes: 5 [selected_answer]<issue_comment>username_3: Yuck, two years later... But hey! Another solution! :D This might not be the cleanest answer, but for generators that are more complicated, you can use a decorator.
I made a generator that yields two dictionaries, for example: ```py >>> train,val = dataloader("path/to/dataset") >>> x,y = next(train) >>> print(x) {"data": [...], "filename": "image.png"} >>> print(y) {"category": "Dog", "category_id": 1, "background": "park"} ``` When I tried using the `from_generator`, it gave me the error: ``` >>> ds_tf = tf.data.Dataset.from_generator( iter(mm), ({"data":tf.float32, "filename":tf.string}, {"category":tf.string, "category_id":tf.int32, "background":tf.string}) ) TypeError: `generator` must be callable. ``` But then I wrote a decorating function ``` >>> def make_gen_callable(_gen): def gen(): for x,y in _gen: yield x,y return gen >>> train_ = make_gen_callable(train) ``` ``` >>> train_ds = tf.data.Dataset.from_generator( train_, ({"data":tf.float32, "filename":tf.string}, {"category":tf.string, "category_id":tf.int32, "background":tf.string}) ) >>> for x,y in train_ds: break >>> print(x) {'data': , 'filename': } >>> print(y) {'category': , 'category\_id': , 'background': } ``` But now, note that in order to iterate `train_`, one has to call it ``` >>> for x,y in train_(): do_stuff(x,y) ... 
``` Upvotes: 3 <issue_comment>username_4: Pushing 5 years late, but given the two-hour-long headache this has caused me -- for those who want a more generalisable solution: ```py class TFDatasetFromGenerator(): def __init__(self, somearg=""): self.someparam = somearg '''etc; eg load S3 keys into a dataframe, s3keyDF''' self.DataFrame= s3keyDF def some_modifying_function(self, someotherarg=""): ''' do some stuff, eg on self.someparam etc or split self.DataFrame into train/test data refs etc ''' def generate(self, arg1="", arg2=True, batch_size=128): return tf.data.Dataset.from_generator( self._generate( arg1=arg1, arg2=arg2, batch_size = batch_size ), (tf.float32, tf.float32), ((None,128,400,4), (None,1)) ) ''' modify the data output shape as required; we present x and y (input data and labels) as a tuple with both x and y being tf.float32 x having the shape (batch_size,128,400,4) y having the shape (batch_size,1). ''' def _generate(self, arg1="", arg2=True, batch_size=128): ''' _generate defines and returns the function `gen`. returning the function makes _generate a callable _generate can therefore be used as the first argument to tf.data.Dataset.from_generator() ''' def gen(): '''do preprocessing, eg filter self.dataset to return just training or test portions of the dataset eg: ''' if(arg1="train"): data_refs = list( self.TrainDataFrame.data_referance.unique() ) label_refs = list( self.TrainDataFrame.label_refernce.unique() ) ''' etc ''' random.shuffle(data) ## get the sample count sample_count = len(data_refs) while True: sample_index = 0 while sample_index < sample_count: start = sample_index end = min(start + batch_size, sample_count) # Load data from DataFames x = data_refs[start: end] y = label_refs[start: end] data = [] for key in x: ''' d = load your data ''' data.append(d) labels = [] for key in y: ''' l = load your labels ''' labels.append(l) sample_index += batch_size yield (np.array(data),np.array(labels)) return gen ``` Hope this helps someone! 
Upvotes: 0
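The callable-vs-generator-object distinction running through these answers can be shown without TensorFlow at all — a small sketch (TensorFlow omitted, the file contents inlined as a string):

```python
import csv
import io

def make_reader(text):
    # Return a *callable* that produces a fresh generator on every call --
    # this is the shape tf.data.Dataset.from_generator expects.
    def gen():
        for row in csv.reader(io.StringIO(text)):
            yield tuple(int(v) for v in row)
    return gen

reader = make_reader("1,2\n3,4\n")
assert callable(reader)                    # acceptable as the `generator` argument
assert list(reader()) == [(1, 2), (3, 4)]
assert list(reader()) == [(1, 2), (3, 4)]  # re-iterable: each call starts over

g = reader()                               # a generator *object*, not a callable
assert not callable(g)
```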
2018/03/14
2,877
10,294
<issue_start>username_0: We are performing performance testing and tuning activities in one of our projects. I have used the JVM configs mentioned in this [article](http://blog.sokolenko.me/2014/11/javavm-options-production.html). The exact JVM options are:

```
set "JAVA_OPTS=-Xms1024m -Xmx1024m -XX:MetaspaceSize=512m -XX:MaxMetaspaceSize=1024m -XX:+UseConcMarkSweepGC -XX:+CMSParallelRemarkEnabled -XX:+UseCMSInitiatingOccupancyOnly -XX:CMSInitiatingOccupancyFraction=50 -XX:+PrintGCDetails -verbose:gc -XX:+PrintGCDateStamps -XX:+PrintGCApplicationStoppedTime -XX:+PrintGCApplicationConcurrentTime -XX:+PrintHeapAtGC -Xloggc:C:\logs\garbage_collection.logs -XX:+UseGCLogFileRotation -XX:NumberOfGCLogFiles=10 -XX:GCLogFileSize=100m -XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=C:\logs\heap_dumps\'date'.hprof -XX:+UnlockDiagnosticVMOptions"
```

Still we see that the issue is not resolved. I am sure that there are some issues within our code (thread implementation etc.) and the external libraries we use (like Log4j), but I was at least hoping for some performance improvement from employing these JVM tuning options. The report from [Gceasy.io](http://gceasy.io) suggests:

> It looks like your application is waiting due to lack of compute resources (either CPU or I/O cycles). Serious production applications shouldn't be stranded because of compute resources. In 1 GC event(s), 'real' time took more than 'usr' + 'sys' time.

Some known code issues:

1. There is a lot of network traffic to an external webapp which accepts only one connection at a time. **But this delay is acceptable for our application.**
2. Some threads block on Log4j. We are using Log4j for console, DB and file appending.
3. There can be an issue with MySQL tuning as well. But for now, we want to rule out these possibilities and just understand any other factors that might be affecting our execution.

What I was hoping for with this tuning was less GC activity and a properly managed metaspace. But this is not what we observe — why? Here are some of the snapshots:

1. Here we can see that metaspace is stuck at 40 MB and does not exceed that. A lot of GC activity is also visible. [![Metaspace usage, CPU and GC activity](https://i.stack.imgur.com/fUral.png)](https://i.stack.imgur.com/fUral.png)
2. Another image depicting the overall system state: [![System State](https://i.stack.imgur.com/aKuaw.png)](https://i.stack.imgur.com/aKuaw.png)

**What could be our issue? Need some definitive pointers on these!**

**UPDATE-1:** Disk usage monitoring [![enter image description here](https://i.stack.imgur.com/diHIP.png)](https://i.stack.imgur.com/diHIP.png)

**UPDATE-2:** Added the screenshot with heap. [![enter image description here](https://i.stack.imgur.com/xKWp0.png)](https://i.stack.imgur.com/xKWp0.png)

**SOME MORE UPDATES:** Well, I did not mention earlier that our processing involves Selenium (test automation) execution, which spawns more than a couple of web browsers using the Chrome/Firefox **webdrivers**. While monitoring I saw that, among the background processes, Chrome is using a lot of memory. Can this be a possible reason for the slowdown? Here are the screenshots for the same: [![enter image description here](https://i.stack.imgur.com/guQSu.png)](https://i.stack.imgur.com/guQSu.png) Another picture that shows the background processes: [![enter image description here](https://i.stack.imgur.com/b6MLk.png)](https://i.stack.imgur.com/b6MLk.png)

**EDIT No-5:** Adding the GC logs **[GC\_LOGS\_1](https://www.dropbox.com/s/97tvbn13y0nubsq/14_march_garbage_collection.logs?dl=0)** [GC\_LOGS\_2](https://www.dropbox.com/s/v0i7mj4xso0hksb/garbage_collection.current?dl=0)

Thanks in advance!<issue_comment>username_1: The first thing I would check is disk I/O... If your processor is not loaded at 100% during performance testing, most likely disk I/O is the problem (e.g.
you are using a hard drive)... Just switch to an SSD (or an in-memory disk) to resolve this. GC is simply doing its work... You have selected the `concurrent collector` to perform GC. From the [documentation](https://docs.oracle.com/javase/8/docs/technotes/guides/vm/gctuning/collectors.html):

> The mostly concurrent collector performs most of its work concurrently (for example, while the application is still running) to keep garbage collection pauses short. It is designed for applications with medium-sized to large-sized data sets in which response time is more important than overall throughput because the techniques used to minimize pauses can reduce application performance.

What you see matches this description: GC takes time, but "mainly" does not pause the application for long.

---

As an option you may try enabling the `Garbage-First Garbage Collector` (use `-XX:+UseG1GC`) and compare results. From the docs:

> G1 is planned as the long-term replacement for the Concurrent Mark-Sweep Collector (CMS). Comparing G1 with CMS reveals differences that make G1 a better solution. One difference is that G1 is a compacting collector. Also, G1 offers more predictable garbage collection pauses than the CMS collector, and allows users to specify desired pause targets.

This collector allows you to set a maximum GC pause target, e.g. add the `-XX:MaxGCPauseMillis=200` option, which says that you're OK as long as a GC pause takes less than 200 ms. Upvotes: 1 <issue_comment>username_2: You don't seem to have a GC problem. Here's a plot of your GC pause times over the course of more than 40 hours of your app running: [![scatter plot of GC pause times - log1](https://i.stack.imgur.com/Qz7jB.png)](https://i.stack.imgur.com/Qz7jB.png) From this graph we can see that most of the GC pause times are below 0.1 seconds and some are in the 0.2–0.4 second range, but since the graph itself contains 228000 data points, it's hard to figure out how the data is distributed.
We need a histogram plot containing the distribution of the GC pause times. Since the vast majority of these GC pause times are very low, with a very few outliers, plotting the distribution in a histogram linearly is not informative. So I created a plot containing the distribution of the logarithm of those GC pause times: [![logarithmic histogram of gc log 1](https://i.stack.imgur.com/ffYCp.png)](https://i.stack.imgur.com/ffYCp.png) *In the above image, the X axis is the base-10 logarithm of the GC pause time, the Y axis is the number of occurrences. The histogram has 500 bins.* As you can see from these two graphs, the GC pause times are clustered into two groups, and most of the GC pause times are very low, on the order of magnitude of milliseconds or less. If we plot the same histogram on a log scale on the Y axis too, we get this graph: [![log-log histogram of gc log 1](https://i.stack.imgur.com/YpwFZ.png)](https://i.stack.imgur.com/YpwFZ.png) *In the above image, the X axis is the base-10 logarithm of the GC pause time, the Y axis is the base-10 logarithm of the number of occurrences. The histogram has 50 bins.* On this graph it becomes visible that you have a few tens of GC pause times that might be noticeable to a human, on the order of magnitude of tenths of seconds. These are probably those 120 full GCs that you have in your first log file. You might be able to reduce those times further if you were using a computer with more memory and a disabled swap file, so that all of the JVM heap stays in RAM. Swapping, especially on a non-SSD drive, can be a real killer for the garbage collector.

---

I created the same graphs for the second log file you posted, which is a much smaller file spanning around 8 minutes of time and consisting of around 11000 data points, and I got these images: [![scatter plot of GC pause times - log2](https://i.stack.imgur.com/RBO52.png)](https://i.stack.imgur.com/RBO52.png) [![logarithmic histogram of gc log 2](https://i.stack.imgur.com/yWKef.png)](https://i.stack.imgur.com/yWKef.png) *In the above image, the X axis is the base-10 logarithm of the GC pause time, the Y axis is the number of occurrences. The histogram has 500 bins.* [![log-log histogram of gc log 2](https://i.stack.imgur.com/OcvkX.png)](https://i.stack.imgur.com/OcvkX.png) *In the above image, the X axis is the base-10 logarithm of the GC pause time, the Y axis is the base-10 logarithm of the number of occurrences. The histogram has 50 bins.* In this case, since you've been running the app on a different computer and using different GC settings, the distribution of the GC pause times is different from the first log file. Most of them are in the sub-millisecond range, with a few tens, maybe hundreds, in the hundredth-of-a-second range. We also have a few outliers here that are in the 1–2 second range. There are 8 such GC pauses, and they all correspond to the 8 full GCs that occurred. The difference between the two logs, and the lack of high GC pause times in the first log file, might be attributed to the fact that the machine running the app that produced the first log file has double the RAM of the second (8 GB vs 4 GB) and the JVM was also configured to run the parallel collector. If you're aiming for low latency, you're probably better off with the first JVM configuration, as it seems that the full GC times are consistently lower than in the second config. It's hard to tell what the issue with your app is, but it seems it's not GC related. Upvotes: 2 <issue_comment>username_3: Check your log files.
I have seen a similar issue in production recently — and guess what the problem was: the logger. We use Log4j in non-async mode, but it is not a Log4j issue. Some exception or condition caused around a million lines to be logged in the span of 3 minutes. Coupled with high volume and other activity in the system, that led to high disk I/O, and the web application became unresponsive. [![enter image description here](https://i.stack.imgur.com/EbBIn.png)](https://i.stack.imgur.com/EbBIn.png) [![enter image description here](https://i.stack.imgur.com/rNb7N.png)](https://i.stack.imgur.com/rNb7N.png) Upvotes: 1
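To reproduce the kind of pause-time analysis in the accepted answer, the safepoint lines written by `-XX:+PrintGCApplicationStoppedTime` (which the question's options enable) can be parsed with a few lines of Python — a rough sketch, assuming the usual "Total time for which application threads were stopped: N seconds" wording:

```python
import re

# -XX:+PrintGCApplicationStoppedTime emits lines such as:
#   "Total time for which application threads were stopped: 0.0012345 seconds, ..."
STOPPED = re.compile(r"stopped: ([0-9.]+) seconds")

def pause_times(lines):
    """Extract every stopped-time value (in seconds) from a GC log."""
    return [float(m.group(1)) for line in lines for m in STOPPED.finditer(line)]

log = [
    "2018-03-14T10:00:01.000+0000: Total time for which application threads were stopped: 0.0012345 seconds",
    "2018-03-14T10:00:05.000+0000: Total time for which application threads were stopped: 0.2500000 seconds",
    "2018-03-14T10:00:06.000+0000: [GC (Allocation Failure) ...]",  # non-pause line, ignored
]
pauses = pause_times(log)
assert pauses == [0.0012345, 0.25]
```

From such a list it is straightforward to compute the count, maximum, and distribution of pauses that the answer plots.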
2018/03/14
2,162
6,558
<issue_start>username_0: I'm making an array that will look like `var qwe = [[a,b],[c],[d]]`, with `a` and `b` being the identifiers. `a`–`d` are coming from reading the DOM. My current JS is doing what I want it to, but I want to combine the similar arrays by their identifiers. Running the below code will give me:

```
qwe = [
  [100,200],[3],[2],
  [200,300],[12],[4],
  [100,200],[2],[6]
]
```

but I want the final array to add up the similar arrays by their identifiers, so it will end up looking like (based on the previous example):

```
qwe = [
  [100,200],[5],[8],
  [200,300],[12],[4]
]
```

HTML
----

```
| ID | Location | Value | Other | | --- | --- | --- | --- | | | | | | | | | | | | | | | | 
```

JS
--

```
var table = document.querySelectorAll('[name="itinValue"]');
var qwe = [];

for(var i = 0; i < table.length; i++) {
  var a = document.getElementsByName('itinValue')[i].value;
  var b = document.getElementsByName('location')[i].value;
  var c = document.getElementsByName('num')[i].value;
  var d = document.getElementsByName('other')[i].value;

  var x = [[a,b],[c],[d]];

  //Compare, find, add here
  //if identifiers do not exist
  qwe.push(x);
}
```

Here is a fiddle of my example, which also outputs the HTML correctly: <https://jsfiddle.net/3oge7wxg/125/>
```js var table = document.querySelectorAll('[name="itinValue"]'), qwe = [], a, b, c, d, i, item; for (i = 0; i < table.length; i++) { a = +document.getElementsByName('itinValue')[i].value; b = +document.getElementsByName('location')[i].value; c = +document.getElementsByName('num')[i].value; d = +document.getElementsByName('other')[i].value; item = qwe.find(([[l, r]]) => l === a && r === b); if (item) { item[1][0] += c; item[2][0] += d; } else { qwe.push([[a, b], [c], [d]]); } } console.log(qwe); ``` ```html | ID | Location | Value | Other | | --- | --- | --- | --- | | | | | | | | | | | | | | | | ``` Version with [`Map`](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Map). ```js var table = document.querySelectorAll('[name="itinValue"]'), qwe = [], a, b, c, d, i, item map = new Map; for (i = 0; i < table.length; i++) { a = +document.getElementsByName('itinValue')[i].value; b = +document.getElementsByName('location')[i].value; c = +document.getElementsByName('num')[i].value; d = +document.getElementsByName('other')[i].value; item = map.get([a, b].join('|')); if (item) { item[1][0] += c; item[2][0] += d; } else { item = [[a, b], [c], [d]] map.set([a, b].join('|'), item); qwe.push(item); } } console.log(qwe); ``` ```html | ID | Location | Value | Other | | --- | --- | --- | --- | | | | | | | | | | | | | | | | ``` Upvotes: -1 [selected_answer]<issue_comment>username_3: I would use objects, just create a composite key: ``` var table = document.querySelectorAll('[name="itinValue"]'); var qwe = {}; for(var i = 0; i < table.length; i++) { var a = document.getElementsByName('itinValue')[i].value; var b = document.getElementsByName('location')[i].value; var c = new Number(document.getElementsByName('num')[i].value); var d = new Number(document.getElementsByName('other')[i].value); var key = a + "_" + b; previousValue = qwe[key]; qwe[key] = previousValue ? 
[previousValue[0] + c, previousValue[1] + d] : [c, d];
}
```

You can convert to your desired array like so:

```
Object.keys(qwe).map(key => [key.split("_")].concat(qwe[key]));
```

<https://jsfiddle.net/3oge7wxg/161/>

Edit: Number constructors; added fiddle. Upvotes: 0 <issue_comment>username_4: There are key facts I am taking note of in your question: * you are looping over an array of data; * you are storing data based on a key which is a tuple; * values whose keys match are added together; * a, b, c and d are all ints, so if they arrive as strings you would need to run `parseInt()` on them (this can be done by checking whether the current type is a string and, if so, converting it). Since the key is a tuple, and tuples are not implemented in JavaScript, you can do something like this:

```
var m = {};
var table = document.querySelectorAll('[name="itinValue"]');

for(var i = 0; i < table.length; i++) {
    var a = +document.getElementsByName('itinValue')[i].value;
    var b = +document.getElementsByName('location')[i].value;
    var c = +document.getElementsByName('num')[i].value;
    var d = +document.getElementsByName('other')[i].value;

    var key = a.toString() + "-" + b.toString(); //creates key = "100-200"
    if (m[key]){
        m[key] = [m[key][0] + c, m[key][1] + d]
    }else{
        m[key] = [c,d]
    }
}
```

In the end, your map will have unique keys and look like:

```
{
  "100-200": [5,8],
  "200-300": [12,4]
}
```

And if for whatever reason you need to break up the keys later, you just split on the key: `map.keys[index].split("-")`. I think this is clean, but if you want to go a bit further, you could turn it into a class. You then store the information in qwe. If you need to, you can easily convert that from a map to a list, but it depends on your desired implementation goal. The key difference generally is whether or not you wish to maintain order.
qwe is populated only with this information, and given your comment that this implementation of yours is not the best one, I have enough insight to believe that order isn't really as important as the preservation of the key data elements: the key/tuple and the 2 values. Upvotes: 0 <issue_comment>username_5: If you know the number of fields per row, here is an alternate way of retrieving your array. ``` var qwe = {}; var els = document.querySelectorAll('table#tab input'); for (i=0; i ``` Upvotes: 0
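The composite-key aggregation these answers describe can be exercised without a DOM — a sketch with the form reads replaced by an in-memory array (field names here are illustrative):

```javascript
// Aggregate rows by the (id, location) pair using a plain object as the map.
const rows = [
  { id: 100, loc: 200, num: 3, other: 2 },
  { id: 200, loc: 300, num: 12, other: 4 },
  { id: 100, loc: 200, num: 2, other: 6 },
];

const byKey = {};
for (const r of rows) {
  const key = r.id + "_" + r.loc;          // composite key, e.g. "100_200"
  if (!byKey[key]) byKey[key] = [0, 0];
  byKey[key][0] += r.num;
  byKey[key][1] += r.other;
}

// Rebuild the [[a,b],[c],[d]] shape from the aggregated map.
const qwe = Object.keys(byKey).map(k =>
  [k.split("_").map(Number), [byKey[k][0]], [byKey[k][1]]]
);

console.log(JSON.stringify(qwe));
// [[[100,200],[5],[8]],[[200,300],[12],[4]]]
```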
2018/03/14
344
1,303
<issue_start>username_0: I was going through the kernel source code and I found this statement:

```
char *tagp = NULL;
/* ...CODE... */
tagp = &descriptor->b_data[sizeof(journal_header_t)];
```

I wonder **why** this address is stored in a char pointer rather than some other type more related to what it represents, such as maybe `void *` if it is meant to be opaque.<issue_comment>username_1: The individual cases may have their explicit use-cases, but in general, this is useful for two reasons:

* a `char` pointer has the same alignment requirement as a `void` pointer;
* a `char` pointer can be used to access (via dereference) any other type of data, starting from the lowest-addressed byte of the object (successive increments of the result, up to the size of the object, yield pointers to the remaining bytes of the object). Also, pointer arithmetic, if needed, is allowed on `char` pointers but not on `void` pointers.

Thus, using a `char *` is more robust. Upvotes: 3 [selected_answer]<issue_comment>username_2: If it is old code, then the type `char *` was used instead of the type `void *` because the type `void` was absent from C at that time. Upvotes: 2 <issue_comment>username_3: Because in the kernel we need to operate on bytes, and then we can do operations like `&`, `<<` and `>>`. Upvotes: 0
2018/03/14
795
2,853
<issue_start>username_0: I am getting the following message in one of my commit messages when I view my local git log. > > HEAD -> dev, origin/master, origin/dev, master > > > Can anybody please explain? Update ====== Finally, I've come up with an explanatory answer: * Head -> dev: This is showing the current branch/commit the Head is pointing at * origin/master, origin/dev: It means both the local master and dev branches are in sync with the remote master & dev branches * test-delete, master: These are the names of other branches besides the dev branch in the current repository<issue_comment>username_1: `git log` (which is the underlying command for showing history) can annotate the history display in various ways. One of these is to show what branches reference a commit. Specifically, the `--decorate` option will "Print out the ref names of any commits that are shown" (branches and tags are two examples of refs). Upvotes: 2 <issue_comment>username_2: > > `HEAD -> dev, origin/master, origin/dev, master` > > > [That would really look something like this](https://i.stack.imgur.com/bJBMe.jpg): ``` commit aa1124b89f38eed793e2b9f2d2b2ba5d80a27a20 (HEAD -> dev, origin/master, origin/dev, master) Author: Some Person Date: Sat Apr 14 12:06:02 PDT 2018 ``` That is the result of `git log --decorate` or having `log.decorate` set to `short` in your config. It shows you anything which refers to each commit (references are things like branches and tags). This is important information for understanding the log. This means that the local branches `dev` and `master`, plus the remote branches `origin/master` and `origin/dev`, plus the special reference `HEAD`, all point at commit `aa1124b89f38`. `HEAD` is itself a special reference pointing to the currently checked out commit. `HEAD -> dev` says `dev` is the currently checked out branch. Having `dev` and `master` at the same commit means there are no differences between `dev` and `master`. 
`origin/master` is the remote tracking branch for `master`. It keeps track of where `master` was on the remote called `origin` the last time you ran `git fetch` (or `git pull`, which does a `git fetch`); Git doesn't continuously know the state of the remotes, it only looks when you ask. Having `origin/master` and `master` pointing at the same commit says you haven't committed anything to `master` since the last time you looked at `origin`. In sum... * All those branches point at the same commit as `HEAD`, which is what you have checked out. * `dev` is the currently checked out branch. * `dev` and `master` are at the same commit; they have no differences. * Nothing has been added to `dev` nor `master` since your last `git fetch`. See also * [`gitglossary`](https://git-scm.com/docs/gitglossary) * [`gitrevisions`](https://git-scm.com/docs/gitrevisions) Upvotes: 2 [selected_answer]
2018/03/14
835
2,900
<issue_start>username_0: I made a java project. The project is....output log message and system.out.println message. just simple. So I changed into a jar file(the name is LinuxSample.jar). and I wrote a shell script to run this jar file. Look at this shell script. (speakee is package name and PrintLinux is main class name) ``` #!bin/bash CLASSPATH=/home/tangooc/TANGOOC/test/libs/* CLASSPATH="${CLASSPATH};/home/tangooc/TANGOOC/test/linux/LinuxSample.jar" java speakee.PrintLinux ``` this jar file and this shell script work in Window. but linux didn't work. I don't know why this is error message. ``` Could not find or load main class ```
2018/03/14
838
2,815
<issue_start>username_0: Can anyone explain to me why these two conditions produce different outputs (even different count() )? FIRST: ``` (df .where(cond1) .where((cond2) | (cond3)) .groupBy('id') .agg(F.avg(F.column('col1')).alias('name'), F.avg(F.column('col2')).alias('name')) ).count() ``` SECOND: ``` (df .groupBy('id') .agg(F.avg(F.when(((cond2) | (cond3))) & (cond1), F.column('col1'))).alias('name'), F.avg(F.when(((cond2) | (cond3)) & (cond1), F.column('col2'))).alias('name')) ).count() ```
2018/03/14
791
3,305
<issue_start>username_0: I'm trying to add the notepad++ editor in my git config using: `git config --global core.editor "C:\LegacyApp\Notepad++\notepad++.exe"` The error I'm getting is: `error: failed to write new configuration file H://.gitconfig.lock` I can't locate the `.gitconfig.lock` file anywhere. I have already checked the ownership rights. It's worth pointing out that my `.gitconfig` file is empty; I guess this is where the problem is coming from, but I have no more ideas how to solve it, except to reinstall my Git Bash. Does anyone have any other thoughts on how I can overcome this problem?<issue_comment>username_1: (This should be a comment—it's a guide, not an answer—but I'm too verbose. :-) ) Git writes out a *new* configuration by copying the existing one to a temporary file, making whatever changes are required along the way. (This is a typical work-flow for such programs: read the existing file, make substitutions, write out to a temporary file as you work.) If *two* users, or two processes for a single user, could perhaps be running simultaneously with *both* trying to make changes, one process's changes could be lost. Say process A opens the original file and begins making changes while process B also opens the original file and begins making changes. Process A finishes first, replaces the original file with its temporary copy, and quits. Then process B finishes, replaces the original file with *its* temporary copy, and quits. Process A's changes are now *gone*. Git's solution to this is to use a *lock:* before making any changes, any process must obtain the lock, with the lock being available to only one process at a time. Git combines this with the temporary file itself, by leveraging the idea that there's an operation that will create a file but fail if the file already exists. So if the configuration file is named `H:/.gitconfig`, the *temporary and lock* file can be named `H:/.gitconfig.lock`. 
This file will automatically be on the same storage device as the configuration file, so at the end, when the temporary file is complete, Git can *rename* `H:/.gitconfig.lock` to `H:/.gitconfig`, which both unlocks the configuration *and* updates it to the new configuration at the same time. What this all means is that Git believes the configuration file is named `H:/.gitconfig`, and it needs permission to create-with-exclusion (the lock action) `H:/.gitconfig.lock`, but it does not have such permission. The name `H:` appears to be a Windows drive specifier. I avoid Windows and don't know if there's a reason you would have your personal Git configuration live in a Windows drive, nor where the Windows-drive permissions might be stored, but that's where to look. Either your configuration should be elsewhere (e.g., `H:/yourname`) so that permissions are easier to control, or you must fuss with whatever the permissions are on the drive itself, rather than any files within the drive. Upvotes: 1 <issue_comment>username_2: Run Git as an administrator. I had the exact same error trying to enable Git rerere. I'm running Windows, so I typed Git into the search to bring up the Git Bash program, right-clicked and chose "Run as administrator". If that does not work, try restarting to make sure no other programs have write access to the file, and try again. Upvotes: 3
2018/03/14
876
3,196
<issue_start>username_0: I feel like I'm close but something is amiss. I'm getting undefined method 'each' in my show page. My show has the following: ``` Name: <%= @region.name %> Location <%= @region.each do |region| %> <%= region.location.name %> <% end %> ``` In my Regions Controller I have the following: ``` class Admin::RegionsController < Admin::ApplicationController belongs_to_app :regions add_breadcrumb 'Regions', :admin_regions_path before_action :load_region, only: [:show, :edit, :update, :destroy] def index @regions = Region.ordered.paginate(page: params[:page]) @regions = @regions.search(params[:search]) if params[:search] respond_with @regions end def show respond_with @region end def new @region = Region.new respond_with @region end def create @region = Region.new(region_params) flash[:notice] = 'Region created successfully' if @region.save respond_with @region, location: admin_regions_path end def edit respond_with @region end def update flash[:notice] = 'Region updated successfully' if @region.update_attributes(region_params) respond_with @region, location: admin_regions_path end def destroy flash[:notice] = 'Region deleted successfully' if @region.destroy respond_with @region, location: admin_regions_path end private def load_region @region = Region.find_by!(id: params[:id]) end def region_params params.require(:region).permit(:name, location_ids:[]) end end ``` I attempted to update my Locations Controller to have: ``` def show @region = Region.find(params[:region_d]) @location = @region.location end ``` I end up getting the exact same undefined method issue being presented. Am I missing something that's way too obvious that should allow me to pull on the location name for display? 
Ultimately I just want to see something like: Region: One Locations: Alpha, Beta, Gamma Probably need to put in something that delineates between the values that are being generated as well, but that's a different issue.<issue_comment>username_1: You need to change your view to ``` <% @region.locations.each do |location| %> <%= location.name %> <% end %> ``` @region is a single instance, not an array, so you can't use `each` on it. Upvotes: 2 [selected_answer]<issue_comment>username_2: > > I'm getting undefined method 'each' in my show page > > > [I set `@region` in my show method and] I end up getting the exact same undefined method issue being presented > > > Take a closer look at the error message. It is `undefined method 'each' for #<Region:...>` (or something similar), isn't it? The value of your `@region` is not a collection and thus doesn't know the method `.each`. To avoid this error, call `.each` on a corresponding collection (*whatever that might be in your app. Seems to be `@region.locations`*) Upvotes: 2 <issue_comment>username_3: So username_1 gave the right answer to this particular instance and Sergio schooled me on what I missed/how it operates. But I wanted to post the answer that I ultimately used, which took username_1's response a bit further to produce the desired result that I posted about. ``` <%= @region.locations.map(&:name).join(', ') %> ``` Upvotes: 1
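For comparison, the same map-and-join idea in plain JavaScript (the sample location names are taken from the desired output above):

```javascript
// Build "Alpha, Beta, Gamma" from objects with a name property,
// mirroring Ruby's locations.map(&:name).join(', ').
const locations = [{ name: "Alpha" }, { name: "Beta" }, { name: "Gamma" }];
const label = locations.map((l) => l.name).join(", ");
// label is "Alpha, Beta, Gamma"
```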
2018/03/14
803
2,916
<issue_start>username_0: I am totally new to Angular so please be forgiving:). I try to come up with an Overview Page (overview.component) which shows a list consisting of Product Ideas (idea.component). The code of the **Overview Component (overview.component)** looks as follows: ``` import {Component, TemplateRef} from '@angular/core'; import { IdeaComponent } from '../Components/ideaComponent/idea.component'; import { BsModalService } from 'ngx-bootstrap/modal'; import { BsModalRef } from 'ngx-bootstrap/modal/bs-modal-ref.service'; import { FormsModule } from '@angular/forms'; @Component({ selector: 'app-overview-component', templateUrl: './overview.component.html' }) export class OverviewComponent { modalRef: BsModalRef; newIdea = new IdeaComponent(); ideas = [ new IdeaComponent('Test Project', '<NAME>', new Date(2018, 3, 5)), new IdeaComponent('Test Project', '<NAME>', new Date(2018, 3, 5)), ]; constructor(private modalService: BsModalService) {} public openModal(template: TemplateRef) { this.modalRef = this.modalService.show(template); } } ``` The **HTML of the Overview Page** (overview.component.html) looks as follows: ``` Overview ======== ##### Create your own Product Idea: Some quick example text to build on the card title and make up the bulk of the card's content. Create new Product Idea #### Modal × Titel: Author: Close Save changes #### Current Product Ideas: ``` The code of the **Idea Component (idea.component)** looks as follows: ``` import {Component } from '@angular/core'; @Component({ selector: 'app-idea-component', templateUrl: './idea.html' }) export class IdeaComponent { constructor (public title: string = '', public author: string = '', public date: any = '') {} } ``` Angular now reports an error that it can't resolve all parameters for IdeaComponent. I would be very thankful for some support on this. 
Best regards<issue_comment>username_1: You try to pass your inputs to your IdeaComponent via the constructor, which causes the Angular DI system to look for providers for title, author and date. You need to use `Inputs` for [component interaction](https://angular.io/guide/component-interaction): ``` import { Component, Input } from '@angular/core'; @Component({ selector: 'app-idea-component', templateUrl: './idea.html' }) export class IdeaComponent { @Input() public title: string = ''; @Input() public author: string = ''; @Input() date: any = ''; } ``` Upvotes: 1 <issue_comment>username_2: Change IdeaComponent component to: ``` import {Component, Input } from '@angular/core'; @Component({ selector: 'app-idea-component', templateUrl: './idea.html' }) export class IdeaComponent { @Input() title: string = '' @Input() author: string = '' @Input() date: any = '' constructor () {} } ``` Upvotes: 0
2018/03/14
884
3,030
<issue_start>username_0: I'm working on Django REST and Angular. This JSON array is coming from the server; it contains category and subCategory values. I'm trying to build a dynamic navbar, so I want to arrange this data like `[ [web development] : ["subCat1","subcat2",....] [android development] : ["subCat1","subcat2",....] ]` To access a category and its related subcategory I tried something like this, but it sets only the keys and the values are empty: ``` public categories = [[], [], [], []]; public data; for (let i = 0; i < this.data.length; i++) { if (this.data[i].cat_id != null) { this.categories[i][this.data[i].title] = []; } if (this.data[i].parent_id != null && this.data[i].parent_id == this.data[i].cat_id) { this.categories[i][this.data[i].title] = [this.data[i].title] } } ``` This is the server response: ``` [ { "id": 5, "cat_id": 0, "parent_id": null, "title": "web development" }, { "id": 6, "cat_id": 1, "parent_id": null, "title": "android development" }, { "id": 7, "cat_id": null, "parent_id": 0, "title": "php" }, { "id": 8, "cat_id": null, "parent_id": 1, "title": "java" } ] ```<issue_comment>username_1: I don't know if I understand you correctly. I assume that the "data" variable contains the JSON and that the expected output would be: ``` { "web development": ["php"], "android development": ["java"] } ``` This could be achieved by first creating a "categories" object; this object will be used to get a "title" for a given "parent\_id". ``` const categories = data.reduce((acc, d) => { if (d.parent_id === null) { acc[d.cat_id] = d.title } return acc; }, {}); ``` Then create the structure with ``` const structure = {}; data.forEach((d) => { if (d.parent_id === null) { structure[d.title] = []; } else { structure[categories[d.parent_id]].push(d.title); } }); ``` Structure now contains the data as I described earlier. 
Upvotes: 0 <issue_comment>username_2: Here is a bit of code which does what you want: ``` interface Categories { [title: string]: string[] } let categories: Categories = {}; data.filter(c => c.parent_id === null).map(c => <{ title: string; subcategories: string[] }>{ title: c.title, subcategories: data.filter(sc => sc.parent_id === c.cat_id).map(sc => sc.title) }).forEach(c => { categories[c.title] = c.subcategories; }); console.log(categories); ``` As you see, I define an interface. Then I create a temporary array with objects containing a title and its subcategories. Afterwards, I flatten that structure, turning it into what you need. The output is: ``` { "web development": ["php"], "android development": ["java"] } ``` Upvotes: 2 [selected_answer]
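The two-step reduce/forEach approach from the first answer can be run end-to-end against the sample server response from the question:

```javascript
// Step 1: map each top-level cat_id to its title.
// Step 2: group child titles under their parent category's title.
const data = [
  { id: 5, cat_id: 0, parent_id: null, title: "web development" },
  { id: 6, cat_id: 1, parent_id: null, title: "android development" },
  { id: 7, cat_id: null, parent_id: 0, title: "php" },
  { id: 8, cat_id: null, parent_id: 1, title: "java" },
];

const titles = data.reduce((acc, d) => {
  if (d.parent_id === null) acc[d.cat_id] = d.title;
  return acc;
}, {});

const structure = {};
for (const d of data) {
  if (d.parent_id === null) structure[d.title] = [];
  else structure[titles[d.parent_id]].push(d.title);
}
// structure is { "web development": ["php"], "android development": ["java"] }
```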
2018/03/14
842
2,882
<issue_start>username_0: I have been getting this error on Nopcommerce all day, the site was workign normally before then all of a sudden started getting > > Error while running the 'Keep alive' schedule task. The remote server > returned an error: (500) Internal Server Error.System.Net.WebException: > > > The remote server returned an error: (500) Internal Server Error. > > at System.Net.WebClient.DownloadDataInternal(Uri address, WebRequest& > request) at System.Net.WebClient.DownloadString(Uri address) at > System.Net.WebClient.DownloadString(String address) at > Nop.Services.Common.KeepAliveTask.Execute() in > C:\Users\spadmin\Documents\WebStorm\CVLWeb\Libraries\Nop.Services\Common\KeepAliveTask.cs:line > 27 at Nop.Services.Tasks.Task.Execute(Boolean throwException, > Boolean dispose, Boolean ensureRunOnOneWebFarmInstance) in > C:\Users\spadmin\Documents\WebStorm\CVLWeb\Libraries\Nop.Services\Tasks\Task.cs:line > 163 > > > I have tried all the suggestion from the forum which include updating the KeepAlive link in Common/KeepAlive.cs, to create a file ina folder called KeepAlive in the Root. The WebSite was running fine until this morning when the error started showing up. I am running version 3.9 of NopCommerce on Azure Website.
2018/03/14
507
1,704
<issue_start>username_0: I want to iterate through my table, check to see if the quantity received is higher than the quantity expected for each row, and if so execute some code. The way I've done it below feels amateurish. ``` bool allRecieved = true; foreach (DataRow row in SubVwr.Tables[0].Tbl.Rows) { if(row["QtyRcvd"] < row["QtyPer"]) allRecieved = false; } if(allRecieved) // execute code ```<issue_comment>username_1: You can use LINQ, for better readability (presuming those are `int` columns): ``` bool allRecieved = SubVwr.Tables[0].Tbl.AsEnumerable() .All(row => row.Field<int>("QtyRcvd") >= row.Field<int>("QtyPer")); ``` An advantage over your current loop is that this will stop as soon as one record doesn't match the condition. Your loop will continue to the end without a `break`. Upvotes: 3 [selected_answer]<issue_comment>username_2: This is a bit radical, but I'd start by ***not using `DataTable`***, and instead use a simple model: ``` public class SomeType { // I'd probably name these QuantityReceived etc, but... meh public int QtyRcvd {get;set;} public int QtyPer {get;set;} // ... etc } ``` Then I can very conveniently check properties etc. For example, to mirror [username_1's LINQ answer](https://stackoverflow.com/a/49280194/23354): ``` List<SomeType> someData = ... var allReceived = someData.All(x => x.QtyRcvd >= x.QtyPer); ``` So now all we need is to load `SomeType` from the DB, which is where ORMs and micro-ORMs excel. For example, with "Dapper", this would be: ``` string region = "North"; // just to show parameter usage var someData = connection.Query<SomeType>( @"select * from SomeTable where Region=@region", new { region }).AsList(); ``` Upvotes: 1
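The short-circuit behaviour the accepted answer points out is the same as JavaScript's `Array.prototype.every`; a quick sketch as a cross-language analogy (the sample rows are made up):

```javascript
// every() stops at the first row that fails the predicate, just as
// LINQ's All() stops at the first non-matching record.
const rows = [
  { qtyRcvd: 5, qtyPer: 3 },
  { qtyRcvd: 1, qtyPer: 4 }, // fails here; the third row is never inspected
  { qtyRcvd: 9, qtyPer: 2 },
];

let inspected = 0;
const allReceived = rows.every((r) => {
  inspected += 1;
  return r.qtyRcvd >= r.qtyPer;
});
// allReceived is false, and only 2 of the 3 rows were inspected
```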
2018/03/14
432
1,485
<issue_start>username_0: If a textfile contains a character, say space, as both a delimiter and part of text, how should we read the file using pandas read\_csv, read\_table or file read?
2018/03/14
361
1,221
<issue_start>username_0: I'd like to have two side menus in a dropdown in the navbar. In order to accomplish that I'd like to use flexbox. However, the words are wrapping and I would like them not to, and instead to grow to fit the width of the content. **Tip**: You'll need to expand the fiddle's rendering width to avoid the bootstrap collapse. [fiddle](https://jsfiddle.net/qprus35e/5/) ``` * [Dropdown](#) + [Left Side](#) + [Longer content on the left side](#) + [Log In](#) + [Sign Up](#) .flex-container { display: flex; justify-content: space-between; } ```<issue_comment>username_1: You need to override Bootstrap's default style on `.dropdown-menu` with something like: ``` .dropdown-menu { min-width: 300px; } ``` Hope this helps. Upvotes: 0 <issue_comment>username_2: I found the answer [here](https://stackoverflow.com/questions/15055040/dynamic-width-for-twitter-bootstrap-dropdown-menu). I was looking to avoid setting a fixed min-width. ``` .dropdown-menu > li > a { white-space: nowrap; } ``` Upvotes: 1 <issue_comment>username_3: FYI, you can accomplish this by applying the "text-nowrap" class available in Bootstrap 5 to the ul or div elements. ``` ``` Upvotes: -1
2018/03/14
901
2,951
<issue_start>username_0: CakePHP 3.x According to the [docs](https://book.cakephp.org/3.0/en/orm/query-builder.html) > > To limit the number of rows or set the row offset you can use the `limit()` and `page()` methods: > > > ``` // Fetch rows 50 to 100 $query = $articles->find() ->limit(50) ->page(2); ``` What this does is generate the equivalent of the following SQL query: ``` LIMIT 50 OFFSET 50 ``` I can't work out how you calculate the value 2 to pass to `page()`. As per the comment it says "fetch rows 50 to 100", but dividing 100/50 to get 2 is not the answer. Assume you wanted to calculate the `page()` values for the following: ``` LIMIT 250 OFFSET 250 // page(2) LIMIT 250 OFFSET 500 // page(3) LIMIT 250 OFFSET 750 // page(4) LIMIT 250 OFFSET 1000 // page(5) ``` I've put in a comment - from doing some debugging - what the values of `page()` are to generate those `OFFSET` queries. Am I missing something here? If we take the last example, `LIMIT 250 OFFSET 1000`, where do you get a value of 5 for `page()`? For clarification, the PHP used for that last condition would look like this: ``` $query = $model->find()->limit(250)->page(5); ``` Then `debug($query);` will show that the actual SQL generated is ``` SELECT ... LIMIT 250 OFFSET 1000 ``` Given that many libraries (e.g. DataTables) would only provide you with the values for `LIMIT` (250) and `OFFSET` (1000), how are you supposed to calculate the `page()` value that Cake needs to produce the actual results from the database?<issue_comment>username_1: The answer, which was provided by @ndm in a comment, is that there is an `offset()` method. This method is not well documented - it doesn't appear in the Cookbook but does appear in the API Docs: <https://api.cakephp.org/3.5/class-Cake.Database.Query.html#_offset> So, the answer to the question is that it can be done like this: ``` $query = $model->find()->limit(250)->offset(1000); ``` It is also worth noting that the Cookbook does not document every feature. 
There are some things that appear in the API Docs that are not in the Cookbook, so it is worth checking both. This is a good example of such a thing. Upvotes: 2 <issue_comment>username_2: It's probably too late to answer, but recently I had similar issues while building a pagination query and got to learn a bit about this. When we set `limit($limit)->page($page)`, this will divide all the results into equal parts, with each part holding `$limit` results. So imagine this as multiple pages, each holding `$limit` results. Then `$page` indirectly acts as the offset for this query. For example, `LIMIT 250 OFFSET 1000` means: total results divided into parts of 250 results each, and we want results from the 1000th offset. Since offset indexing is 0-based, this offset will be on the 5th page - the 1001st record. That means the value that should be given to `page()` will be 5. Upvotes: 1
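The page/offset relationship in both answers is plain arithmetic: `OFFSET = (page - 1) * LIMIT`. A quick sketch (in JavaScript, purely to illustrate the conversion; it assumes the offset is an exact multiple of the limit):

```javascript
// Convert between the page number that page() expects and a raw
// LIMIT/OFFSET pair, as supplied by libraries like DataTables.
const offsetForPage = (limit, page) => (page - 1) * limit;
const pageForOffset = (limit, offset) => offset / limit + 1;

// LIMIT 250 OFFSET 1000 corresponds to page(5):
const page = pageForOffset(250, 1000);
// limit(50)->page(2) corresponds to LIMIT 50 OFFSET 50:
const offset = offsetForPage(50, 2);
```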
2018/03/14
476
1,742
<issue_start>username_0: How I can check if event was fired programmatically? Namely I have an [`Ext.tree.Panel`](http://docs.sencha.com/extjs/6.2.1/classic/Ext.tree.Panel.html) and listen for it `selectionchange` event. How I can check within handler if event was fired manually by user (row click) or via `select()` method?
2018/03/14
920
3,734
<issue_start>username_0: I am new to XML - XSLT transformation. I have the following XML file and the corresponding XSLT file, but for some reason, it does not display anything past the h2, which is HTML. Any help will be great! I have checked the namespace to make sure it matches the XML document, and I also tried running the code on the W3 site as well. Both files are below: XML ``` xml version="1.0" encoding="UTF-8"? Objective Adaptable Data Analyst skilled in recording, interpreting and analyzing data in a fast-paced environment. Advanced proficiency in all aspects of Excel. Experienced in preparing detailed documents and reports while managing complex internal and external data analysis responsibilities. Core Qualifications Data and statistical analysis PC/Mac SAS and MS Excel proficiency Report generation Time management Project management Interpersonal communication Data Analyst 9/1/2012 Present New Parkland Corporation New Parkland CA Interpret data from primary and secondary sources using statistical techniques and provide ongoing reports. Compile and validate data; reinforce and maintain compliance with corporate standards. Develop and initiate more efficient data collection procedures. Working with managing leadership to prioritize business and information requirements. Data Analyst 6/1/2011 5/1/2012 Lake City Industries Lake City CA Extracted, compiled and tracked data, and analyzed data to generate reports. Worked with other team members to complete special projects and achieve project deadlines. Developed optimized data collection and qualifying procedures. Leveraged analytical tools to develop efficient system operations.
Data Analyst 7/1/2010 2/1/2011 New Parkland Data Research Center New Parkland CA Performed daily data queries and prepared reports on daily, weekly, monthly, and quarterly basis Used advanced Excel functions to generate spreadsheets and pivot tables Education Bachelor of Science Computer Science New Parkland Business College New Parkland CA 2014 Masters of Science Finance University of California New Parkland CA 2010 ``` XSLT ``` xml version="1.0" encoding="UTF-8"? Alphaba Resume - Long version ----------------------------- ```<issue_comment>username_1: You're iterating through `objective` elements, and trying to output the value of another `objective` element within each, but there aren't any. Should you be doing ``` ``` instead? Upvotes: 0 <issue_comment>username_2: My XSL will take care of converting the output into text for the tags. The XSL is more flexible in case someone embeds an objective somewhere in the resume that you didn't expect. ``` ``` I use `name()` because the namespace isn't well defined, so `obj_body` needs to be evaluated as a string. Upvotes: 0 <issue_comment>username_3: You mention that you have "checked the namespace to make sure it matches the XML document", but there is no reference at all to the namespace "<https://swe.umbc.edu/~asacko1/xs>" in your XSLT. In your XML, "<https://swe.umbc.edu/~asacko1/xs>" is the default namespace. In XSLT 1.0, the way to handle these is to declare it with a namespace prefix, and use that prefix throughout the XSLT. Try this XSLT ``` Alphaba Resume - Long version ----------------------------- ``` Note that the choice of prefix here (swe) is arbitrary. It could be anything, just as long as the namespace URI matches. If you could use XSLT 2.0, things are slightly simpler when it comes to default namespaces, as you could use `xpath-default-namespace` instead, and not have to worry about adding a prefix: ``` Alphaba Resume - Long version ----------------------------- ``` Upvotes: 1
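username_3's point about default namespaces is easy to reproduce outside XSLT. The sketch below uses Python's standard `xml.etree.ElementTree` on a made-up one-element document: an unqualified path matches nothing once a default namespace is declared, while binding the URI to an (arbitrary) prefix makes the lookup work, which is the same reason the stylesheet above matched no nodes.

```python
import xml.etree.ElementTree as ET

# A tiny stand-in for the resume: the root declares a default namespace,
# just like xmlns="https://swe.umbc.edu/~asacko1/xs" in the question.
xml = ('<resume xmlns="https://swe.umbc.edu/~asacko1/xs">'
      '<objective>Skilled analyst</objective></resume>')
root = ET.fromstring(xml)

# An unqualified path finds nothing, because every element lives in the
# default namespace.
print(root.find('objective'))  # None

# Binding the namespace URI to a prefix (the prefix itself is arbitrary)
# makes the same lookup succeed.
ns = {'swe': 'https://swe.umbc.edu/~asacko1/xs'}
print(root.find('swe:objective', ns).text)  # Skilled analyst
```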
2018/03/14
603
2,146
<issue_start>username_0: I use Laravel 5.6 and I use <https://laravel.com/docs/5.6/validation#form-request-validation> for server-side validation. My controller looks like this: ``` php .... use App\Http\Requests\UserUpdateRequest; class UserController extends Controller { ... public function update(UserUpdateRequest $request) { // dd($request->all()); } } ``` Before running the statements in the update method, it will call UserUpdateRequest for server-side validation. The validation looks like this: ``` namespace App\Http\Requests; use Illuminate\Foundation\Http\FormRequest; class UserUpdateRequest extends FormRequest { .... public function rules() { dd($this->request->all()); return [ 'name' => 'required|max:50', 'gender' => 'required', 'birth_date' => 'required', 'address' => 'required', 'status' => 'required' ]; } } ``` The result of dd($this->request->all()); looks like this: ``` Array ( [selected_data] => {"name":"agis","gender":"2","birth_date":"2018-03-13","address":"london"} ) ``` How can I run validation when the data is a JSON object like that?<issue_comment>username_1: You can use dot notation like so: ``` public function rules() { return [ 'selected_data.name' => 'required|max:50', 'selected_data.gender' => 'required', 'selected_data.birth_date' => 'required', 'selected_data.address' => 'required', 'selected_data.status' => 'required', ]; } ``` Read more about it here: [Validating Arrays](https://laravel.com/docs/5.6/validation#validating-arrays). Hope this helps. Upvotes: 4 <issue_comment>username_2: I would add the \* because if you pass multiple objects you need to verify them all. So like this: ``` public function rules() { return [ 'selected_data.*.name' => 'required|max:50', 'selected_data.*.gender' => 'required', 'selected_data.*.birth_date' => 'required', 'selected_data.*.address' => 'required', 'selected_data.*.status' => 'required', ]; } ``` Upvotes: 2
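As a language-neutral illustration of what those dot-notation rules check, here is a Python sketch (the `validate_required` helper is hypothetical, not Laravel code). Note that in the question the payload arrives as a JSON string, so it has to be decoded before any nested keys can be inspected:

```python
import json

def validate_required(payload, rules):
    """Hypothetical dot-notation 'required' checker: walk each dotted
    path and report the paths whose value is missing or empty."""
    errors = []
    for path in rules:
        node = payload
        for key in path.split('.'):
            node = node.get(key) if isinstance(node, dict) else None
            if node is None:
                break
        if node is None or node == '':
            errors.append(path)
    return errors

# In the question the request body arrives as a JSON string; decode first.
selected = json.loads('{"name":"agis","gender":"2",'
                      '"birth_date":"2018-03-13","address":"london"}')
payload = {'selected_data': selected}

rules = ['selected_data.name', 'selected_data.gender',
         'selected_data.birth_date', 'selected_data.address',
         'selected_data.status']
print(validate_required(payload, rules))  # ['selected_data.status']
```

Only `status` is reported, because it is the one required key absent from the decoded object.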
2018/03/14
2,574
5,571
<issue_start>username_0: If someone had this issue, I will be sincerely thankful for your help. I'm having this error sending local email with hotmail: ``` Traceback (most recent call last): File "/usr/local/lib/python3.6/site-packages/huey/consumer.py", line 169, in process_task task_value = self.huey.execute(task) File "/usr/local/lib/python3.6/site-packages/huey/api.py", line 357, in execute result = task.execute() File "/usr/local/lib/python3.6/site-packages/huey/api.py", line 842, in execute return func(*args, **kwargs) File "/usr/local/lib/python3.6/site-packages/huey/contrib/djhuey/__init__.py", line 108, in inner return fn(*args, **kwargs) File "/code/app/tasks.py", line 89, in send_email email_message.send() File "/usr/local/lib/python3.6/site-packages/django/core/mail/message.py", line 348, in send return self.get_connection(fail_silently).send_messages([self]) File "/usr/local/lib/python3.6/site-packages/django/core/mail/backends/smtp.py", line 111, in send_messages sent = self._send(message) File "/usr/local/lib/python3.6/site-packages/django/core/mail/backends/smtp.py", line 127, in _send self.connection.sendmail(from_email, recipients, message.as_bytes(linesep='\r\n')) File "/usr/local/lib/python3.6/smtplib.py", line 888, in sendmail raise SMTPDataError(code, resp) smtplib.SMTPDataError: (432, b'4.3.2 STOREDRV.ClientSubmit; sender thread limit exceeded [Hostname=DM5PR19MB1050.namprd19.prod.outlook.com]') ERROR process_task Unhandled exception in worker thread Traceback (most recent call last): File "/usr/local/lib/python3.6/site-packages/huey/consumer.py", line 169, in process_task task_value = self.huey.execute(task) File "/usr/local/lib/python3.6/site-packages/huey/api.py", line 357, in execute result = task.execute() File "/usr/local/lib/python3.6/site-packages/huey/api.py", line 842, in execute return func(*args, **kwargs) File "/usr/local/lib/python3.6/site-packages/huey/contrib/djhuey/__init__.py", line 108, in inner return fn(*args, **kwargs) 
File "/code/app/tasks.py", line 89, in send_email email_message.send() File "/usr/local/lib/python3.6/site-packages/django/core/mail/message.py", line 348, in send return self.get_connection(fail_silently).send_messages([self]) File "/usr/local/lib/python3.6/site-packages/django/core/mail/backends/smtp.py", line 111, in send_messages sent = self._send(message) File "/usr/local/lib/python3.6/site-packages/django/core/mail/backends/smtp.py", line 127, in _send self.connection.sendmail(from_email, recipients, message.as_bytes(linesep='\r\n')) File "/usr/local/lib/python3.6/smtplib.py", line 888, in sendmail raise SMTPDataError(code, resp) smtplib.SMTPDataError: (554, b'5.2.0 STOREDRV.Submission.Exception:SendAsDeniedException.MapiExceptionSendAsDenied; Failed to process message due to a permanent exception with message Cannot submit message. 16.55847:050B0000, 17.43559:0000000094000000000000000000000000000000, 20.52176:140F248214004010F1030000, 20.50032:140F248285174010F1030000, 0.35180:0A00B681, 255.23226:F1030000, 255.27962:0A000000, 255.27962:0E000000, 255.31418:0A00F784, 16.55847:EC000000, 17.43559:0000000068010000000000000000000000000000, 20.52176:140F2482140000100A00F736, 20.50032:140F24828517001181170000, 0.35180:00000000, 255.23226:00000000, 255.27962:0A000000, 255.27962:32000000, 255.17082:DC040000, 0.27745:140F2482, 4.21921:DC040000, 255.27962:FA000000, 255.1494:A4010000, 0.37692:05000780, 0.37948:00000600, 5.33852:00000000534D545000000000, 4.56248:DC040000, 7.40748:010000000000010B32303A44, 7.57132:00000000000000003A346638, 1.63016:32000000, 4.39640:DC040000, 8.45434:0000060075AF4541000000000000000032000000, 5.10786:0000000031352E32302E303534382E3032303A444D35505231394D42313035303A34663832663161652D373465382D343830632D623637302D34326236313339313335383200401005000780, 255.1750:0A00FC83, 255.31418:41010000, 0.22753:0A001E85, 255.21817:DC040000, 4.60547:DC040000, 0.21966:4B010000, 4.30158:DC040000 [Hostname=DM5PR19MB1050.namprd19.prod.outlook.com]') ``` That 
is my configuration in settings.py. I tried changing the port to 587 and tried with a Gmail account, but it doesn't work: ``` EMAIL_USE_TLS = True EMAIL_HOST = 'smtp-mail.outlook.com' EMAIL_HOST_USER = '<EMAIL>' EMAIL_HOST_PASSWORD = '<PASSWORD>' EMAIL_PORT = 25 EMAIL_BACKEND = 'django.core.mail.backends.smtp.EmailBackend' ```<issue_comment>username_1: It's probably the missing "Email From" variable, either in settings.py or your view. Try this: ``` EMAIL_USE_TLS = True EMAIL_BACKEND = 'django.core.mail.backends.smtp.EmailBackend' EMAIL_HOST = 'smtp.live.com' EMAIL_HOST_USER = '<EMAIL>' DEFAULT_FROM_EMAIL = '<EMAIL>' EMAIL_FROM = '<EMAIL>' EMAIL_HOST_PASSWORD = '<PASSWORD>' EMAIL_PORT = 587 ``` `DEFAULT_FROM_EMAIL` is used in stock Django views, but if you have your own custom one, then you can import those variables as well: ``` from django.conf import settings send_mail( mail_subject, message, settings.EMAIL_FROM, [to_email], fail_silently=False, ) ``` Upvotes: 4 [selected_answer]<issue_comment>username_2: 1. This can be solved very easily by adding the following lines in **settings.py**: ``` EMAIL_HOST_USER = "<EMAIL>" DEFAULT_FROM_EMAIL = "<EMAIL>" ``` Make sure that both `EMAIL_HOST_USER` and `DEFAULT_FROM_EMAIL` are the same email ID. 2. `send_mail` will also work perfectly, but there is no facility to add attachments with that option. Upvotes: -1
2018/03/14
559
1,702
<issue_start>username_0: I am performing calculations on 2 numbers like below : ``` No1 = 263 No2 = 260 Decimal places = 2 Expected output = 98.86 ``` **Code :** ``` decimal output = 0; int decimalPlaces = 2; if (No1 > 0) output = ((100 * Convert.ToDecimal(((No2 * 100) / No1))) / 100); output = TruncateDecimal(output, decimalPlaces); // 98 public static decimal TruncateDecimal(decimal value, int precision) { decimal step = (decimal)Math.Pow(10, precision); decimal tmp = Math.Truncate(step * value); return tmp / step; } ``` The above code renders output = 98. When I divide 260/263 on a calculator I get 0.988593. **decimalPlaces = 2 :** Take 2 digits after decimal places **decimalPlaces = 2 :** Round off will also take from this position after 2 decimal places, i.e. round off should take from 593, rendering 98.86 ``` No1 = 117 No2 = 120 decimal places = 2 Expected Output = 97.50 ``` Can anybody please help me with this?<issue_comment>username_1: The problem is integer division. When you divide an integer by an integer, you will get an integer. You need to cast the values (at minimum one of them) to a decimal ``` output = Math.Round((100 * ((decimal)No2/(decimal)No1)),2); ``` Upvotes: 1 <issue_comment>username_2: try to change some parenthesis positions, from: ``` output = ((100 * Convert.ToDecimal(((No2 * 100) / No1))) / 100); ``` to ``` output = ((100 * ((Convert.ToDecimal(No2 * 100) / No1))) / 100); ``` In your code the division is calculated **before** decimal conversion, so you convert only the integer result of the division. Besides, use Math.Round instead of Math.Truncate in order to get your desired result. Upvotes: 0
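To make the integer-division pitfall concrete, here is a small Python sketch, with Python's `decimal` module standing in for C#'s `decimal` type (an illustration, not the original C#):

```python
from decimal import Decimal, ROUND_HALF_UP

no1, no2 = 263, 260

# Integer division drops the fractional part, which is why the original
# code printed 98 instead of 98.86: (260 * 100) / 263 is done in ints.
print((no2 * 100) // no1)  # 98

# Dividing as decimals first, then rounding (not truncating) to two
# places gives the expected 98.86, since 98.8593... rounds up.
pct = Decimal(no2 * 100) / Decimal(no1)
print(pct.quantize(Decimal('0.01'), rounding=ROUND_HALF_UP))  # 98.86
```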
2018/03/14
571
1,810
<issue_start>username_0: I have been using Visual Basic and Visual Studio to create a nice GUI with several tools for the print tech guys here. In this tool I would like the technicians to choose the correct .inf files and then populate a combobox with all the printer models listed in the file. I know how to allow them to choose the file, but I do not know how to make the Visual Basic program read out the printer models. In the inf files there could be several models listed like this: ``` ;WindowsXP(amd64) [Gestetner.NTamd64] "Gestetner MP C3002 PCL 6" "Gestetner MP C3502 PCL 6" [infotec.NTamd64] "infotec MP C3002 PCL 6" "infotec MP C3502 PCL 6" ``` Do you know how I can read out all the lines and then populate a combobox with only the printer models? Without the quotes of course. In PowerShell I do this by sending the code below, but I don't understand how I get the same result in VB. ``` $Regex = Get-Content $InfPath\*.inf | Select-String -Pattern '"[A-z0-9 ]{1,}"' Foreach ($Lines in $Regex) { $Lines = $Lines.Line.Split("""")[1] $Lines | Format-List } ```
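The extraction itself is just a matter of matching the quoted strings, independent of language. Here is a Python sketch of that approach using the sample from the question (the VB.NET counterpart would use the `System.Text.RegularExpressions.Regex` class with the same pattern):

```python
import re

# Sample [Models] lines from the .inf file in the question.
inf_text = '''
;WindowsXP(amd64)
[Gestetner.NTamd64]
"Gestetner MP C3002 PCL 6"
"Gestetner MP C3502 PCL 6"
[infotec.NTamd64]
"infotec MP C3002 PCL 6"
"infotec MP C3502 PCL 6"
'''

# Same idea as the PowerShell Select-String: capture what's inside the
# quotes, which drops the quotes themselves.
models = re.findall(r'"([^"]+)"', inf_text)
for model in models:
    print(model)
```

The resulting list is exactly what would be bound to the combobox.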
2018/03/14
327
1,166
<issue_start>username_0: What is the right way of checking if a nested property exists? ```js if (openResult.notification) { if (openResult.notification.payload) { if (openResult.notification.payload.additionalData) { if (openResult.notification.payload.additionalData.sensorOpenWarning) { // now do sth with openResult.notification.payload.additionalData } } } } ```
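One common alternative to the chained `if`s is a helper that walks the path and stops at the first missing link (in modern JavaScript, optional chaining with `?.` does this natively). Here is a Python sketch of such a helper; `get_nested` is a hypothetical name used for illustration:

```python
from functools import reduce

def get_nested(obj, path):
    # Follow each key in turn; yield None as soon as a link is missing.
    return reduce(
        lambda node, key: node.get(key) if isinstance(node, dict) else None,
        path, obj)

open_result = {'notification': {'payload': {'additionalData':
                                            {'sensorOpenWarning': True}}}}
path = ('notification', 'payload', 'additionalData', 'sensorOpenWarning')

print(get_nested(open_result, path))           # True
print(get_nested({'notification': {}}, path))  # None
```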
2018/03/14
553
2,077
<issue_start>username_0: Please help me, I'm stuck on a likely simple task. I'm learning webdriverjs, so I wrote some small code to register an account on the FitBit site at url: www.fitbit.com/signup. After inputting my email and password, a div popup shows up and asks the user to fill in all required fields. The problem comes here: I cannot 'sendkeys' or 'click' on the First Name field. Could you please help me out? My code is very simple: ``` const webdriver = require('selenium-webdriver'); const driver = new webdriver.Builder() .forBrowser('firefox') .build(); const By = webdriver.By; // ask the browser to open a page driver.navigate().to('https://www.fitbit.com/signup'); //Enter Email driver.findElement(By.className('field email')).sendKeys('<EMAIL>'); //Enter Password driver.findElement(By.className('field password')).sendKeys('<PASSWORD>'); //Check check box driver.findElement(By.id('termsPrivacyConnected')).click(); //Click Continue button driver.findElement(By.xpath("//button[@type='submit' and @tabindex='16']")).click(); //Input First Name driver.sleep(3000); driver.findElement(By.xpath('//input[@id="firstName"]')).click(); driver.findElement(By.xpath('//input[@id="firstName"]')).sendKeys('why not input'); ```<issue_comment>username_1: Depending on whether this is a popup, a frame or another window, you need a different SwitchTo() command. See: <http://toolsqa.com/selenium-webdriver/switch-commands/> or here: <http://learn-automation.com/handle-frames-in-selenium/> Now that you added your HTML, it is clear that you don't have to switch to fill in the popup fields. I translated your code to C# and used ChromeDriver instead of Firefox, and there were no problems filling in the Firstname and Lastname. So basically nothing is wrong with your code. Are you getting any error messages?
Upvotes: 1 <issue_comment>username_2: There is no window here; it's a tag. I posted its source code here for your reference: ``` Tell Us About Yourself ======================== We use this information to improve accuracy. Name ``` Upvotes: 0
2018/03/14
621
2,193
<issue_start>username_0: I'm getting this error ``` java.lang.NoSuchMethodError: com.google.common.base.Preconditions.checkState(ZLjava/lang/String;Ljava/lang/Object;Ljava/lang/Object;Ljava/lang/Object;)V at org.openqa.selenium.remote.service.DriverService.findExecutable(DriverService.java:124) at org.openqa.selenium.chrome.ChromeDriverService.access$000(ChromeDriverService.java:32) at org.openqa.selenium.chrome.ChromeDriverService$Builder.findDefaultExecutable(ChromeDriverService.java:137) at org.openqa.selenium.remote.service.DriverService$Builder.build(DriverService.java:339) at org.openqa.selenium.chrome.ChromeDriverService.createDefaultService(ChromeDriverService.java:88) at org.openqa.selenium.chrome.ChromeDriver.(ChromeDriver.java:123) ``` I'm building a bot for social media, so I'm using the Selenium libraries, and the bot works fine in a Java application. But when I copy the code to a web application, where I have a servlet listening to my Android application, and the servlet calls the bot code that uses Selenium, it throws that error when it reaches this line: ``` System.setProperty("webdriver.chrome.driver", "C:\\Users\\manue\\OneDrive\\Escritorio\\chromedriver.exe"); driver = new ChromeDriver(); ``` I had read that it can be because of the version of Guava I have, but my version of Guava is up to date, and I don't know why I'm getting this error. In my Maven pom.xml I have the dependencies like this ``` javax javaee-web-api 7.0 provided org.seleniumhq.selenium selenium-java 3.10.0 ``` Here is a screenshot of the structure of my project. [screenshot of my project](https://i.stack.imgur.com/ydaiS.png)<issue_comment>username_1: Looks like a classpath issue. What version of guava-X.jar do you have in your classpath? (most probably in WEB-INF/lib of the webapp). A possible solution is to declare the latest version of Guava in your pom.xml explicitly.
For Selenium 3.10.0, you need: ``` com.google.guava guava 24.0-jre ``` Upvotes: 1 <issue_comment>username_2: I found a solution to a possibly related error message when I removed the Google Collections jar from the lib path. (Source) Upvotes: 2
2018/03/14
715
2,431
<issue_start>username_0: I have 2 tables, parent and child. I am making a procedure which will take an id as input and give the output from both tables. If a row is not found in table 2, it will give an exception. For this, I used a cursor for the select statement and passed it as an out parameter in my procedure, but I am confused about what condition to give in the cursor's select statement so that it will check the id which is provided as input and display the rows from both tables. [screen shot of my code](https://i.stack.imgur.com/ABGbf.png) CODE: ``` create or replace procedure stu_proc (st_id in student_main.stu_id%type, st_cur out sys_refcursor) is cursor cur_st(st_id number) is select * from student_main sm join student_details sd on sm.id = sd.st_id where sm.id = st_id; st_id cur_st%rowtype; begin open cur_st; loop fetch cur_st into st_cur; exit when cur_st%notfound; end loop; close cursor; exception when no_data_found then dbms_output.put_line('Student details not found'); end; ```<issue_comment>username_1: you just need to pass the parameter to the cursor :) ``` cursor cur_st(stud_id number) is select * from student_main sm join student_details sd on sm.id = sd.st_id where sm.id = stud_id; ``` and on calling it ``` stu_proc(:stud_id, s_cur); ``` Upvotes: 0 <issue_comment>username_2: > > yes i can join but my main concern is to join only those rows whose > input will be provided by user as student id(used as IN parameter in > the procedure) > > > I can see a few issues with your code. 1) Parameters are not being passed to the cursor, so the input value you passed is actually not used while evaluating the cursor. 2) This variable declaration looks wrong: `st_id cur_st%rowtype`. It should be like `st_cur cur_st%rowtype`. Also note that since you are already using a `sys_refcursor` for returning the result of the query, there is no need to open another explicit cursor. Your code can be modified as below. Please try it:
``` CREATE OR REPLACE PROCEDURE stu_proc( col_st_id IN student_main.id%type, st_cur OUT sys_refcursor) IS BEGIN OPEN st_cur for SELECT sm.* FROM student_main sm JOIN student_details sd ON sm.id = sd.st_id WHERE sm.id = col_st_id; EXCEPTION WHEN no_data_found THEN dbms_output.put_line('Student details not found'); END; ``` Upvotes: 2 [selected_answer]
2018/03/14
1,682
6,280
<issue_start>username_0: I've been using Jupyter Notebooks for a couple of years now. I've just headed over to Jupyter Lab, but I've found the lack of shortcuts to be a burden. For example, I noticed that I can search for commands in the left hand palette. **But I can't seem to easily bind them to a keyboard shortcut. Is this even possible?** For example, I want to collapse the current cell output with "O" and collapse all code cells with "Shift O". [![enter image description here](https://i.stack.imgur.com/c8cZr.png)](https://i.stack.imgur.com/c8cZr.png)<issue_comment>username_1: You should edit the settings file in Settings/Keyboard Shortcuts. Here : [![Jupyter lab keyboard shortcut](https://i.stack.imgur.com/RZ8AH.png)](https://i.stack.imgur.com/RZ8AH.png) There you can specify any custom shortcut that you would like! Upvotes: 1 <issue_comment>username_2: This question is answered on GitHub [here](https://github.com/jupyterlab/jupyterlab/issues/4089). You can also look [here](https://github.com/jupyterlab/jupyterlab/blob/master/packages/notebook-extension/src/index.ts#L224) for the correct `command` names to enter in your keyboard shortcut user overrides because they are not always the same as what is shown in the Commands side-bar. The following are some that I use: ```js { "shortcuts": [ { "command": "notebook:hide-cell-outputs", "keys": [ "O" ], "selector": ".jp-Notebook:focus" }, { "command": "notebook:show-cell-outputs", "keys": [ "O", "O" ], "selector": ".jp-Notebook:focus" }, { "command": "notebook:hide-all-cell-outputs", "keys": [ "Ctrl L" ], "selector": ".jp-Notebook:focus" }, { "command": "notebook:hide-all-cell-code", "keys": [ "Shift O" ], "selector": ".jp-Notebook:focus" } ] } ``` which allows you to hide a cell output by pressing `O` once and showing the cell output by pressing `O` twice. The last one collapses all cell code with `Shift + O` as you requested. 
Upvotes: 6 [selected_answer]<issue_comment>username_3: I use these settings to bind the actions to move a cell up/down to Ctrl + Up/Down: ``` { // Move cell up "notebook:move-cell-up": { "selector": ".jp-Notebook:focus", "command": "notebook:move-cell-up", "keys": [ "Ctrl ArrowUp" ] }, // Move cell down "notebook:move-cell-down": { "selector": ".jp-Notebook:focus", "command": "notebook:move-cell-down", "keys": [ "Ctrl ArrowDown" ] } } ``` Upvotes: 3 <issue_comment>username_4: username_3 and username_1's answers above combined worked for me with minor modification for Mac. I hope this step-by-step iteration is helpful for someone like me who's a baby programmer. To summarize: 1. Open Advanced Settings Editor under the Settings tab, or `command ,` in Mac. 2. Navigate to Keyboard Shortcuts. You should see the screen username_1 answered with. 3. Use username_3's codes, however making one change in the key binding as `Ctrl Arrowup` is reserved in Mac to view all running applications (if you have it set up that way). Similarly, `Shift Arrowup` is for selecting multiple cells. As a result, I opted for `Alt Arrowup`. Notice the key on your Mac keyboard says alt/option. You have to refer to it as `Alt` to work. There you have it. Copy the codes below to User Overrides which is the right pane. 4. Re-open your notebook and test if it works as intended. 5. You can customize more keys in this fashion as long as it is defined [here](https://github.com/jupyterlab/jupyterlab/blob/master/packages/notebook-extension/src/index.ts#L224) on GitHub. For the most part, all that you need are the command IDs starting line 72. 
``` { // Move cell up "notebook:move-cell-up": { "selector": ".jp-Notebook:focus", "command": "notebook:move-cell-up", "keys": [ "Alt ArrowUp" ] }, // Move cell down "notebook:move-cell-down": { "selector": ".jp-Notebook:focus", "command": "notebook:move-cell-down", "keys": [ "Alt ArrowDown" ] } } ``` Upvotes: 2 <issue_comment>username_5: If you cannot save the "User Preferences" settings and get a syntax error > > [additional property error] command is not a valid property > > > you have probably missed to nest within the "shortcuts" list, as described [here](https://github.com/jupyterlab/jupyterlab/issues/6865). Additionally, to override an old setting you do the following, using Activate Next Tab and Activate Previous Tab as examples: ``` { "shortcuts": [ { "command": "application:activate-next-tab", "keys": [ "Ctrl Shift ]" ], "selector": "body", "disabled": true // disable old setting }, { "command": "application:activate-previous-tab", "keys": [ "Ctrl Shift [" ], "selector": "body", "disabled": true // disable old setting }, { "command": "application:activate-next-tab", "keys": [ "Ctrl 1" // enable new shortcut key ], "selector": "body" }, { "command": "application:activate-previous-tab", "keys": [ "Ctrl 2" // enable new shortcut key ], "selector": "body" } ] } ``` Now you can click save and refresh your browser for the new setttings to take effect. Upvotes: 1 <issue_comment>username_6: On keyboards shortcuts of advance settings this code works fine for moving cells up and down ``` { // Move cell up "shortcuts": [ { "selector": ".jp-Notebook:focus", "command": "notebook:move-cell-up", "keys": [ "Alt ArrowUp" ] }, // Move cell down { "selector": ".jp-Notebook:focus", "command": "notebook:move-cell-down", "keys": [ "Alt ArrowDown" ] } ] } ``` [![move cell up and down](https://i.stack.imgur.com/yxnBp.png)](https://i.stack.imgur.com/yxnBp.png) Upvotes: 4
2018/03/14
1,055
4,010
<issue_start>username_0: **Criteria:** I'm trying to connect to a secured web service API called BigCommerce using GWT RequestBuilder. This is my entry point: ``` public class GwtTest implements EntryPoint { String url = "http://my-url-api/api/v2/products.xml"; // not the original url i'm using @Override public void onModuleLoad() { url = URL.encode(url); RequestBuilder builder = new RequestBuilder(RequestBuilder.GET, url); builder.setHeader("Authorization", "Basic XXX"); // I generated this from Postman app on Chrome where things work perfectly builder.setHeader("Access-Control-Allow-Credentials", "true"); builder.setHeader("Access-Control-Allow-Origin", "http://127.0.0.1:8888/"); builder.setHeader("Access-Control-Allow-Methods", "POST, GET, UPDATE, OPTIONS"); builder.setHeader("Access-Control-Allow-Headers", "x-http-method-override"); builder.setHeader("Content-Type", "application/xml"); try { builder.sendRequest(url, new RequestCallback() { @Override public void onResponseReceived(Request request, Response response) { RootPanel.get().add(new HTML("Success: "+response.getText())); } @Override public void onError(Request request, Throwable exception) { RootPanel.get().add(new HTML("Failure (Response Error): "+exception)); } }); } catch (RequestException e) { RootPanel.get().add(new HTML("Failure Request Exception: "+e)); } } } ``` **Errors encountered:** I encounter the Same Origin Policy error at first: [![enter image description here](https://i.stack.imgur.com/IdPc7.jpg)](https://i.stack.imgur.com/IdPc7.jpg) Then After I disabled CORS on my browser I get the Perflight error: [![enter image description here](https://i.stack.imgur.com/rCaoJ.jpg)](https://i.stack.imgur.com/rCaoJ.jpg) **Work-around:** I was able to get results by disabling web security on Chrome but I don't think it's the right solution. 
**Trivial note:** Please guide me on this one, guys, because I'm new to GWT and BigCommerce. Thanks.<issue_comment>username_1: You could include a servlet in your webapp that acts as a proxy between the client and BigCommerce. An alternative is to run a reverse proxy such as Apache httpd that makes requests to the BigCommerce server appear to be on the same host as your webapp. Here is an example of an Apache httpd config file that will act as a reverse proxy for your webapp and an external service on a different host. To the browser, both the webapp and the external service will appear to be running on the same host, which is what you want. ``` # file: /etc/httpd/conf.d/mywebapp.conf ProxyPreserveHost On # Forward requests to path /SomeExternalService to an external service ProxyPass /SomeExternalService http://externalhost/ # Forward all other requests to a local webserver hosting your webapp, # such as Tomcat listening on port 8081 ProxyPass / http://127.0.0.1:8081/ ProxyPassReverse / http://127.0.0.1:8081/ ``` Upvotes: 0 <issue_comment>username_2: Use a proxy servlet. This is also the solution the GWT OpenLayers wrapper uses. <https://github.com/geosdi/GWT-OpenLayers/blob/c3becee0cdd5eefdc40b18e4999c2744dc23363a/gwt-openlayers-server/src/main/java/org/gwtopenmaps/openlayers/server/GwtOpenLayersProxyServlet.java> Upvotes: 1 <issue_comment>username_3: Based on Rob Newton's answer, if your application is pure front end, you can host your files on nginx and add a proxy\_pass directive to your configuration, for example: ``` location ~* ^/bigcommerce/(.*) { proxy_pass http://api.bigcommerce.com/$1$is_args$args; } ``` So whenever you call `http://hostaddress/bigcommerce/something`, this will be forwarded to `http://api.bigcommerce.com/something`. Headers are not forwarded in this config; you can add more directives for that. Upvotes: 1
2018/03/14
1,516
5,973
<issue_start>username_0: I have a Workday studio integration to send a GET request to a vendor's API using an HTTP component, but I'm receiving the error below. The vendor doesn't have a username/password to connect. I have to connect using a token. Does anyone know how to make this work from Studio to GET data? **Reason: org.springframework.beans.factory.NoSuchBeanDefinitionException: No bean named 'http-token-auth' is defined** I have sent the request in many different ways: hard coding the URL with the token, setting headers with the token. Below are my different attempts. [![enter image description here](https://i.stack.imgur.com/2Jjzq.jpg)](https://i.stack.imgur.com/2Jjzq.jpg) I’m not sure what Http authorization this is supposed to use. There is no username/password, just a token and a URL to post using CURL. Below is what studio looks like with the HTTP properties. [![enter image description here](https://i.stack.imgur.com/yvTZx.png)](https://i.stack.imgur.com/yvTZx.png) Below is what is set on the Header. [![enter image description here](https://i.stack.imgur.com/NEhDR.png)](https://i.stack.imgur.com/NEhDR.png) Also, I'm able to GET data using SoapUI. Below is a snip of the request in SoapUI. [![enter image description here](https://i.stack.imgur.com/z8ZeY.png)](https://i.stack.imgur.com/z8ZeY.png) Below is the JSON raw request in SoapUI that is successful in getting data from the API. [![enter image description here](https://i.stack.imgur.com/qlVyV.png)](https://i.stack.imgur.com/qlVyV.png) Any help is much appreciated!! Thank you, -Remo<issue_comment>username_1: To preface; I'm not familiar with Workday Studio, and there don't seem to be any public docs, so there may be some nuance here that this answer misses. 
Summary ------- Workday, your code, or possibly some library being used is referencing a bean (see Spring docs: [Core Technologies](https://docs.spring.io/spring/docs/current/spring-framework-reference/core.html)) that does not exist or can not be found. If you're not writing any Java code whatsoever here, it's almost certainly either a configuration issue or bug in Workday Studio. Below are some observations based on the information you've provided. But first, a wild guess. Wild Guess ---------- It seems likely that Workday is handling this a little differently than cURL or SoapUI. cURL and SoapUI are doing something like the following: * Send GET request to URL with params, and include API key in header * Server sends desired response However, it sounds like Workday is doing something more like: * Send GET request assuming a pre-auth scenario, using challenge-type: 'token' * Server responds with the correct auth-type that its framework (presumably Rails) uses for tokens; 'http-token-auth' * Workday (wrongly) assumes that the server is using the Spring framework, and tries to load the correct auth-type bean based on that response * Spring framework barfs because there's no such bean I imagine there's some way to get Workday to play nicely with a standard REST API, and just supply the API key to the vendor's server as it expects, rather than trying to do a challenge/response. If this isn't it, there are some more weedy possibilities below. Odd Bean Name ------------- The bean name specified in the error is `http-token-auth`, which is in kebab-case. The convention for naming beans is (lower-) camelCase, so wherever that's specified may have just used the wrong casing. This could be in the Workday Studio configuration, the XML config file, or some custom code you've written, if any. Configuration ------------- If the bean name is correct, then there's likely some other configuration issue. 
Spring can implicitly detect candidate components by scanning the classpath (see Spring docs: [Classpath scanning and managed components](https://docs.spring.io/spring/docs/current/spring-framework-reference/core.html#beans-classpath-scanning)) or loading it from the project XML. The issue could be: * The build path is wrong (see [this answer by esaj](https://stackoverflow.com/a/9239603) if you're unfamiliar) * The classpath is wrong, so Spring just doesn't see it. This seems like a Workday-specific config in this case. * The bean is in the project XML, but nested. In that case, it'd only be accessible to the enclosing bean. One solution to this is to activate the corresponding profile. * A packaging issue; if the bean isn't being included in the resulting deployed jar, then there will be issues. [This solution by dawrutowicz](https://stackoverflow.com/a/12957419) should apply in a number of cases. * Project configuration; all the settings in your screenshots look exactly correct and should work fine, so there might be something hidden in your project settings Bug in Workday Studio --------------------- This seems somewhat less likely, but is always a possibility. If you haven't written any Java code whatsoever, then there's something either in the Workday code that's serving up this unexpected 'http-token-auth' or inappropriately accepting it from somewhere else and trying to load up a bean using it. Final Thoughts -------------- Since you're trying to work with a vendor's API, I'd strongly recommend you try to collaborate with one of the engineers there. Guaranteed, they have at least one engineer who has dealt with complicated integration issues before. They'll have more details about their API, and might be able to give you more direct input on any configuration/code you'd be able to share. 
Upvotes: 3 [selected_answer]<issue_comment>username_2: I had an identical error message reported when sending a Rest POST request to a 3rd party web service from Workday Studio using Bearer authentication. The resolution was to set the header output type to "message" instead of a "rootpart". It is not a bug in Workday Studio. [Workday Studio set header output type setting](https://i.stack.imgur.com/4EI55.png) Regards, username_2 Upvotes: 0
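For what it's worth, the working SoapUI request above boils down to a plain GET with the token carried in a header. A hypothetical Python sketch of that raw request shape follows; the URL, header name, and token value are placeholders, since the vendor's actual scheme is not shown in the thread.

```python
# Shape of a token-authenticated GET: no username/password handshake,
# just an Authorization header on the request. All values are placeholders.
import urllib.request

req = urllib.request.Request(
    "https://vendor.example.com/api/resource?mode=json",
    headers={"Authorization": "Token token=xxxxxxxx"},
    method="GET",
)
print(req.get_method(), req.get_header("Authorization"))
```

Sending `req` through `urllib.request.urlopen` would perform the call; the snippet only builds it, to show there is no challenge/response step involved.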
2018/03/14
1,305
3,341
<issue_start>username_0: I'm trying to delete the 6th, 7th and 8th characters from each line. Below is the file content. Actual output: ``` #cat test 18:40:12,172.16.70.217,UP 18:42:15,172.16.70.218,DOWN ``` Expected output after formatting: ``` #cat test 18:40,172.16.70.217,UP 18:42,172.16.70.218,DOWN ``` I tried the following, with no luck: ``` #awk -F ":" '{print $1":"$2","$3}' test 18:40,12,172.16.70.217,UP #sed 's/^\(.\{7\}\).\(.*\)/\1\2/' test { Here I can remove only one character } 18:40:1,172.16.70.217,UP ``` cut also failed: ``` #cut -d ":" -f1,2,3 test 18:40:12,172.16.70.217,UP ``` I need to delete the 6th, 7th and 8th characters in each line. Suggestions please<issue_comment>username_1: With GNU cut you can use the `--complement` switch to remove characters 6 to 8: ``` cut --complement -c6-8 file ``` Otherwise, you can just select the rest of the characters yourself: ``` cut -c1-5,9- file ``` i.e. characters 1 to 5, then 9 to the end of each line. With awk you could use substrings: ``` awk '{ print substr($0, 1, 5) substr($0, 9) }' file ``` Or you could write a regular expression, but the result will be more complex. For example, to remove the last three characters from the first comma-separated field: ``` awk -F, -v OFS=, '{ sub(/...$/, "", $1) } 1' file ``` Or, using sed with a capture group: ``` sed -E 's/(.{5}).{3}/\1/' file ``` Capture the first 5 characters and use them in the replacement, dropping the next 3. Upvotes: 4 <issue_comment>username_2: With awk: ``` echo "18:40:12,172.16.70.217,UP" | awk '{ $0 = ( substr($0,1,5) substr($0,9) ) ; print $0}' ``` Regards! Upvotes: 0 <issue_comment>username_3: The solutions below are generic and assume no knowledge of any format. They just delete characters 6, 7 and 8 of any line. 
**`sed`:** ``` sed 's/.//8;s/.//7;s/.//6' # from high to low sed 's/.//6;s/.//6;s/.//6' # from low to high (positions shift left after each delete) sed 's/\(.....\).../\1/' sed -E 's/(.{5}).../\1/' ``` > > `s/BRE/replacement/n` :: substitute `n`th occurrence of `BRE` with `replacement` > > > **`awk`:** ``` awk 'BEGIN{OFS=FS=""}{$6=$7=$8="";print $0}' awk -F "" '{OFS=$6=$7=$8="";print}' awk -F "" '{OFS=$6=$7=$8=""}1' ``` These are three variants of the same idea: with an empty field separator `FS`, `awk` treats each character as a field. We empty fields `6`, `7` and `8`, and reprint the line with an output field separator `OFS` which is empty. **`cut`:** ``` cut -c -5,9- cut --complement -c 6-8 ``` Upvotes: 2 <issue_comment>username_4: If you are running on bash, you can use the string manipulation functionality of it instead of having to call awk, sed, cut or whatever binary: ``` while read STRING do echo ${STRING:0:5}${STRING:8} done < myfile.txt ``` ${STRING:0:5} represents the first five characters of your string, ${STRING:8} represents everything from offset 8 (the 9th character) to the end of the line. This way you cut out characters 6, 7 and 8 ... Upvotes: 0 <issue_comment>username_5: It's structured text; why count the chars if you can describe them? ``` $ awk '{sub(":..,",",")}1' file 18:40,172.16.70.217,UP 18:42,172.16.70.218,DOWN ``` remove the seconds. Upvotes: 2 <issue_comment>username_6: Just for fun, perl, where you can assign to a substring ``` perl -pe 'substr($_,5,3)=""' file ``` Upvotes: 2
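For comparison with the tools above, the same keep-characters-1-to-5-then-9-onward logic is a single slice expression in Python (a hypothetical illustration, not part of the thread):

```python
# Drop characters 6-8 (1-based) from each line: keep indices 0-4
# (the first five characters) and everything from index 8 onward.
def drop_seconds(line):
    return line[:5] + line[8:]


lines = [
    "18:40:12,172.16.70.217,UP",
    "18:42:15,172.16.70.218,DOWN",
]
cleaned = [drop_seconds(l) for l in lines]
print(cleaned)  # ['18:40,172.16.70.217,UP', '18:42,172.16.70.218,DOWN']
```

Like the `substr($0, 1, 5) substr($0, 9)` awk answer, this works by position, not by pattern, so it assumes a fixed-width prefix.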
2018/03/14
369
1,168
<issue_start>username_0: I want to extract text from a td tag containing br tags inside. ``` from bs4 import BeautifulSoup html = "<td>This is a <br>breakline</td>" soup = BeautifulSoup(html, 'html.parser') print(soup.td.string) ``` Actual Output: `None` Expected output: `This is a breakline`<issue_comment>username_1: This will give you what you are looking for: ``` print(soup.td.text) ``` This is for the specific `td` tag. Otherwise you also have: ``` print(soup.text) ``` Upvotes: 1 [selected_answer]<issue_comment>username_2: From Beautiful Soup [document](https://beautiful-soup-4.readthedocs.io/en/latest/#string): > > If a tag contains more than one thing, then it’s not clear what `.string` should refer to, so `.string` is defined to be None: > > > And if you want the text part *([document](https://beautiful-soup-4.readthedocs.io/en/latest/#get-text))*: > > If you only want the text part of a document or tag, you can use the `get_text()` method. It returns all the text in a document or beneath a tag, as a single Unicode string: > > > So you can use the following: ``` print(soup.get_text()) ``` For a specific tag: `soup.td.get_text()` Upvotes: 2
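The same multiple-children ambiguity can be seen with Python's standard-library XML parser, which makes a dependency-free way to observe the pattern (a hypothetical parallel, not bs4 itself): `.text` stops at the first child element, while joining `itertext()` recovers everything, which is the role `.get_text()` plays in Beautiful Soup:

```python
# Stdlib parallel to .string vs .get_text(): with a <br/> child,
# .text holds only the text before the first child element, while
# itertext() walks every text node under the tag.
import xml.etree.ElementTree as ET

td = ET.fromstring("<td>This is a <br/>breakline</td>")
print(repr(td.text))           # 'This is a '  (stops at <br/>)
print("".join(td.itertext()))  # This is a breakline
```

Note ElementTree needs well-formed XML (`<br/>`, not `<br>`), unlike bs4's lenient HTML parser.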
2018/03/14
392
1,340
<issue_start>username_0: In the following code, as none of the arguments is const, I can't understand why the second overload is called in the 3 following cases. ``` #include <iostream> #include <string> #include <algorithm> using namespace std; void ToLower( std::string& ioValue ) { std::transform( ioValue.begin(), ioValue.end(), ioValue.begin(), ::tolower ); } std::string ToLower( const std::string& ioValue ) { std::string aValue = ioValue; ToLower(aValue); return aValue; } int main() { string test = "test"; cout<<"Hello World" << endl; // case 1 cout << ToLower("test") << endl; // case 2 cout << ToLower(static_cast<const string&>(test)) << endl; // case 3 cout << ToLower(string(test)) << endl; } ```<issue_comment>username_1: The reasoning is the return type is std::string for the second function but void for the first. std::cout << (void) << std::endl is not a valid set of operations. std::cout << (std::string) << std::endl is. If you returned a std::string& from the first function you'd probably see #2 & #3 use your first function call. Upvotes: -1 <issue_comment>username_2: In all 3 cases you are creating a temporary `std::string`; this is an unnamed object, an R-value. R-values aren't allowed to bind to non-const l-value references (`T&`) and so only the overload taking `const std::string& ioValue` is valid. Upvotes: 4 [selected_answer]
2018/03/14
1,407
5,255
<issue_start>username_0: I am trying to learn synchronization and got stuck. According to what I have learned, the following code should give 8000 as the final result, but I am getting random results like below. ``` package threads; import java.time.LocalDateTime; public class A implements Runnable { String name; static Integer j=0; A(String name){ this.name=name; } @Override public synchronized void run() { for(int i=1;i<=1000;i++){ synchronized(this){ A.j++; } } System.out.println(j); } } package threads; public class MainClass { public static void main(String args[]){ Thread t1=new Thread(new A("i am thread A ")); Thread t2=new Thread(new A("i am thread B ")); Thread t3=new Thread(new A("i am thread C ")); Thread t4=new Thread(new A("i am thread D ")); Thread t5=new Thread(new A("i am thread E ")); Thread t6=new Thread(new A("i am thread F ")); Thread t7=new Thread(new A("i am thread G ")); Thread t8=new Thread(new A("i am thread H ")); t1.setPriority(Thread.MAX_PRIORITY); t8.setPriority(Thread.MIN_PRIORITY); t1.start(); t2.start(); t3.start(); t4.start(); t5.start(); t6.start(); t7.start(); t8.start(); try { t1.join(); t2.join(); t3.join(); t4.join(); t5.join(); t6.join(); t7.join(); t8.join(); } catch (InterruptedException e) { // TODO Auto-generated catch block e.printStackTrace(); } } } ``` I am still getting output like 1293 2214 1403 3214 4214 5214 6224 7037. Can anyone explain how to achieve synchronization and what is going wrong here?
``` package bookmarks; public class A implements Runnable { String name; static Integer j = 0; private static Object lock = new Object(); A(String name) { this.name = name; } @Override public void run() { for (int i = 1; i <= 1000; i++) { synchronized (lock) { A.j++; } } System.out.println(j); } } ``` Upvotes: 4 [selected_answer]<issue_comment>username_2: There are a couple of issues in the code. * Issue 1: Lock object added in `synchronized(..)` is not shared among all thread instances * Issue 2: `System.out.println(j);` line should be in the end after `t8.join();` otherwise, you will be given 8 times output. The rectified code ``` public class A implements Runnable { String name; static Integer j = 0; static Object lockObject = new Object(); A(String name) { this.name = name; } @Override public void run() { for (int i = 1; i <= 1000; i++) { synchronized (lockObject) { A.j++; } } } public static void main(String args[]) { Thread t1 = new Thread(new A("i am thread A ")); Thread t2 = new Thread(new A("i am thread B ")); Thread t3 = new Thread(new A("i am thread C ")); Thread t4 = new Thread(new A("i am thread D ")); Thread t5 = new Thread(new A("i am thread E ")); Thread t6 = new Thread(new A("i am thread F ")); Thread t7 = new Thread(new A("i am thread G ")); Thread t8 = new Thread(new A("i am thread H ")); t1.setPriority(Thread.MAX_PRIORITY); t8.setPriority(Thread.MIN_PRIORITY); t1.start(); t2.start(); t3.start(); t4.start(); t5.start(); t6.start(); t7.start(); t8.start(); try { t1.join(); t2.join(); t3.join(); t4.join(); t5.join(); t6.join(); t7.join(); t8.join(); } catch (InterruptedException e) { // TODO Auto-generated catch block e.printStackTrace(); } System.out.println(A.j); } } ``` Upvotes: 1 <issue_comment>username_3: It is a common mistake to think that `synchronized` means "critical section", and that no other threads will run while a synchronized block is running. 
But synchronized blocks are only exclusive with respect to other synchronized blocks that *lock on the same lock*. The answers you got ("use a common lock") are right, but didn't really tell you why. The other common mistake is to think about `synchronized` as protecting *code*, when really you should be thinking about it protecting *data*. Any shared mutable data should be guarded by one and only one lock, and you should know exactly what that lock is. (The more complex your locking scheme, the less likely you'll know what locks guard what data.) So you should always be thinking in terms of "data X is guarded by lock L", and then make sure you acquire lock L whenever you access (read **or** write) that data. Upvotes: 4
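The shared-lock rule is language-independent. As a hypothetical illustration outside Java, here is the same experiment in Python: eight threads increment one counter 1000 times each, every increment guarded by a single lock object shared by all of them, so the total is always 8000.

```python
# Eight threads, ONE lock shared by all of them, guarding one shared
# counter. Because every increment happens inside the same lock, the
# final value is deterministic: 8 * 1000 = 8000.
import threading

counter = 0
lock = threading.Lock()          # the single shared lock


def worker():
    global counter
    for _ in range(1000):
        with lock:               # same lock object for every thread
            counter += 1


threads = [threading.Thread(target=worker) for _ in range(8)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(counter)  # 8000
```

Giving each thread its own lock here would be the Python analogue of `synchronized(this)` on eight different `A` instances: every thread acquires its own lock uncontested, and the race returns.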
2018/03/14
1,444
4,934
<issue_start>username_0: I'm creating a POC for a Ganttproject for my company. I'm using the Resource Gantt from Anychart version 8.1.0 in an Angular 5 web project. And I've set up most of the chart finding different documentations and suggestions across the internet, but I got stuck on the click event now. Following the examples from Anychart i found following code to listen to an event on the chart: ``` chart.listen("rowClick", function(event) { var msg = event['item'].get('name'); if (event['period']) msg += '\nPeriod: ' + event['period']['id']; console.log(msg); }); ``` If we go [to the anychart playground](https://playground.anychart.com/docs/8.1.0/samples/GANTT_Interactivity_04) we can change this code for the above mentioned snippet: ``` chart.listen('rowSelect', function(e) { e.item.remove(); }); ``` So here we see the event getting the item and period properties. But when I do this in my POC project the item and period properties are missing: This is the code snippet: ``` chart: anychart.charts.Gantt; ngOnInit() { this.chart = anychart.ganttResource(); } ngAfterViewInit() { this.ganttSvc.getGanttData(this.tokenManager.getUserId()).subscribe((values: GanttDataRow[]) => { // set data to grid const treedata = anychart.data.tree(values, 'as-table'); this.chart.data(treedata); // add to container and draw this.chart.container(this.container.nativeElement).draw(); // scale this.chart.zoomTo('day', 1, 'first-date'); // eventlisteners this.chart.listen('rowSelect', function() { event.preventDefault(); this.clickedDetail(event); }); }); clickedDetail(event) { if (event['period']) { this.selectedGanttItem = event['period']; this.toggleSideBar(); } ``` } Any ideas on what I'm doing wrong?<issue_comment>username_1: Unfortunately, there's a bug in index.d.ts related to chart listeners. In the current version of index.d.ts file the callback function of a chart listener doesn't take as an argument event object. That's why the item and period properties are missing. 
This issue is already known and our dev team is preparing a fix for it, which will be available with the release of version 8.2.0. We will notify you via this thread when the fix becomes available. Upvotes: 0 <issue_comment>username_1: Our team has prepared a dev preview of the 8.2.1 update and you can try it in your projects. This update includes many bug fixes to index.d.ts. To get this update, replace the current AnyChart dependency in ‘package.json’ with the following: "anychart": "8.2.1-rc.0" Or you can download it manually via NPM: $ npm install anychart@8.2.1-rc.0 In this update, you can access the 'event' object in a listener. ``` this.chart.listen('rowSelect', function(event) { event.preventDefault(); }); ``` Very soon our team will provide a public release of the 8.2.1 update. Upvotes: 0 <issue_comment>username_1: Please try the following code; this works with anychart@8.2.1-rc.0 perfectly: ``` chart: anychart.charts.Gantt = null; ngOnInit() { const rawData = [ { 'name': 'Activities', 'actualStart': Date.UTC(2007, 0, 25), 'actualEnd': Date.UTC(2007, 2, 14), 'children': [ { 'name': 'Draft plan', 'actualStart': Date.UTC(2007, 0, 25), 'actualEnd': Date.UTC(2007, 1, 3) }, { 'name': 'Board meeting', 'actualStart': Date.UTC(2007, 1, 4), 'actualEnd': Date.UTC(2007, 1, 4) }, { 'name': 'Research option', 'actualStart': Date.UTC(2007, 1, 4), 'actualEnd': Date.UTC(2007, 1, 24) }, { 'name': 'Final plan', 'actualStart': Date.UTC(2007, 1, 24), 'actualEnd': Date.UTC(2007, 2, 14) } ] }]; const treeData = anychart.data.tree(rawData, 'as-tree'); this.chart = anychart.ganttProject(); this.chart.data(treeData); this.chart.listen('rowSelect', function (event) { console.log(event['item'].get('name')); }); } ngAfterViewInit() { this.chart.container(this.container.nativeElement); this.chart.draw(); } ``` Upvotes: 2 [selected_answer]
I had following code: ``` this.chart.listen('rowSelect', () => { this.clickedDetail(event); }); ``` as soon as I changed it to following it worked: ``` this.chart.listen('rowSelect', (event) => { this.clickedDetail(event); }); ``` Before I had hold of the MouseClickEvent itself and not the emitted event of the chart. Thanks for the example it helped a lot. Just one small question though. I noticed you use rowSelect, and I was using rowClick before. Are they very different in behaviour? Upvotes: 1
2018/03/14
524
1,907
<issue_start>username_0: I have installed the Datadog agent on one of my virtual machines. When I altered my NSG so that all "Outbound-connections" are denied, I was still able to see the "CPU metric" getting updated on the Datadog dashboard. I would like to know how this information is getting from Azure to Datadog.<issue_comment>username_1: Datadog, like OMS and other monitoring software, uses the Azure VM agent to stream the information. Once this agent is installed on the system we are able to gather the info needed. The VM agent is not something that goes out over the internet like other connections. Hence, you should still see the reporting available. Rather, it should be a direct connection between the Hyper-V manager and the VM itself, therefore bypassing any NSG rules you would have in place. Upvotes: 2 [selected_answer]<issue_comment>username_2: > > I have installed data dog agent on one of my virtual machines > > > The Datadog agent collects system metrics and **forwards** them to Datadog. The Datadog agent works like this: [![enter image description here](https://i.stack.imgur.com/wVK1p.png)](https://i.stack.imgur.com/wVK1p.png) You can also perform a network capture on your Azure VM to see the agent's behavior in detail. Here is the network capture in my test VM: [![enter image description here](https://i.stack.imgur.com/lSPzj.png)](https://i.stack.imgur.com/lSPzj.png) We can see that the **Datadog agent forwards over HTTPS (443) to Datadog HQ**. After you deny port 443 in NSG outbound rules, Datadog will not get your metrics: [![enter image description here](https://i.stack.imgur.com/XURQB.png)](https://i.stack.imgur.com/XURQB.png) For more information about the Datadog agent, please refer to this official [article](https://help.datadoghq.com/hc/en-us/articles/203034929-What-is-the-Datadog-Agent-What-resources-does-it-consume-). Upvotes: 0
2018/03/14
443
1,543
<issue_start>username_0: I started learning AngularJS with the w3schools tutorials. Below is an example that was given for writing custom services: ``` app.service('hexafy', function() { this.myFunc = function (x) { return x.toString(16); } }); app.controller('myCtrl', function($scope, hexafy) { $scope.hex = hexafy.myFunc(255); }); ``` Why doesn't the below work? ``` app.service('hexafy', function(x) { return x.toString(16); }); app.controller('myCtrl', function($scope, hexafy) { $scope.hex = hexafy(255); }); ``` Here I am letting the function accept an argument and processing it without creating a new function within it. But this approach doesn't seem to work.<issue_comment>username_1: A service is supposed to be an *instantiable* function, so Angular creates an instance of the service for you, as with a constructor function. If you just want to register a function to call without constructing an instance, use a value: ``` app.value('hexafy', function(x) { return x.toString(16); }); ``` Upvotes: 3 [selected_answer]<issue_comment>username_2: You need a method or a value to access the service. It is better to use a method here; I believe the code below does what you need. ``` app.service('hexafy', function() { this.toString = function(x){ return x.toString(16); } }); app.controller('myCtrl', function($scope, hexafy) { $scope.hex = hexafy.toString(255); }); ``` Or you can register it as a value, as in the answer above. Upvotes: 1
2018/03/14
1,238
4,622
<issue_start>username_0: I'm trying to call API using Retrofit and Android architecture components but I'm getting this error > > java.lang.RuntimeException: Failed to invoke public android.arch.lifecycle.LiveData() with no args > > > this is the data class for the response ``` data class ForecastResult(val city: City, val list: List) ``` Services Interface ``` interface ServicesApis { // using dummy data @GET("data/2.5/forecast/") fun getForecast(@Query("APPID") APPID: String = "xxxxxxxx" , @Query("q") q: String = "94043", @Query("mode") mode: String = "json", @Query("units") units: String = "metric" , @Query("cnt") cnt: String = "7"): Call> } ``` and the api implementation ``` class WeatherRepoImpl : WeatherRepo { override fun getDailyForecast(): LiveData> { val forecast: MutableLiveData> = MutableLiveData() RestAPI.getAPIsrevice().getForecast().enqueue(object : Callback> { override fun onResponse(call: Call>?, response: Response>?) { when { response!!.isSuccessful -> { forecast.postValue(Resource.success(response.body()?.value)) } else -> { val exception = AppException(responseBody = response.errorBody()) forecast.postValue(Resource.error(exception)) } } } override fun onFailure(call: Call>?, t: Throwable?) { val exception = AppException(t) forecast.postValue(Resource.error(exception)) } }) return forecast } } ``` appreciate your help!<issue_comment>username_1: Yes so probably your API's response is not being correctly serialized. Anyway, it doesn't make sense to wrap your `LiveData` to an API response. Just have your exposed object like so: ``` someLivedataObject: LiveData // So whenever the APIs response comes in: someLivedataObject.postValue(YourApiResponseModel) ``` If for some reason doesn't work use ``` someLivedataObject.setValue(YourApiResponseModel) ``` You can read more about the difference between these on the documentation, notice that I'm calling the `LiveData#setValue()` method, **do not** use Kotlin's deference setter method. 
And since your view is observing changes to your exposed `someLivedataObject` the UI is updated. --- This is similar to the RxJava counterpart where API responses with Observable wrapper don't really make sense, it's not a continuous stream of data, hence makes more sense to use `Single`. --- Take these suggestions with a grain of salt, I'm not **fully aware** of your application's use cases and **there are exceptions to these rules**. Upvotes: 0 <issue_comment>username_2: I use `LiveDataAdapter` to convert `Call` to `LiveData`, that way you don't need wrap your Retrofit response as `Call>`. This adapter is similar to Rx adapter by Jake. Get the `LiveDataAdapterFactory` and `LiveDataAdapter` from [here](https://github.com/googlesamples/android-architecture-components/blob/master/GithubBrowserSample/app/src/main/java/com/android/example/github/util/LiveDataCallAdapterFactory.java). After adding the Adapter, you'll need to set it: ``` Retrofit.Builder() .addConverterFactory(GsonConverterFactory.create(gson)) .addCallAdapterFactory(LiveDataCallAdapterFactory()) .baseUrl(BASE_URL) .client(httpClient) .build() ``` Self Promotion: I have made a video on the same over at: <https://caster.io/lessons/android-architecture-components-livedata-with-retrofit> Upvotes: 0 <issue_comment>username_3: Remove LiveData from API call and create `ViewModel` class have contained in `MutableLiveData` Object. 
For Example: API Call Definition something like this: ***(Remove LiveData)*** ``` @GET("data/2.5/forecast/") fun getForecast(@Query("APPID") APPID: String = "xxxxxxxx" , @Query("q") q: String = "94043", @Query("mode") mode: String = "json", @Query("units") units: String = "metric" , @Query("cnt") cnt: String = "7"): Call ``` Create One ViewModel Class: ``` class YourViewModel: ViewModel() { var allObjLiveData = MutableLiveData() fun methodForAPICall(){ mApiService?.getForecast(.....)?.enqueue(object : Callback { override fun onFailure(call: Call, t: Throwable) { allObjLiveData.value = null } override fun onResponse(call: Call, response: Response) { allObjLiveData.value=response.body() //Set your Object to live Data } }) } } ``` Now in your activity or fragment initialization ViewModel class. And observe to live data. ``` yourViewModelClassObj?.allObjLiveData?.observe(this, Observer { //whenever object change this method call every time. } }) ``` So You can use livedata in ViewModel class. Upvotes: 2
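The `MutableLiveData` pattern used in these answers is, at its core, an observable value holder: setting the value notifies registered observers. A hypothetical, minimal stand-in (not the Android class) makes the mechanics visible:

```python
# Minimal stand-in for MutableLiveData: a value holder that calls
# its observers whenever the value is set. Hypothetical sketch,
# not the Android class.
class MutableLiveData:
    def __init__(self):
        self._value = None
        self._observers = []

    def observe(self, fn):
        self._observers.append(fn)

    def set_value(self, value):
        self._value = value
        for fn in self._observers:
            fn(value)


seen = []
live = MutableLiveData()
live.observe(seen.append)          # the "Observer { ... }" in the fragment
live.set_value({"city": "Mountain View", "temp_c": 21})
print(seen)  # [{'city': 'Mountain View', 'temp_c': 21}]
```

This is why the ViewModel approach works: the network callback only calls the setter, and whatever UI registered an observer is updated, with no LiveData anywhere in the API response type.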
2018/03/14
1,735
6,056
<issue_start>username_0: I am missing something simple but I can't quite figure it out. I need to essentially get the Max Count of a Column. ``` SELECT S.Type AS Type, S.Version, COUNT(R.FK_StoveNbr) AS TimesRepaired FROM STOVE AS S FULL JOIN STOVE_REPAIR AS R ON S.SerialNumber = R.FK_StoveNbr GROUP BY S.Type, S.Version ORDER BY TimesRepaired DESC; ``` This gives me the information I need, but now I need to make it so it only displays the Type with the highest number from the COUNT(R.FK\_StoveNbr). Some information: ``` [STOVE_REPAIR]( [RepairNbr] [int] NOT NULL, [FK_StoveNbr] [int] NOT NULL, [RepairDt] [smalldatetime] NOT NULL, [Description] [varchar](500) NULL, [Cost] [numeric](7, 2) NULL, [FK_CustomerID] [int] NOT NULL, [FK_EmpID] [int] NULL, [STOVE]( [SerialNumber] [int] NOT NULL, [Type] [char](15) NOT NULL, [Version] [char](15) NULL, [DateOfManufacture] [smalldatetime] NULL, [Color] [varchar](12) NULL, [FK_EmpId] [int] NULL, ``` Needed output: ``` type version times repaired --------- ----- -------------- FiredAlways 2 2 ``` The Output I have: ``` type version times repaired --------- ----- -------------- FiredAlways 2 2 FiredAlways 1 1 FiredNow 2 1 FiredNow 3 1 FiredNow 1 1 FiredAlways 4 0 FiredAlways 5 0 FiredAlways 3 0 FiredAtCamp 3 0 FiredAtCamp 1 0 FiredAtCamp 2 0 ```<issue_comment>username_1: If you want to select the N records that have the highest count, you can do the following: ``` ;WITH CountResults AS ( SELECT S.Type AS Type, S.Version, COUNT(R.FK_StoveNbr) AS TimesRepaired FROM STOVE AS S FULL JOIN STOVE_REPAIR AS R ON S.SerialNumber = R.FK_StoveNbr GROUP BY S.Type, S.Version ), MaxCountResults AS ( SELECT MaxCountResult = MAX(C.TimesRepaired) FROM CountResults AS C ) SELECT C.Type, C.Version, C.TimesRepaired FROM CountResults AS C INNER JOIN MaxCountResults AS M ON C.TimesRepaired = M.MaxCountResult ORDER BY C.TimesRepaired DESC ``` Basically you determine the highest number first, then filter the whole results by that number. 
If you just need 1 with the highest, a simple `TOP` should suffice: ``` SELECT TOP 1 S.Type AS Type, S.Version, COUNT(R.FK_StoveNbr) AS TimesRepaired FROM STOVE AS S FULL JOIN STOVE_REPAIR AS R ON S.SerialNumber = R.FK_StoveNbr GROUP BY S.Type, S.Version ORDER BY TimesRepaired DESC ``` Upvotes: 1 <issue_comment>username_2: Use `TOP 1` or `TOP (1) WITH TIES`. The latter is used when you have ties and you want all the highest numbers. However, I question your use of `FULL JOIN`. Doesn't this do what you want? ``` SELECT TOP (1) S.Type AS Type, S.Version, COUNT(R.FK_StoveNbr) AS TimesRepaired FROM STOVE S INNER JOIN STOVE_REPAIR R ON S.SerialNumber = R.FK_StoveNbr GROUP BY S.Type, S.Version ORDER BY TimesRepaired DESC; ``` Because you are getting the highest number, I think we can assume that there is at least one repair record for the stove that is returned. Upvotes: 2 <issue_comment>username_3: The `MAX` and `COUNT` are two different functions that return two different values. You say > > I need to essentially get the Max Count of a Column. > This gives me the information I need, but now I need to make it so it > only displays the Type with the highest number from the > COUNT(R.FK\_StoveNbr) > > > When you run the code you referenced... ``` SELECT S.Type AS Type, S.Version, COUNT(R.FK_StoveNbr) AS TimesRepaired FROM STOVE AS S FULL JOIN STOVE_REPAIR AS R ON S.SerialNumber = R.FK_StoveNbr GROUP BY S.Type, S.Version ORDER BY TimesRepaired DESC; ``` You get a table that has the "TYPE", "VERSION" and "COUNT" - which represents the number of times that type and version appear in the data set. You say that you need just the 1 record... ``` type version times repaired --------- ----- -------------- FiredAlways 2 2 ``` Would there be a need for other "MAX" of other types? Such as... 
``` type version times repaired --------- ----- -------------- FiredAlways 2 2 FiredAtCamp 3 0 FiredNow 2 1 ``` There are always a few ways to do things, depending on how things are set up and what you are truly looking for. Which is why I restate things above and ask those questions, just for clarity. In order to get something like what I had posted, what I would recommend doing is a JOIN to a subquery, where the subquery looks for the MAX of the COUNT based on the TYPE. Then this subquery is joined back to the original result set using the TYPE and the COUNT. Off the top of my head it could be such as.... ``` SELECT S.Type AS Type, S.Version, COUNT(R.FK_StoveNbr) AS TimesRepaired FROM STOVE AS S FULL JOIN STOVE_REPAIR AS R ON S.SerialNumber = R.FK_StoveNbr INNER JOIN ( SELECT C.Type, MAX(C.TimesRepaired) AS MXTimesRepaired FROM ( SELECT S2.Type, S2.Version, COUNT(R2.FK_StoveNbr) AS TimesRepaired FROM STOVE AS S2 FULL JOIN STOVE_REPAIR AS R2 ON S2.SerialNumber = R2.FK_StoveNbr GROUP BY S2.Type, S2.Version ) AS C GROUP BY C.Type ) AS SI ON S.Type = SI.Type GROUP BY S.Type, S.Version, SI.MXTimesRepaired HAVING COUNT(R.FK_StoveNbr) = SI.MXTimesRepaired ORDER BY TimesRepaired DESC; ``` That gives you a smaller, distinct result set, though it can still return multiple records per type. However, without knowing more about some of the other fields, the question needs to be "what makes that record you are looking for the MAX or only record needed?" Is there an ID field that we can look at or something else that would make us want to look for that one record? Upvotes: 0
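To see the group-count-order-take-top idea end to end, here is a hypothetical, self-contained demo using Python's built-in SQLite driver. Note that SQLite has no `TOP`, so `ORDER BY ... LIMIT 1` plays that role, and the table and column names are simplified stand-ins for the schema in the question:

```python
# Tiny end-to-end demo of "group, count, order, take the top row".
# SQLite has no TOP; ORDER BY ... LIMIT 1 serves the same purpose.
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE stove (serial INTEGER, type TEXT, version INTEGER);
    CREATE TABLE stove_repair (repair_nbr INTEGER, fk_stove INTEGER);
    INSERT INTO stove VALUES (1,'FiredAlways',2),(2,'FiredAlways',1),
                             (3,'FiredNow',2),(4,'FiredAtCamp',1);
    INSERT INTO stove_repair VALUES (10,1),(11,1),(12,2),(13,3);
""")
row = con.execute("""
    SELECT s.type, s.version, COUNT(r.fk_stove) AS times_repaired
    FROM stove s LEFT JOIN stove_repair r ON s.serial = r.fk_stove
    GROUP BY s.type, s.version
    ORDER BY times_repaired DESC
    LIMIT 1
""").fetchone()
print(row)  # ('FiredAlways', 2, 2)
```

A `LEFT JOIN` is used so unrepaired stoves still appear with a count of 0, mirroring the role the `FULL JOIN` plays in the original query.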
2018/03/14
581
2,332
<issue_start>username_0: I need to perform a very simple operation that involves the network. I know that this must be done with an Async Task, because running a task that involves network operations on the main thread is bad. Since it is pretty verbose using the classic way ``` private class LongOperation extends AsyncTask<String, Void, String> { @Override protected String doInBackground(String... params) { //to do return "Executed"; } @Override protected void onPostExecute(String result) { //to do } @Override protected void onPreExecute() {} @Override protected void onProgressUpdate(Void... values) {} } ``` for a static method invocation that must download only a few bits, I'm wondering if there is some more concise form or alternative that I could use.<issue_comment>username_1: You don't need an `AsyncTask`. It is just a convenience to use because it already has callback methods. You can create a new `Thread` and execute your network call there. Upvotes: 1 <issue_comment>username_2: There are many ways to create a worker thread. It depends on what you are doing with the network. * If you just want to do **simple network operations** such as downloading some JSON data and then using it to update the UI: use `AsyncTask`. * If you want to do **long network operations** which involve moderate to large amounts of data (either uploading or downloading): use `Thread`. * If you want to continuously send/receive messages to/from the Internet, use `HandlerThread`. So in conclusion: `AsyncTask` is already the **simplest and easiest** to use. Besides, you don't need to override all methods; just override `doInBackground()` and `onPostExecute()` if you want to receive data. See: [Asynctask vs Thread in android](https://stackoverflow.com/questions/18480206/asynctask-vs-thread-in-android) Upvotes: 1 <issue_comment>username_3: You can always use `AsyncTask.execute(() -> { /* network operations */ });`, the downside is you don't get any of the callbacks so I tend to only use this when prototyping or with *very* simple tasks. 
Upvotes: 0 <issue_comment>username_4: Use an anonymous class inside a method: ``` public void asyncOperation () { new AsyncTask<String, Void, Boolean>() { @Override protected Boolean doInBackground(String... params) { // Your background operation, network, database, etc return true; } }.execute(); } ``` Upvotes: 0
2018/03/14
1,567
5,255
<issue_start>username_0: I am trying to get a formula to calculate the Net Present Value using Rent, Number of periods, Discount Rate and Future Value. I am able to get the Present Value; however, I need a formula to calculate the Net Present Value on today's date or any date a user inputs. My code is as below: ```js function PV() { var future = 5000; var type = 1; periods = 12; rate = document.getElementById("rate").value; var ratePercent = periods * 100; rate = Math.pow(1 + rate / 100, 1 / 365) - 1; rate = eval(rate); periods = eval(periods); // Return present value if (rate === 0) { document.getElementById("presentResultValue").value = -payment * periods - future; } else { document.getElementById("presentResultValue").value = ( (((1 - Math.pow(1 + rate, periods)) / rate) * payment * (1 + rate * type) - future) / Math.pow(1 + rate, periods) ).toFixed(2); } } ``` I am also using Excel to calculate this but need a way to convert it to JavaScript. I am also attaching my work with Excel. [ExcelNPV](https://i.stack.imgur.com/0YddP.png) I am still learning JavaScript so any help will be greatly appreciated. Thank you.<issue_comment>username_1: One big thing you are missing with this is the cost. You have periods, you have rate, you have rate percent. But there is nothing there for the "cost". You say you have this in Excel, and other languages. If you understand the calculations in Excel, then you may understand how you should do it in JavaScript. In Excel you use the nifty function `=NPV(....` where it takes 2 arguments. The first is the "RATE" and the second is the "VALUE", which can be multiple values. When you do this in Excel, one of the values you pass in would be the total cost. So you have something like this.... `=NPV(2%,[total cost],[Year1Value],[Year2Value],[Year3Value],....)` or `=NPV(A1,A2:A12)` The "Total Cost" would go where you spend the money....assuming it's at the end of Year 1, it would go before the "value" / return from year one. 
With that being said, another thing to consider is determining WHEN the cost was needed. If the cost is upfront, then it would be taken out of the "array" of values section and added to the NPV calculations such as.... `=NPV(A1,A3:A12) + A2` Where cell A2 is the Cost (upfront) and A1 is the rate and A3-A12 are all the returns. As a simple example, if you have the rate somewhere on the page, you can loop through the arguments that are going to be passed to it such as below.... ``` function NPV() { var args = []; for (var i = 0; i < arguments.length; i++) { args = args.concat(arguments[i]); } // parse the rate to a number so "1 + rate" is arithmetic, not string concatenation var rate = parseFloat(document.getElementById("rate").value); var value = 0; // loop through each argument, and calculate the value // based on the rate, and add it to the "value" variable // creating a running total for (var i = 1; i < args.length; i++) { value = value + ((args[i])/(Math.pow(1 + rate, i))); } return value; } ``` Additionally, you could also look for a library such as Finance.js <http://financejs.org/> Upvotes: 0 <issue_comment>username_2: Came across **NPV formula** on my project and had to create a function for it. Finance.js is great but I didn't want to clutter my code with it because my problem was just getting this formula. **ES5 Function** ```js /** * Calculates the Net Present Value of a given initial investment * cost and an array of cash flow values with the specified discount rate. 
* * @param {number} rate - The discount rate percentage * @param {number} initialCost - The initial investment * @param {array} cashFlows - An array of future payment amounts * @return {number} The calculated Net Present Value */ function getNPV(rate, initialCost, cashFlows) { var npv = initialCost; for (var i = 0; i < cashFlows.length; i++) { npv += cashFlows[i] / Math.pow(rate / 100 + 1, i + 1); } return npv; } ``` **Using the function:** ```js var rate = 10; var initialCost = -25000; var cashFlows = [-10000, 0, 10000, 30000, 100000]; console.log(getNPV(rate, initialCost, cashFlows)); // expected output: 56004.77488497429 ``` **JavaScript ES6 Function:** > > <https://www.evermade.fi/net-present-value-npv-formula-for-javascript/> > > > ```js /** * Calculates the Net Present Value of a given initial investment * cost and an array of cash flow values with the specified discount rate. * @param {number} rate - The discount rate percentage * @param {number} initialCost - The initial investment * @param {array} cashFlows - An array of future payment amounts * @return {number} The calculated Net Present Value */ function getNPV(rate, initialCost, cashFlows) { return cashFlows.reduce( (accumulator, currentValue, index) => accumulator + currentValue / Math.pow(rate / 100 + 1, index + 1), initialCost ); } ``` I've explained most of it on my blogpost. Hope this helps :) Upvotes: 2 <issue_comment>username_3: This worked for me ``` function getNPV(rate, a, payment) { var npv = 0; for (var i = 0; i < a.length; i++) { npv += payment / Math.pow(1 + rate, a[i]); console.log(payment / Math.pow(1 + rate, a[i])) } return npv; } ``` Upvotes: 0
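The discounting logic above is easy to cross-check outside JavaScript. Here is a small Python sketch of the same formula (assuming the ES5 function's conventions: `rate` given as a percentage and the initial cost left undiscounted):

```python
def get_npv(rate, initial_cost, cash_flows):
    """Net Present Value: the initial cost plus each cash flow
    discounted by (1 + rate/100) ** period."""
    npv = initial_cost
    for i, flow in enumerate(cash_flows, start=1):
        npv += flow / (1 + rate / 100) ** i
    return npv

# Mirrors the JavaScript example above
print(get_npv(10, -25000, [-10000, 0, 10000, 30000, 100000]))
# ≈ 56004.77488497429, matching the JS result
```

The agreement with the JavaScript output is a quick way to convince yourself both implementations discount the flows the same way.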
2018/03/14
755
2,349
<issue_start>username_0: I can read every pixel's RGB of the image already, but I don't know how to change the RGB values to half and save the result as an image. Thank you in advance. ``` from PIL import * def half_pixel(jpg): im=Image.open(jpg) img=im.load() print(im.size) [xs,ys]=im.size #width*height # Examine every pixel in im for x in range(0,xs): for y in range(0,ys): #get the RGB color of the pixel [r,g,b]=img[x,y] ```<issue_comment>username_1: If you have Numpy and Matplotlib installed, one solution would be to convert your image to a numpy array and then e.g. save the image with matplotlib. ``` import matplotlib.pyplot as plt import numpy as np from PIL import Image img = Image.open(jpg) arr = np.array(img) arr = arr/2 # divide each pixel in each channel by two plt.imsave('output.png', arr.astype(np.uint8)) ``` Be aware that you need to have a version of PIL >= 1.1.6 Upvotes: 0 <issue_comment>username_2: There are many ways to do this with Pillow. You can use [Image.point](https://pillow.readthedocs.io/en/stable/reference/Image.html#PIL.Image.Image.point), for example. ``` from PIL import Image # Function to map over each channel (r, g, b) on each pixel in the image def change_to_a_half(val): return val // 2 im = Image.open('./imagefile.jpg') im = im.point(change_to_a_half) # point returns a new image ``` The function is actually only called 256 times (assuming 8-bit color depth), and the resulting map is then applied to the pixels. This is much faster than running a nested loop in Python. Upvotes: 1 <issue_comment>username_3: You can do everything you are wanting to do within PIL. 
If you are wanting to reduce the value of every pixel by half, you can do something like: ``` from PIL import Image im = Image.open('input_filename.jpg') im = im.point(lambda x: x * .5) # point returns a new image im.save('output_filename.jpg') ``` You can see more info about point operations here: <https://pillow.readthedocs.io/en/latest/handbook/tutorial.html#point-operations> Additionally, you can do arbitrary pixel manipulation through the pixel access object: `px = im.load()` followed by `px[x, y] = (r, g, b)` Upvotes: 2 [selected_answer]<issue_comment>username_4: * get the RGB color of the pixel ``` [r,g,b]=img.getpixel((x, y)) ``` * update new rgb value ``` r = r + rtint g = g + gtint b = b + btint value = (r,g,b) ``` * assign new rgb value back to pixel ``` img.putpixel((x, y), value) ``` Upvotes: 2
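The per-pixel arithmetic the answers describe is the same whichever API applies it. As a dependency-free sketch (a plain nested list stands in for the PIL image here, since the point is just the channel math):

```python
# A tiny 2x2 "image": rows of (r, g, b) tuples standing in for PIL pixels
image = [
    [(255, 128, 64), (10, 20, 30)],
    [(0, 2, 4), (100, 101, 102)],
]

# Integer-halve every channel of every pixel, like im.point(lambda v: v // 2)
halved = [[(r // 2, g // 2, b // 2) for (r, g, b) in row] for row in image]

print(halved[0][0])  # (127, 64, 32)
```

With a real PIL image, `Image.point` performs exactly this mapping, but via a 256-entry lookup table instead of a Python loop.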
2018/03/14
1,256
3,965
<issue_start>username_0: When using [scipy.ndimage.interpolation.shift](https://docs.scipy.org/doc/scipy-0.16.1/reference/generated/scipy.ndimage.interpolation.shift.html) to shift a numpy data array along one axis with periodic boundary treatment (`mode = 'wrap'`), I get an unexpected behavior. The routine tries to force the first pixel (`index 0`) to be identical to the last one (`index N-1`) instead of the "last plus one (`index N`)". Minimal example: ``` # module import import numpy as np from scipy.ndimage.interpolation import shift import matplotlib.pyplot as plt # print scipy.__version__ # 0.18.1 a = range(10) plt.figure(figsize=(16,12)) for i, shift_pix in enumerate(range(10)): # shift the data via spline interpolation b = shift(a, shift=shift_pix, mode='wrap') # plotting the data plt.subplot(5,2,i+1) plt.plot(a, marker='o', label='data') plt.plot(np.roll(a, shift_pix), marker='o', label='data, roll') plt.plot(b, marker='o',label='shifted data') if i == 0: plt.legend(loc=4,fontsize=12) plt.ylim(-1,10) ax = plt.gca() ax.text(0.10,0.80,'shift %d pix' % i, transform=ax.transAxes) ``` Blue line: data before the shift Green line: expected shift behavior Red line: actual shift output of [scipy.ndimage.interpolation.shift](https://docs.scipy.org/doc/scipy-0.16.1/reference/generated/scipy.ndimage.interpolation.shift.html) Is there some error in how I call the function or how I understand its behavior with `mode = 'wrap'`? The current results are in contrast to the mode parameter description from the related [scipy tutorial page](https://docs.scipy.org/doc/scipy-0.18.1/reference/tutorial/ndimage.html#filter-functions) and from another [StackOverflow post](https://stackoverflow.com/questions/22669252/how-exactly-does-the-reflect-mode-for-scipys-ndimage-filters-work). Is there an off-by-one-error in the code? 
Scipy version used is 0.18.1, distributed in anaconda-2.2.0 [![enter image description here](https://i.stack.imgur.com/QNJo2.png)](https://i.stack.imgur.com/QNJo2.png)
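For integer shifts, the expected periodic result (the `np.roll` curve in the question's plot) can be reproduced without SciPy at all. This pure-Python helper mimics `np.roll` on a list, which makes the intended wrap-around behaviour explicit:

```python
def roll(seq, shift):
    """Cyclically shift a sequence to the right by `shift` positions, like np.roll."""
    n = len(seq)
    if n == 0:
        return list(seq)
    shift %= n  # a shift of n (or 0) is a no-op
    return list(seq[-shift:]) + list(seq[:-shift])

data = list(range(10))
print(roll(data, 3))  # [7, 8, 9, 0, 1, 2, 3, 4, 5, 6]
```

Note `shift %= n` makes `shift = 0` fall through to the identity, because `seq[-0:]` is the whole sequence and `seq[:-0]` (i.e. `seq[:0]`) is empty.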
2018/03/14
819
3,266
<issue_start>username_0: I'm currently evaluating VSTS, but I'm concerned about some of the limitations of the "free" version. I believe there is a 10 GB storage limit. Is this for everything (source code, build artifacts, packages, etc.), and is there any way to increase this? I've also seen a limit of four hours of build time per month - that's only 12 minutes a day! I'm finding that even a small solution takes a few minutes to build; our "real" solutions are significantly larger and we often check in code many times during a typical day. What happens if this build limit is exceeded? Are you prevented from building until the next billing month? Am I right in saying that purchasing a hosted pipeline (aka "hosted CI/CD") at US$40/month would overcome this limit?<issue_comment>username_1: I'm not sure where you got that idea from. There are no limits on storage for source code, packages, build artifacts, or test results that I'm aware of. There is a [10 GB limit](https://learn.microsoft.com/en-us/vsts/build-release/concepts/agents/hosted#capabilities-and-limitations) for hosted build agents, but that just refers to the amount of source code/build output that can be present on a single instance of the hosted agent. Honestly, if your source code is anywhere near 10 GB, you're going to find the hosted build agents to be inadequate anyway. Regarding build, refer to the [documentation](https://learn.microsoft.com/en-us/vsts/billing/buy-more-build-vs). You can either pay for [Microsoft-hosted agents](https://marketplace.visualstudio.com/items?itemName=ms.build-release-hosted-pipelines) or purchase [private pipelines](https://marketplace.visualstudio.com/items?itemName=ms.build-release-private-pipelines), which enable you to set up your own build infrastructure. 
Every VSTS account has one free private pipeline, so you actually don't need to pay anything extra at all, assuming you take on the job of maintaining your own build/release server and only need one thing to run at a time. Upvotes: 3 [selected_answer]<issue_comment>username_2: The "free" VSTS, as you say, has a limit of five users with basic access. For stakeholders, you can add as many as you need. For build, you have up to 4 h/month. But if you want to use CI, that is probably not enough. If you will use it only to build at certain points manually, it could be a start. With your free account you could download and install a private build agent. This will have no minute limits. So you could implement a CI build, for instance. Hosted agents have up to 10 GB of storage. But again, if you use a private one, you will not have this limit. For other stuff like code, workitems and so on as far as I know there are no limits. [Here](https://learn.microsoft.com/en-us/vsts/billing/buy-more-build-vs) you can see how to buy more hosted agents. Depending on your needs, you could go after Microsoft Action Pack, which will give you internal licenses for other Microsoft software as well as more VSTS users via an MSDN subscription. Since you are evaluating, you can take a look at [this link](https://learn.microsoft.com/en-us/vsts/collaborate/rate-limits) for more global resource limitations, but they are pretty high, since Microsoft itself uses VSTS. Upvotes: 1
2018/03/14
586
2,500
<issue_start>username_0: If I find that some user exists with such parameters, I want to get 1, otherwise 0. In the future I'll have to add more blocks. But it doesn't seem to work now. What am I doing wrong? ``` SELECT CAST(CASE WHEN EXISTS(SELECT 1 FROM Customers WHERE Country = 'France' AND PostalCode%2 = 0) OR (WHERE Country = 'Germany' AND PostalCode%2 = 0)) ) THEN 1 ELSE 0 END AS BIT) ```<issue_comment>username_1: You need two separate `exists`: ``` SELECT CAST(CASE WHEN EXISTS (SELECT 1 FROM Customers WHERE Country = 'France' AND PostalCode%2 = 0 ) THEN 1 WHEN EXISTS (SELECT 1 FROM Customers WHERE Country = 'Germany' AND PostalCode%2 = 0 ) THEN 1 ELSE 0 END AS BIT) ``` Actually, I broke this into two separate `THEN` clauses. This is almost equivalent to using `OR`, but because the logic is inside a `CASE`, `THEN` seems more natural. (The difference is that the optimizer could choose to re-arrange the `OR` conditions, but the `THEN` conditions are executed in lexical order.) If your statements are actually this simple, you can combine them as: ``` SELECT CAST(CASE WHEN EXISTS (SELECT 1 FROM Customers WHERE Country IN ('France', 'Germany') AND PostalCode%2 = 0 ) THEN 1 ELSE 0 END AS BIT) ``` Upvotes: 2 <issue_comment>username_2: It looks to me like you're just having issues with your bracketing: ``` SELECT CAST( CASE WHEN EXISTS( SELECT 1 FROM Customers WHERE (Country = 'France' AND PostalCode%2 = 0) OR (Country = 'Germany' AND PostalCode%2 = 0) ) THEN 1 ELSE 0 END AS BIT) ``` Building on Gordon's assumption that `PostalCode%2 = 0` for all tested 'sets' of conditionals (you haven't said as much yet), you could likewise shorten this to: ``` SELECT CAST( CASE WHEN EXISTS( SELECT 1 FROM Customers WHERE PostalCode%2 = 0 AND Country IN ('France', 'Germany') ) THEN 1 ELSE 0 END AS BIT) ``` Upvotes: 0
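The combined `EXISTS` condition is easy to sanity-check against a throwaway table. The sketch below uses Python's built-in `sqlite3` purely for illustration (an assumption — the question's dialect looks like SQL Server, so `CAST(... AS BIT)` is dropped and the sample rows are invented):

```python
import sqlite3

conn = sqlite3.connect(':memory:')
conn.execute("CREATE TABLE Customers (Country TEXT, PostalCode INTEGER)")
conn.executemany("INSERT INTO Customers VALUES (?, ?)",
                 [('France', 75001), ('Germany', 10114), ('Spain', 28001)])

# 1 if any French or German customer has an even postal code, else 0
row = conn.execute("""
    SELECT CASE WHEN EXISTS (
        SELECT 1 FROM Customers
        WHERE Country IN ('France', 'Germany') AND PostalCode % 2 = 0
    ) THEN 1 ELSE 0 END
""").fetchone()
print(row[0])  # 1, because the Germany/10114 row matches
```

Deleting the even German postal code (or making it odd) flips the result to 0, which is an easy way to convince yourself the predicate is wired up correctly.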
2018/03/14
410
1,515
<issue_start>username_0: I want to check if my response object contains the mentioned properties using a chai `should` assertion. Below is my code snippet: ``` chai.request(app) .post('/api/signup') .send( data ) .then(function (response) { response.should.have.status(200); response.body.should.have.property('active', 'mobileNumber', 'countryCode', 'username', 'email', 'id', 'createdAt', 'updatedAt', 'location'); done(); }) .catch(function (error) { done(error); }) ``` But I am getting the below error: [![enter image description here](https://i.stack.imgur.com/I1wF9.png)](https://i.stack.imgur.com/I1wF9.png)<issue_comment>username_1: Found how to achieve it using mocha chai-should. The function is `.should.have.keys()`. Just pass the properties to this function and it will check that they are present in the object you are testing. Below is the code: ``` response.body.should.have.keys('active', 'mobileNumber', 'countryCode', 'username', 'email', 'id', 'organisationId', 'createdAt', 'updatedAt', 'location'); ``` Upvotes: 5 [selected_answer]<issue_comment>username_2: If you're exclusively utilising Chai (in Postman, for example), the following string of functions will successfully aid you: ``` response.body.to.include.all.keys("active", "mobileNumber"); ``` [The full list of APIs (including an example of the above) can be found here.](https://www.chaijs.com/api/bdd/) Upvotes: 2
2018/03/14
431
1,506
<issue_start>username_0: I am trying to get a list of users who submitted a specific event in a website, however when I run the query, I am not getting the full result set - for example, I found 2 users who had the event (and I used the same conditions), but are not in the result set. The script looks like: ``` select userid , Date from c where year(c.Date) = 2018 and week(c.Date) = (week(getdate()) - 1) and Event in ('existing', 'submit_existing', 'submit_new') group by 1,2 ``` Can anybody give me a hint what might be the issue? If anything is unclear or if you have any questions, let me know! Thanks!<issue_comment>username_1: so I managed to find the answer! The reason for my problem is that in dbeaver there is a `Maximum result-set size` field and if you have a lower number than the result set you are expecting - you will get duplicates and missing data, because each time it runs it shows a new result set. Hope this can help somebody! Upvotes: 1 <issue_comment>username_2: <https://github.com/dbeaver/dbeaver/issues/1708> `Ctrl+Alt+Shift+A` or right click menu: **Execute** => **Select all rows**. Upvotes: 3 <issue_comment>username_3: If you want to change the default number of returned rows for all future times [![configure the default number of rows returned](https://i.stack.imgur.com/zYPbE.png)](https://i.stack.imgur.com/zYPbE.png) In dbeaver, you have to go to the "window" menu, then select preferences, then navigate to Editors / Results Upvotes: -1
2018/03/14
1,718
6,232
<issue_start>username_0: I am not sure the question is well put, because I understood how, but I don't know how to phrase the question around the thing I don't understand. Here it is: I have some classes: ``` class Animal {}; class Rabbit : public Animal {}; class Horse : public Animal {}; class Mouse : public Animal {}; class Pony : public Horse {}; ``` My goal was to find the maximum size from this object list in order to use it in memory allocation afterwards. I've stored each object's `sizeof` in an array and then took the max of the array. My superior (to whom I sent the code for review) suggested using a union in order to find the maximum size **at compile time**. The idea seemed very nice to me, so I did it like this: ``` typedef union { Rabbit rabbitObject; Horse horseObject; Mouse mouseObject; Pony ponyObject; } Size; ``` ... because a union allocates memory according to the greatest-in-size element. The next suggestion was to do it like this: ``` typedef union { unsigned char RabbitObject[sizeof(Rabbit)]; unsigned char HorseObject[sizeof(Horse)]; unsigned char MouseObject[sizeof(Mouse)]; unsigned char PonyObject[sizeof(Pony)]; } Interesting; ``` My question is: How does the `Interesting` union get the maximum object size? To me, it makes no sense to create an array of type unsigned char of length `sizeof(class)` inside it. Why would the second option solve the problem when the previous union doesn't? What's happening behind the scenes that I'm missing? PS: Circumstances are such that I cannot ask him personally. Thank you in advance<issue_comment>username_1: The assumptions are incorrect, and the question is moot. The standard does not require the union size to be equal to the size of the largest member. Instead, it requires the union size to be **sufficient** to hold the largest member, which is not the same at all. Both solutions are flawed if the size of the largest class needs to be known exactly. 
Instead, something like that should be used: ``` template <typename T, typename... Ts> struct max_size : std::integral_constant<std::size_t, std::max(sizeof(T), max_size<Ts...>::value)> { }; template <typename T> struct max_size<T> : std::integral_constant<std::size_t, sizeof(T)> { }; ``` As @Caleth suggested below, it could be shortened using the initializer-list version of `std::max` (and variable templates): ``` template <typename... Ts> constexpr std::size_t max_size_v = std::max({sizeof(Ts)...}); ``` Upvotes: 3 <issue_comment>username_2: Your superior suggested you use the array version because a union could have padding. For instance if you have ``` union padding { char arr[sizeof (double) + 1]; double d; }; ``` This could either be of size `sizeof(double) + 1` or it could be `sizeof (double) * 2` as the union could be padded to keep it aligned for `double`s ([Live example](http://coliru.stacked-crooked.com/a/0b6b2fa57448bbbd)). However if you have ``` union padding { char arr[sizeof(double) + 1]; char d[sizeof(double)]; }; ``` Then the union need not be double-aligned and it most likely has a size of `sizeof(double) + 1` ([Live example](http://coliru.stacked-crooked.com/a/9e87023fa56a840c)). This is not guaranteed though and the size can be greater than its largest element. If you want the largest size for sure, I would suggest using ``` auto max_size = std::max({sizeof(Rabbit), sizeof(Horse), sizeof(Mouse), sizeof(Pony)}); ``` Upvotes: 0 <issue_comment>username_3: The two approaches provide a way to find a maximum size that all of the objects of the union will fit within. I would prefer the first as it is clearer as to what is being done and the second provides nothing that the first does not for your needs. And the first, a union composed of the various classes, offers the ability to access a specific member of the union as well. 
See also [Is a struct's address the same as its first member's address?](https://stackoverflow.com/questions/9254605/is-a-structs-address-the-same-as-its-first-members-address) as well as [sizeof a union in C/C++](https://stackoverflow.com/questions/740577/sizeof-a-union-in-c-c) and [Anonymous union and struct [duplicate]](https://stackoverflow.com/questions/25542390/anonymous-union-and-struct). For some discussions on memory layout of classes see the following postings: * [Structure of a C++ Object in Memory Vs a Struct](https://stackoverflow.com/questions/422830/structure-of-a-c-object-in-memory-vs-a-struct) * [How is the memory layout of a class vs. a struct](https://stackoverflow.com/questions/26939609/how-is-the-memory-layout-of-a-class-vs-a-struct) * [memory layout C++ objects [closed]](https://stackoverflow.com/questions/1632600/memory-layout-c-objects) * [C++ Class Memory Model And Alignment](https://stackoverflow.com/questions/6881172/c-class-memory-model-and-alignment) * [What does an object look like in memory? [duplicate]](https://stackoverflow.com/questions/12378271/what-does-an-object-look-like-in-memory) * [C++11 introduced a standardized memory model. What does it mean? And how is it going to affect C++ programming?](https://stackoverflow.com/questions/6319146/c11-introduced-a-standardized-memory-model-what-does-it-mean-and-how-is-it-g?rq=1) Since the compiler is free to add to the sizes of the various components in order to align variables on particular memory address boundaries, the size of the union may be larger than the actual size of the data. Some compilers offer a pragma or other type of directive to instruct the compiler as to whether packing of the class, struct, or union members should be done or not. The size as reported by `sizeof()` will be the size of the variable or type specified however again this may include additional unused memory area to pad the variable to the next desirable memory address alignment. 
See [Why isn't sizeof for a struct equal to the sum of sizeof of each member?](https://stackoverflow.com/questions/119123/why-isnt-sizeof-for-a-struct-equal-to-the-sum-of-sizeof-of-each-member). Typically a class, struct, or union is sized so that if an array of the type is created then each element of the array will begin on the most useful memory alignment such as a double word memory alignment for an Intel x86 architecture. This padding is typically on the end of the variable. Upvotes: 1 [selected_answer]
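The padding effect username_2 demonstrates is not specific to C++. As a cross-language illustration (an assumption that Python is handy — its `struct` module exposes the same native-alignment rules), a `char` followed by a `double` grows past the packed 9 bytes once alignment padding is inserted:

```python
import struct

# '=' uses standard sizes with no padding: 1-byte char + 8-byte double = 9 bytes
packed = struct.calcsize('=bd')

# '@' uses native size and alignment: the double is typically aligned to an
# 8-byte boundary, so padding is inserted after the leading char
native = struct.calcsize('@bd')

print(packed, native)  # native is at least as large as packed
```

On a typical x86-64 build the native layout comes out to 16 bytes, which mirrors the `sizeof(double) * 2` case in the union example above.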
2018/03/14
792
2,677
<issue_start>username_0: I have a string with a known beginning and end, but I want to match only the unknown center. For example, say you knew you were going to have strings which said "I had \_\_\_\_\_\_\_\_ for lunch today" and you only wanted to match the blank. Here's what I have tried: `^I had (.*) for lunch today$` Which matches the entire string, and also the group, which is the blank. So when given "I had pizza for lunch today" it produces two matches: "I had pizza for lunch today" and "pizza" Is there any way to only match the blank? Is there any way to just get "pizza"? Or at least get "pizza" as the first match?<issue_comment>username_1: ``` (?<=^I had )(.*?)(?= for lunch today$) ``` (?<=) and (?=) are described here: [what is the difference between ?:, ?! and ?= in regex?](https://stackoverflow.com/questions/10804732/what-is-the-difference-between-and-in-regex/10804791) => ``` (?<=^I had ): It starts with "I had " but this is not captured. (?= for lunch today$): It ends with " for lunch today" but this is not captured ``` => ``` /(?<=^I had )(.*?)(?= for lunch today$)/_/ ``` should work Or like that, if positive look behind is not supported: ``` /(^I had )(.*?)(?= for lunch today$)/$1_/ ``` => I had \_ for lunch today Upvotes: 2 <issue_comment>username_2: It doesn't use RegEx and isn't the prettiest, but this works perfectly assuming that you know the words that wrap around the string you are trying to find. ```js const testString = 'I had pizza for lunch today'; const getUnknown = (str, startWord, endWord) => str.split(startWord)[1] .split(endWord)[0] .trim(); console.log(getUnknown(testString, 'had', 'for')); ``` Upvotes: 0 <issue_comment>username_3: You constructed a capturing group; use it. When you create a regular expression with something wrapped in parentheses, in your case `(.*?)`, you have created a [capturing group](https://regexone.com/lesson/capturing_groups). 
Whatever is matched by capturing groups will be returned alongside the full match. [`String.prototype.match()`](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/String/match) returns an array. Its first element is the full match. All subsequent elements are captured results. Since you only have one capturing group, the result is `matches[1]`. ```js var sentences = [ "I had pizza for lunch today", "I had hamburger for lunch today", "I had coconut for lunch today", "I had nothing for lunch today" ]; sentences.map(function(sentence) { console.log(sentence + " -> " + sentence.match(/^I had (.*?) for lunch today$/)[1]) }) ``` Upvotes: 3 [selected_answer]
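The capturing-group idea carries over to other regex engines unchanged. For readers outside JavaScript, here is the same pattern in Python's `re` module (purely illustrative):

```python
import re

pattern = re.compile(r'^I had (.*?) for lunch today$')

match = pattern.match('I had pizza for lunch today')
print(match.group(0))  # the full match: 'I had pizza for lunch today'
print(match.group(1))  # the captured blank: 'pizza'
```

As in JavaScript, group 0 is the whole match and group 1 is the text captured by the parentheses.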
2018/03/14
1,387
4,501
<issue_start>username_0: Here is a snippet of Java code that has really baffled me over the past couple of days. The goal is to insert only one line of code in the given place such that the number printed after "Given: " is 5050. I do not want to write multiple lines or change any of the existing lines of code. ``` public static void main(String args[]) { for(int x = 1; x <= 100; x++) { // In one line, write code such that 5050 is printed out. } System.out.println("Given: " + x); System.out.println("Expected: 5050"); } ``` I know that 5050 is the sum of the first 100 natural numbers, and this is evident in the for loop, which sets x to each of these consecutive numbers during each occurrence. If I could find a way to add the values of x to each other, that could have been a solution. The problem is that I want the value of x to be 5050 when I exit the loop, so that the "Given: " line prints out 5050 as the value of x. I also know that I can use another variable to store the temporary value of the sum, i.e. `y += x;`, however, this would be impossible since I wouldn't be able to declare y multiple times within the loop, and the value of x needs to be 5050, not y. Also, if I try `x += x`, the result will definitely not be 5050 because of the way the variable is being changed by both the for loop execution and the addition operation. So, is there actually a solution to this problem?<issue_comment>username_1: You have to make two changes. First, you **must** make `x` visible outside the `for` loop. Otherwise there is literally **no** way to access it after the loop. Then, all you have to do is set `x` to the desired value (minus one), which will terminate the loop after the value is incremented and tested. 
Like,

```
int x;
for (x = 1; x <= 100; x++) {
    x = 5050 - 1;
}
System.out.println("Given: " + x);
System.out.println("Expected: 5050");
```

Outputs

```
Given: 5050
Expected: 5050
```

The **only *other*** legal way to write it is like

```
for (int x = 1; x <= 100; x++) {
}
int x = 5050;
{
}
System.out.println("Given: " + x);
System.out.println("Expected: 5050");
```

which isn't "really" kosher in my opinion. Note that we terminate the loop, add a new `x` variable and an empty block in that one line. Upvotes: 4 [selected_answer]<issue_comment>username_2: You can close the body of the `for`-loop in this line, and introduce a new variable `x` in the same line:

```
public static void main(String args[]) {
    for(int x = 1; x <= 100; x++) {
    };
    String x = "5050";
    {
    }
    System.out.println("Given: " + x);
    System.out.println("Expected: 5050");
}
```

Greetings from Bobby Tables...

---

**EDIT**: As @ElliottFrish has pointed out, the following trick with `System.exit(0)` after the first loop iteration does not work, because there is still no `x` in scope:

```
// Doesn't work.
public static void main(String args[]) {
    for(int x = 1; x <= 100; x++) {
        System.out.println("Given: 5050");
        System.out.println("Expected: 5050");
        System.exit(0);
    }
    System.out.println("Given: " + x);
    System.out.println("Expected: 5050");
}
```

However, we can force this `System.exit(0);`-solution to compile by moving the given `System.out.println`s into an unrelated method:

```
class BobbyForloops {
    public static void main(String args[]) {
        for(int x = 1; x <= 100; x++) {
            System.out.println("Given: 5050\nExpected: 5050");
            System.exit(0);
    }}
    public static void unrelated(int x) {{
    }
    System.out.println("Given: " + x);
    System.out.println("Expected: 5050");
    }
}
```

Now it again compiles and outputs what was asked. But it's just a variation of the first solution. **Edit:** Thanks @Dukeling for proposing a much shorter solution that uses `System.exit(0);`. 
@Dukeling's solution is actually even shorter, because it uses a `break` instead of `System.exit(0)`. Upvotes: 2 <issue_comment>username_3: The comment within the code doesn't say **where** the one line has to be placed, although your post suggested it needs to replace the comment. Taking the comment literally though, this works: ``` public class X { private static final String x = "5050"; public static void main(String args[]) { for(int x = 1; x <= 100; x++) { // In one line, write code such that 5050 is printed out. } System.out.println("Given: " + x); System.out.println("Expected: 5050"); } } ``` Upvotes: 1
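As a side note to the entry above, the target value itself is easy to verify: 5050 is the sum of the first 100 naturals, which matches Gauss's closed form n(n+1)/2. A small illustrative check:

```java
public class SumCheck {
    public static void main(String[] args) {
        int sum = 0;
        for (int i = 1; i <= 100; i++) {
            sum += i; // accumulate 1 + 2 + ... + 100
        }
        // Gauss's closed form for the sum of the first n naturals
        int closedForm = 100 * 101 / 2;
        System.out.println(sum + " == " + closedForm); // 5050 == 5050
    }
}
```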
2018/03/14
404
1,296
<issue_start>username_0: From this string:

```
dfasd {{test}} asdhfj {{te{st2}} asdfasd {{te}st3}}
```

I would like to get the following substrings:

```
test, te{st2, te}st3
```

In other words I want to keep everything inside double curly braces **including single curly braces**. I can't use this pattern:

```
{{(.*)}}
```

because it matches the whole thing between the first {{ and the last }}:

```
test}} asdhfj {{te{st2}} asdfasd {{te}st3
```

I managed to get the first two with this regex pattern:

```
{{([^}]*)}}
```

Is there any way to get all three using regex?<issue_comment>username_1: Try `{{(.*?)}}`. `.*?` means to do a lazy / non-greedy search => as soon as }} matches, it will capture the found text and stop looking. Otherwise it will do a greedy search and therefore start with the first {{ and end with the very last }}. Upvotes: 5 [selected_answer]<issue_comment>username_2: This isn't that pretty, but it doesn't use RegEx and makes it clear what you are trying to accomplish.

```js
const testString = 'dfasd {{test}} asdhfj {{te{st2}} asdfasd {{te}st3}}';

const getInsideDoubleCurly = (str) => str.split('{{')
  .filter(val => val.includes('}}'))
  .map(val => val.substring(0, val.indexOf('}}')));

console.log(getInsideDoubleCurly(testString));
```

Upvotes: 2
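Both answers can be exercised against the sample string; here is an illustrative sketch (not from the original thread) using `matchAll` with the lazy pattern:

```js
const input = "dfasd {{test}} asdhfj {{te{st2}} asdfasd {{te}st3}}";

// lazy match: for each "{{", stop at the first "}}" that follows
const viaRegex = [...input.matchAll(/{{(.*?)}}/g)].map(m => m[1]);
console.log(viaRegex); // [ 'test', 'te{st2', 'te}st3' ]
```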
2018/03/14
348
1,257
<issue_start>username_0: I need to update Java version

```
java version "1.4.2"
```

I don't want to change the version of my JDK, so if I update Java, will the JDK update too?

EDIT @UnholySheep

> You mean you want to update the JRE (runtime environment)? That is
> indeed separate from the JDK

<issue_comment>username_1: `java -version` will update only if you point the java environment variable to the new JDK/JRE installation. You can have multiple JDK installations and switch between them whenever you like. For the current JDK in use you just have to set (on Windows) the JAVA_HOME environment variable to point to the correct JDK folder you intend to use at the moment. Of course, this is only helpful for use in the command line. If you are using an IDE it's even simpler: you can choose/switch the JDK you want to use for every Java project from the project settings. For Linux, you can change the current JDK in use like this:

```
export JAVA_HOME=/usr/lib/jvm/jdk1.8.0_144/jre
export PATH=$JAVA_HOME/bin:$PATH
sudo update-alternatives --config java
```

Upvotes: 3 [selected_answer]<issue_comment>username_2: if you want to upgrade your Java version in your Linux system then you can go with

```
sudo upgrade java
```

Upvotes: 0
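The PATH mechanism the accepted answer relies on can be demonstrated without installing a second JDK, by putting a stub `java` first on the `PATH` (purely illustrative; the stub only echoes a made-up version string):

```shell
# create a fake "JDK" whose java binary reports a made-up version
mkdir -p /tmp/fakejdk/bin
printf '#!/bin/sh\necho "java version \\"9.9.9\\""\n' > /tmp/fakejdk/bin/java
chmod +x /tmp/fakejdk/bin/java

# point JAVA_HOME at it and put its bin directory first on PATH
export JAVA_HOME=/tmp/fakejdk
export PATH="$JAVA_HOME/bin:$PATH"

# "java" now resolves to the stub, showing that the reported
# version follows whatever is first on the PATH
java -version
```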
2018/03/14
3,202
10,183
<issue_start>username_0: I see the use of [flip-flop](https://docs.perl6.org/language/operators#index-entry-Flipflop_operator) in doc.perl6.org, see the code below:

```
my $excerpt = q:to/END/;
Here's some unimportant text.
=begin code
This code block is what we're after.
We'll use 'ff' to get it.
=end code
More unimportant text.
=begin code
I want this line.
and this line as well.
HaHa
=end code
More unimport text.
=begin code
Let's to go home.
=end code
END

my @codelines = gather for $excerpt.lines {
    take $_ if "=begin code" ff "=end code"
}

# this will print four lines, starting with "=begin code" and ending with
# "=end code"
.say for @codelines;

=begin code
This code block is what we're after.
We'll use 'ff' to get it.
=end code
=begin code
I want this line.
and this line as well.
HaHa
=end code
=begin code
Let's to go home.
=end code
```

I want to save the lines between `=begin code` and `=end code` into separate arrays, such as this:

```
['This code block is what we're after.', 'We'll use 'ff' to get it.']
['I want this line.', 'and this line as well.', 'HaHa']
['Let's to go home.']
```

I know a grammar can do this, but I want to know if there is a better way?<issue_comment>username_1: You need to specify that you want the matched values to not be included. You do this by adding `^` to the side of the operator you want excluded. In this case it is both sides of the operator. You also need to collect the values up to have them grouped together. The simplest way in this case is to do the `take` between matches. 
(If you wanted the endpoints included it would require more thought to get it right)

```perl6
my @codelines = gather {
    my @current;
    for $excerpt.lines {
        if "=begin code" ^ff^ "=end code" {
            # collect the values between matches
            push @current, $_;
        } else {
            # take the next value between matches
            # don't bother if there weren't any values matched
            if @current {
                # you must do something so that you aren't
                # returning the same instance of the array
                take @current.List;
                @current = ();
            }
        }
    }
}
```

If you need the result to be an array of arrays (mutable).

```perl6
if @current {
    take @current;
    @current := []; # bind it to a new array
}
```

---

An alternative would be to use `do for` with sequences that share the same iterator. This works because `for` is more eager than `map` would be.

```perl6
my $iterator = $excerpt.lines.iterator;

my @codelines = do for Seq.new($iterator) {
    when "=begin code" {
        do for Seq.new($iterator) {
            last when "=end code";
            $_<> # make sure it is decontainerized
        }
    }

    # add this because `when` will return False if it doesn't match
    default { Empty }
}
```

`map` takes one sequence and turns it into another, but doesn't do anything until you try to get the next value from the sequence. `for` starts iterating immediately, only stopping when you tell it to. So `map` would cause race conditions even when running on a single thread, but `for` won't. Upvotes: 4 [selected_answer]<issue_comment>username_2: You can also use good old regular expressions:

```
say ( $excerpt ~~ m:s:g{\=begin code\s+(.+?)\s+\=end code} ).map( *.[0] ).join("\n\n")
```

`s` for significant whitespace (not really needed), `g` for extracting all matches (not the first one), `.map` goes through the returned [Match object](https://docs.perl6.org/type/Match) and extracts the first element (it's a data structure that contains the whole matched code). This creates a List that is ultimately printed with every element separated by two CRs. 
Upvotes: 2 <issue_comment>username_3: another [answer](https://www.reddit.com/r/perl6/comments/ahfexj/idiomatic_way_to_pass_around_a_seq_or_iterator/) in reddit by [bobthecimmerian](https://www.reddit.com/user/bobthecimmerian), i copy it here for completeness: ``` my $excerpt = q:to/END/; Here's some unimportant text. =begin code This code block is what we're after. We'll use 'ff' to get it. =end code More unimportant text. =begin code I want this line. and this line as well. HaHa =end code More unimport text. =begin code Let's to go home. =end code END sub doSomething(Iterator $iter) { my @lines = []; my $item := $iter.pull-one; until ($item =:= IterationEnd || $item.Str ~~ / '=end code' /) { @lines.push($item); $item := $iter.pull-one; } say "Got @lines[]"; } my Iterator $iter = $excerpt.lines.iterator; my $item := $iter.pull-one; until ($item =:= IterationEnd) { if ($item.Str ~~ / '=begin code' /) { doSomething($iter); } $item := $iter.pull-one; } ``` the output is: ``` Got This code block is what we're after. We'll use 'ff' to get it. Got I want this line. and this line as well. HaHa Got Let's to go home. ``` Upvotes: 2 <issue_comment>username_3: use grammar: ``` #use Grammar::Tracer; #use Grammar::Debugger; my $excerpt = q:to/END/; Here's some unimportant text. =begin code This code block is what we're after. We'll use 'ff' to get it. =end code More unimportant text. =begin code I want this line. and this line as well. HaHa =end code More unimport text. =begin code Let's to go home. 
=end code
END

grammar ExtractSection {
    rule TOP { ^ <section>+ %% <.comment> $ }
    token section { <line>+ % <.ws> }
    token line { <!before <comment>> \N+ \n }
    token comment { ['=begin code' | '=end code' ] \n }
}

class ExtractSectionAction {
    method TOP($/) { make $/.values».ast }
    method section($/) { make ~$/.trim }
    method line($/) { make ~$/.trim }
    method comment($/) { make Empty }
}

my $em = ExtractSection.parse($excerpt, :actions(ExtractSectionAction)).ast;

for @$em -> $line {
    say $line;
    say '-' x 35;
}
```

Output:

```
Here's some unimportant text.
-----------------------------------
This code block is what we're after.
We'll use 'ff' to get it.
-----------------------------------
More unimportant text.
-----------------------------------
I want this line.
and this line as well.
HaHa
-----------------------------------
More unimport text.
-----------------------------------
Let's to go home.
-----------------------------------
```

But it included unrelated lines, based on @username_1's solution, I updated the above answer as follows (and thanks again):

```
#use Grammar::Tracer;
#use Grammar::Debugger;

my $excerpt = q:to/END/;
Here's some unimportant text.
=begin code
This code block is what we're after.
We'll use 'ff' to get it.
=end code
More unimportant text.
=begin code
I want this line.
and this line as well.
HaHa
=end code
More unimport text.
=begin code
Let's to go home.
=end code
END

grammar ExtractSection {
    token start   { ^^ '=begin code' \n }
    token finish  { ^^ '=end code' \n }
    token line    { ^^ <( \N+ )> \n }
    token section { <start> ~ <finish> <line>+? }
    token comment { ^^ \N+ \n }
    token TOP     { [ <section> || <comment> ]+ }
}

class ExtractSectionAction {
    method TOP($/) { make @<section>».ast.List }
    method section($/) { make ~«@<line>.List }
    method line($/) { make ~$/.trim }
    method comment($/) { make Empty }
}

my $em = ExtractSection.parse($excerpt, :actions(ExtractSectionAction)).ast;

for @$em -> $line {
    say $line.perl;
    say '-' x 35;
}
```

Output is:

```
$("This code block is what we're after.", "We'll use 'ff' to get it.")
-----------------------------------
$("I want this line.", "and this line as well.", "HaHa")
-----------------------------------
$("Let's to go home.",)
-----------------------------------
```

So it works as expected. Upvotes: 2 <issue_comment>username_3: It could be another solution, use [rotor](https://docs.perl6.org/routine/rotor)

```
my $excerpt = q:to/END/;
Here's some unimportant text.
=begin code
This code block is what we're after.
We'll use 'ff' to get it.
=end code
More unimportant text.
=begin code
I want this line.
and this line as well.
HaHa
=end code
More unimport text.
=begin code
Let's to go home.
=end code
END

my @sections = gather for $excerpt.lines -> $line {
    if $line ~~ /'=begin code'/ ff $line ~~ /'end code'/ {
        take $line.trim;
    }
}

my @idx = # gather take the indices of every `=begin code` and `=end code`
    gather for @sections.kv -> $k, $v {
        if $v ~~ /'=begin code'/ or $v ~~ /'end code'/ {
            take $k;
        }
    }

my @r = # gather take the lines except every line of `=begin code` and `=end code`
    gather for @sections.kv -> $k, $v {
        if $v !~~ /'=begin code' | '=end code'/ {
            take $v;
        }
    }

my @counts = @idx.rotor(2)».minmax».elems »-» 2;
say @r.rotor(|@counts).perl;
```

Output:

```
(("This code block is what we're after.", "We'll use 'ff' to get it."), ("I want this line.", "and this line as well.", "HaHa"), ("Let's to go home.",)).Seq
```

Upvotes: 1 <issue_comment>username_3: Another answer:

```
my $excerpt = q:to/END/;
Here's some unimportant text.
=begin code
This code block is what we're after.
We'll use 'ff' to get it.
=end code
More unimportant text.
=begin code
I want this line.
and this line as well.
HaHa
=end code
More unimport text.
=begin code
Let's to go home.
=end code
END

for $excerpt.comb(/'=begin code' \s* <( .+? )> \s+ '=end code' /) -> $c {
    say $c;
    say '-' x 15;
}
```

Upvotes: 0 <issue_comment>username_3: use the [comb](https://docs.raku.org/routine/comb) operator:

```
my $str = q:to/EOS/;
Here's some unimportant text.
=begin code
This code block is what we're after.
We'll use 'ff' to get it.
=end code
More unimportant text.
=begin code
I want this line.
and this line as well.
HaHa.
=end code
More unimport text.
=begin code
Let's go home.
=end code
EOS

my token separator { '=begin code' \n | '=end code' \n }
my token lines { [<!separator> .]+ }

say $str.comb( /
    <lines>       # match lines that not start with
                  # =begin code or =end code
    <separator>   # match lines that start with
                  # =begin code or =end code
    <(            # start capture
    <lines>+      # match lines between
                  # =begin code and =end code
    )>            # end capture
    <separator>   # match lines that start with
                  # =begin code or =end code
/).raku;
```

Output:

```
("This code block is what we're after.\nWe'll use 'ff' to get it.\n", "I want this line.\nand this line as well.\nHaHa.\n", "Let's go home.\n").Seq
```

Upvotes: 0
2018/03/14
957
2,649
<issue_start>username_0: I'm trying to delete an attribute and its value from a hash. It seems simple based on answers I see on here, but it doesn't appear to work for me. Curious if anyone has any thoughts as to why? Also... this is NOT a duplicate of the question that was linked. I have tried except and slice... neither of those works either. I'm guessing my dataset is different. Here is an example hash I have:

```
{:data=>[{:id=>1, :make=>"Ford", :model=>"Excursion", :year=>2018, :color=>"silver", :vin=>"123456789F22"},
         {:id=>2, :make=>"Mazda", :model=>"RX7", :year=>1980, :color=>"blue", :vin=>"123456789F22"},
         {:id=>3, :make=>"Chevy", :model=>"Dorado", :year=>2018, :color=>"white", :vin=>"123456789F22"}]}
```

I have tried the following:

```
hashval.delete("color")
hashval.except!("color")
hashval.each {|h| h.delete("color")}
```

I also tried :color in case the string format was wrong

```
hashval.delete(:color)
hashval.except!(:color)
hashval.each {|h| h.delete(:color)}
```

but when I try to display the resulting hash

```
logger.info "hash result: #{hashval}"
```

I still see the original hash with the color still in there. Any thoughts on what I am doing wrong?

Ok... more info! If I do this:

```
hashval.delete(:data)
```

It does delete :data (and everything else after that). So it has something to do with the attributes in that hash array? 
As it turns out, the answer is:

```
hashval = { data: vehicles.map { |v| v.table_data.except(:color) } }
```

I guess this issue was marked closed as a duplicate (even though it wasn't) so I can't add the solution.<issue_comment>username_1: Your keys are symbols, so `hash.delete(:color)` should work:

```
h = {:id=>1, :make=>"Ford", :model=>"Excursion", :year=>2018, :color=>"silver", :vin=>"123456789F22"}
h.key?(:color) # => true
h.delete(:color)
h.key?(:color) # => false
h # => {:id=>1, :make=>"Ford", :model=>"Excursion", :year=>2018, :vin=>"123456789F22"}
```

Also `hash` might be a reserved word since if I open irb or console and type `hash` I get back an integer. I have no idea what it is, but it makes me think `hash` shouldn't be used as a var. Upvotes: 2 <issue_comment>username_2: `hash#delete` works if you use a symbol:

```
irb
irb(main):001:0> hash = {:id=>1, :make=>"Ford", :model=>"Excursion", :year=>2018, :color=>"silver", :vin=>"123456789F22"}
=> {:id=>1, :make=>"Ford", :model=>"Excursion", :year=>2018, :color=>"silver", :vin=>"123456789F22"}
irb(main):002:0> hash.delete(:color)
=> "silver"
irb(main):003:0> hash
=> {:id=>1, :make=>"Ford", :model=>"Excursion", :year=>2018, :vin=>"123456789F22"}
```

Upvotes: 0
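The structure the asker eventually described (an array of hashes under `:data`) can also be cleaned with plain Ruby. A sketch with made-up records, using `reject` so it does not depend on ActiveSupport's (or Ruby 3's) `Hash#except`:

```ruby
payload = { data: [
  { id: 1, make: "Ford",  color: "silver" },
  { id: 2, make: "Mazda", color: "blue"   }
] }

# drop :color from every hash in the array, leaving the rest intact
cleaned = { data: payload[:data].map { |h| h.reject { |k, _| k == :color } } }
puts cleaned.inspect
```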
2018/03/14
585
1,484
<issue_start>username_0: I find that `/*` and `*` have different meanings when I compare these settings:

```
*
!/init.el
!/README.md
!.gitignore
!/lib/
```

VS

```
/*
!/init.el
!/README.md
!.gitignore
!/lib/
```

The former does not track `/lib/` but the latter tracks it. How should I understand `*` and `/*`?

**UPDATE**

I think this question is not exactly the same as the other one: that one focuses on `dir/**` and `dir/*`, but this one focuses on `/*` and `*`.
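The difference can be observed directly with `git check-ignore` (an illustrative sketch with hypothetical file names): a bare `*` matches at every directory depth, so files under the re-included `lib/` are ignored again, while `/*` only matches entries at the repository root, leaving the contents of `lib/` alone.

```shell
# scratch repository with a root file and a file inside lib/
git init -q demo && cd demo
mkdir lib && touch init.el lib/helper.el

# Case 1: '*' matches at every level, so lib/helper.el is re-ignored
# even though '!/lib/' re-includes the directory itself
printf '*\n!/init.el\n!/lib/\n' > .gitignore
git check-ignore -q lib/helper.el && echo "ignored under *"

# Case 2: '/*' only matches root entries; nothing matches lib/helper.el,
# so the contents of the re-included lib/ stay tracked
printf '/*\n!/init.el\n!/lib/\n' > .gitignore
git check-ignore -q lib/helper.el || echo "tracked under /*"
```

Here `check-ignore` exits 0 when a path is ignored, so the two `echo`s show the asymmetry the question describes.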
2018/03/14
3,113
10,572
<issue_start>username_0: So I installed Android Studio 3.0.1, and as soon as it opened, the Gradle build ran and showed the following errors. I tried adding dependencies such as design and support, but in vain. Could someone help me? [![enter image description here](https://i.stack.imgur.com/HQnI6.png)](https://i.stack.imgur.com/HQnI6.png) It shows that some attributes such as `dialogCornerRadius` and `fontVariationSettings` were not found.<issue_comment>username_1: Check your dependencies for uses of `+` in the versions. Some dependency could be using `com.android.support:appcompat-v7:+`. This leads to problems when a new version gets released and could break features. The solution for this would be to either use `com.android.support:appcompat-v7:{compileSdkVersion}.+` or don't use `+` at all and use the full version (ex. `com.android.support:appcompat-v7:26.1.0`). If you cannot see a line in your build.gradle files for this, run one of the following in the Android Studio terminal to get an overview of what each dependency uses: `gradlew -q dependencies app:dependencies --configuration debugAndroidTestCompileClasspath` (includes androidTest dependencies) OR `gradlew -q dependencies app:dependencies --configuration debugCompileClasspath` (regular dependencies for debug), which results in something that looks close to this

```
------------------------------------------------------------
Project :app
------------------------------------------------------------

debugCompileClasspath - Resolved configuration for compilation for variant: debug
...
+--- com.android.support:appcompat-v7:26.1.0 | +--- com.android.support:support-annotations:26.1.0 | +--- com.android.support:support-v4:26.1.0 (*) | +--- com.android.support:support-vector-drawable:26.1.0 | | +--- com.android.support:support-annotations:26.1.0 | | \--- com.android.support:support-compat:26.1.0 (*) | \--- com.android.support:animated-vector-drawable:26.1.0 | +--- com.android.support:support-vector-drawable:26.1.0 (*) | \--- com.android.support:support-core-ui:26.1.0 (*) +--- com.android.support:design:26.1.0 | +--- com.android.support:support-v4:26.1.0 (*) | +--- com.android.support:appcompat-v7:26.1.0 (*) | +--- com.android.support:recyclerview-v7:26.1.0 | | +--- com.android.support:support-annotations:26.1.0 | | +--- com.android.support:support-compat:26.1.0 (*) | | \--- com.android.support:support-core-ui:26.1.0 (*) | \--- com.android.support:transition:26.1.0 | +--- com.android.support:support-annotations:26.1.0 | \--- com.android.support:support-v4:26.1.0 (*) +--- com.android.support.constraint:constraint-layout:1.0.2 | \--- com.android.support.constraint:constraint-layout-solver:1.0.2 (*) - dependencies omitted (listed previously) ``` If you have no control over changing the version, Try forcing it to use a specific version. 
``` configurations.all { resolutionStrategy { force "com.android.support:appcompat-v7:26.1.0" force "com.android.support:support-v4:26.1.0" } } ``` The force dependency may need to be different depending on what is being set to 28.0.0 Upvotes: 4 <issue_comment>username_2: The dependencies must be applied as shown below to solve this issue : ``` dependencies { implementation fileTree(include: ['*.jar'], dir: 'libs') implementation 'com.android.support.constraint:constraint-layout:1.0.2' testImplementation 'junit:junit:4.12' androidTestImplementation 'com.android.support.test:runner:1.0.1' androidTestImplementation 'com.android.support.test.espresso:espresso-core:3.0.1' implementation 'com.android.support:support-v4:27.1.0' implementation 'com.android.support:appcompat-v7:27.1.0' implementation 'com.android.support:recyclerview-v7:27.1.0' } ``` Please do not use the version of : > > v7:28.0.0-alpha1 > > > Upvotes: 1 <issue_comment>username_3: I had the exact same issue. The following thread helped me solve it. Just set your Compile SDK version to Android P. <https://stackoverflow.com/a/49172361/1542720> > > I fixed this issue by selecting: > > > > > > > API 27+: Android API 27, P preview (Preview) > > > > > > > > > in the project structure settings. the following image shows my > settings. The 13 errors that were coming while building the app, have > disappeared. > > > [![Gradle settings](https://i.stack.imgur.com/0YqC3.png)](https://i.stack.imgur.com/0YqC3.png) > > > Upvotes: 5 <issue_comment>username_4: This error occurs because of mismatched `compileSdkVersion` and library version. 
for example:

```
compileSdkVersion 27

implementation 'com.android.support:appcompat-v7:26.1.0'
implementation 'com.android.support:design:26.1.0'
```

and also avoid using the + sign with a library, as in the following:

```
implementation 'com.android.support:appcompat-v7:26.+'
```

use the exact library version like this

```
implementation 'com.android.support:appcompat-v7:26.1.0'
```

Using the + sign with a library makes it difficult for the build process to gather the exact version that is required, making the system unstable, hence it should be discouraged. Upvotes: 8 <issue_comment>username_5: Maybe it's too late, but I found a solution: in `build.gradle` you can either update the `compileSdkVersion` to the latest (now it is 28), like that:

```
android {
    compileSdkVersion 28
    defaultConfig {
        applicationId "NAME_OF_YOUR_PROJECT_DIRECTORY"
        minSdkVersion 21
        targetSdkVersion 28
        versionCode 1
        versionName "1.0"
        testInstrumentationRunner "android.support.test.runner.AndroidJUnitRunner"
    }
}
```

or you can change the version of the implementation:

```
dependencies {
    implementation fileTree(dir: 'libs', include: ['*.jar'])
    api 'com.android.support:design:27.+'
    implementation 'com.android.support:appcompat-v7:27.1.1'
    implementation 'com.android.support.constraint:constraint-layout:1.1.2'
    testImplementation 'junit:junit:4.12'
    androidTestImplementation 'com.android.support.test:runner:1.0.2'
    androidTestImplementation 'com.android.support.test.espresso:espresso-core:3.0.2'
}
```

Upvotes: 4 <issue_comment>username_6: In my case, this error occurred while I was using the

```
implementation 'com.android.support:appcompat-v7:+'
implementation 'com.android.support:design:+'
```

libraries together with Google's

```
implementation 'com.google.android.material:material-components:+'
```

library. If this is the case in your project, I highly recommend fully removing the Google Material Components library from your project. 
Upvotes: 0 <issue_comment>username_7: If you have migrated to AndroidX and are getting this error, you need to set the compile SDK to Android 9.0 (API level 28) or higher Upvotes: 6 <issue_comment>username_8: **This happens because compileSdkVersion, buildToolsVersion and the dependency versions do not match. Make them consistent; for example, with the 28 libraries:**

```
compileSdkVersion 28
targetSdkVersion 28
buildToolsVersion 28.0.3

implementation 'com.android.support:design:28.0.0'
implementation 'com.android.support:appcompat-v7:28.0.0'
```

If you use a version lower than 28 anywhere, this error can occur, so please keep the library versions matched everywhere. 
Upvotes: 1 <issue_comment>username_14: Had the same issue while working on an application with several modules, check to make sure as you increase the `compileSdkVersion` and `targetSdkVersion` to 28+ values in a module you also do for the others. A module was running on `compileSdkVersion 29` and `targetSdkVersion 29` while a second module of the application was running on `compileSdkVersion 27` and `targetSdkVersion 27`. Changing the second module to also compile for and target SDK version 29 resolved my issue. Hope this helps someone. Upvotes: 4 <issue_comment>username_15: in my case, I try tried `File`>`Invalidate Cache/Restart` and that works for me. Upvotes: 2 <issue_comment>username_16: In my case, I was getting this error in AndroidStudio 4.1.1 while updating compileSdkVersion to 29. If you are having dependent project in build.gradle, All you need to do is Update same compileSdkVersion in dependent project's build.gradle as well. Steps: 1. Click on your app folder in Project view of AndroidStudio. 2. Select Open Module Settings. 3. In Project Structure >> Check how many modules are there? 4. If there are more than one modules, you will have to update compileSdkVersion, buildToolsVersion & Project dependency as well. That worked for me :) Upvotes: 3 <issue_comment>username_17: change useAndroidX and enableJetifier to false in the file gradle.properties and remove androidx in Dependecies in my case Upvotes: 1 <issue_comment>username_18: First, make sure your min compileSdkVersion is 29 Second, You need to match the compileSdkVersion and the compatible buildToolsVersion. You can refer the following answer for more details: <https://stackoverflow.com/a/71663728/9420335> Upvotes: 0 <issue_comment>username_19: compileSdkVersion 28, changing compileSdkVersion to 28, solved the problem. 
Upvotes: 1 <issue_comment>username_20: I fixed this error by adding this to build.gradle (:app) in the android section compileOptions { sourceCompatibility 1.8 targetCompatibility 1.8 } Upvotes: 0
2018/03/14
747
2,575
<issue_start>username_0: I have a factory that creates buttons,

```
var btnFactory = (fn, text) => {
  var btn = $(`<button>${text}</button>`);
  btn.bind("click", fn);
  return btn;
};
```

I want to be able to insert multiple buttons, events already bound to handlers, into an element so I end up with,

```
<div>
  <button>Button1</button>
  <button>Button2</button>
</div>
```

I'm trying to figure out how to use `.html()` for it, but so far it's eluded me.<issue_comment>username_1: I think what you're asking is how to use this function to generate the button? I put a couple different ways to do that in the snippet below:

```js
var btnFactory = (fn, text) => {
  var btn = $(`<button>${text}</button>`);
  btn.bind("click", fn);
  return btn;
};

// method 1
$('body').html(
  btnFactory(
    (function () {
      console.log('test 1')
    }),
    'test 1'
  )
)

// method 2
$('body').append(
  btnFactory(
    (function () {
      console.log('test 2');
    }),
    'test 2'
  )
)
```

```html
```

Upvotes: 1 <issue_comment>username_2: You don't need jQuery (and it's more efficient)

```
// reusable template element for cloning
const btnTemplate = (() => {
  const bt = document.createElement("button")
  bt.type = "button"
  // other things you want all buttons to have, classname, etc.
  return bt
})()

const btnFactory = ({ fn, text }) => {
  const btn = btnTemplate.cloneNode(false)
  btn.onclick = fn
  btn.innerHTML = text
  return btn
}
```

Can be used like

```
const items = [
  { text: "Button1", fn: e => console.log("Button1 clicked") },
  { text: "Button2", fn: e => console.log("Button2 clicked") }
]

// Higher-order helper to fold a collection and a factory into
// a documentFragment
const intoDocFrag = (factoryFn, xs) =>
  xs.reduce((frag, x) => {
    frag.appendChild(factoryFn(x))
    return frag
  }, document.createDocumentFragment())

document.body.appendChild(intoDocFrag(btnFactory, items))
```

Upvotes: 3 [selected_answer]<issue_comment>username_3: If you mean inserting a series of buttons with a for loop, then it's quite simple. 
You need to give the `div` element an ID, and create a variable like so: `var divElement = document.getElementById('divElement1');`. Then you create a for loop, and insert the amount of buttons like so:

```
var docFrag = document.createDocumentFragment()

for (var i = 1; i < (amount of buttons you want); i++) {
    var button = document.createElement("button");
    button.addEventListener("click", fn);
    button.textContent = "Button" + i;
    docFrag.appendChild(button);
}

divElement.appendChild(docFrag);
```

Hope this helps! Upvotes: 1
2018/03/14
1,694
4,984
<issue_start>username_0: I am trying to run the example in [this tutorial](https://docs.aws.amazon.com/iot/latest/developerguide/iot-embedded-c-sdk.html) from AWS IoT (AWS IoT Embedded C SDK). My `aws_iot_config.h` file has the following configuration: ``` #define AWS_IOT_MQTT_HOST "XXXXXXX.iot.us-east-2.amazonaws.com" ///< Customer specific MQTT HOST. The same will be used for Thing Shadow #define AWS_IOT_MQTT_PORT 8883 ///< default port for MQTT/S #define AWS_IOT_MQTT_CLIENT_ID "c-sdk-client-id" ///< MQTT client ID should be unique for every device #define AWS_IOT_MY_THING_NAME "SM1" ///< Thing Name of the Shadow this device is associated with #define AWS_IOT_ROOT_CA_FILENAME "iotRootCA.pem" ///< Root CA file name #define AWS_IOT_CERTIFICATE_FILENAME "deviceCert.crt" ///< device signed certificate file name #define AWS_IOT_PRIVATE_KEY_FILENAME "deviceCert.key" ///< Device private key filename ``` This is how my policies are: ``` { "Version": "2012-10-17", "Statement": [ { "Effect": "Allow", "Action": "iot:Connect", "Resource": "*" }, { "Effect": "Allow", "Action": "iot:Publish", "Resource": "arn:aws:iot:us-east-2:338639570104:topic/sm1" }, { "Effect": "Allow", "Action": "iot:Subscribe", "Resource": "arn:aws:iot:us-east-2:338639570104:topic/sm1" } ] } ``` When I run the `subscribe_publish_sample` example, I am getting the following error: > > DEBUG: iot\_tls\_connect L#236 ok > > [ Protocol is TLSv1.2 ] > > [ Ciphersuite is TLS-ECDHE-ECDSA-WITH-AES-256-GCM-SHA384 ] > > > DEBUG: iot\_tls\_connect L#238 [ Record expansion is 29 ] > > > DEBUG: iot\_tls\_connect L#243 . Verifying peer X.509 certificate... > > DEBUG: iot\_tls\_connect L#252 ok > > > DEBUG: iot\_tls\_connect L#262 . Peer certificate information ... > > > DEBUG: iot\_tls\_connect L#264 cert. 
version : 3 > > serial number : 3C:75:FE:30:01:DD:A3:B9:EF:72:DC:F6:7A:5C:A2:54 > > issuer name : C=US, O=Symantec Corporation, OU=Symantec Trust Network, CN=Symantec Class 3 ECC 256 bit SSL CA - G2 > subject name : C=US, ST=Washington, L=Seattle, O=Amazon.com, Inc., CN=\*.iot.us-east-2.amazonaws.com > > issued on : 2017-10-12 00:00:00 > > expires on : 2018-10-13 23:59:59 > > signed using : ECDSA with SHA256 > > EC key size : 256 bits > > basic constraints : CA=false > > subject alt name : iot.us-east-2.amazonaws.com, \*.iot.us-east-2.amazonaws.com > > key usage : Digital Signature > > ext key usage : TLS Web Server Authentication, TLS Web Client Authentication > > > Subscribing... > > ERROR: main L#206 Error subscribing : -28 > > > Can anyone show me what is happening? Am I missing something?
2018/03/14
993
3,013
<issue_start>username_0: In the below code, the compiler can't figure out which constructor I want to use. Why, and how do I fix this? ([Live example](http://coliru.stacked-crooked.com/a/77f21f7dd5c964b3)) ``` #include #include #include template class A { public: using a\_type = std::tuple; using b\_type = std::tuple; inline explicit constexpr A(const std::function& Initializer, const std::function& Evaluator, const Type1& elem1, const Type2& elem2) { std::cout << "idx\_type" << std::endl; } inline explicit constexpr A(const std::function& Initializer, const std::function& Evaluator, const Type1& elem1, const Type2& elem2) { std::cout << "point\_type" << std::endl; } }; int main() { int a = 1; long long b = 2; auto c = A{ [](std::tuple p)->double { return 1.0\*std::get<0>(p) / std::get<1>(p); }, [](double d)->double { return d; }, b,a }; return 0; } ```<issue_comment>username_1: It doesn't work because a lambda is not a `std::function`, so the compiler tries to create one using [the fifth](http://en.cppreference.com/w/cpp/utility/functional/function/function) overload of the constructor. The problem is that both of your `A` constructors can be used because of this conversion, and the fact that the two tuple types are constructible from each other makes this ambiguous: the compiler cannot decide which constructor to pick.
What you could do is explicitly cast to the desired `std::function` (MCVE of @PasserBy used in comments), [like this](http://coliru.stacked-crooked.com/a/abe43f9a5961c15e): ``` #include #include #include template class A { public: using a\_type = std::tuple; using b\_type = std::tuple; A(const std::function&) { std::cout << "idx\_type" << std::endl; } A(const std::function&) { std::cout << "point\_type" << std::endl; } }; int main() { std::function)> func = [](auto p) -> double { return 1; }; auto c = A{ func }; } ``` Upvotes: 2 <issue_comment>username_2: As @SombreroChicken mentioned, `std::function` has a constructor that allows *any* callable object `c` to initialize it, as long as `c(Args...)` is valid and returns something convertible to `R`. To fix it, you may use some SFINAE machinery ``` #include #include #include #include template class A { template struct tag { operator T(); }; public: using a\_type = std::tuple; using b\_type = std::tuple; template>>\* = nullptr> A(C&& initializer) { std::cout << "size\_t" << std::endl; } template>>\* = nullptr> A(C&& initializer) { std::cout << "other" << std::endl; } }; int main() { auto c = A{ [](std::tuple p) -> double { return 1; } }; auto c2 = A{ [](std::tuple) -> double { return 2; } }; } ``` [Live](http://coliru.stacked-crooked.com/a/7931097ecbb0bbe5) Here, we turn off the constructor if the callable can be called with `b_type` or `a_type` respectively. The extra indirection through `tag` is there to disable the conversion between tuples of different types Upvotes: 2 [selected_answer]
2018/03/14
305
1,189
<issue_start>username_0: To be more specific, I want to modify some functions in `scikit-learn` and import it to Python. But I do not know how to make this work. I tried to modify the .py files directly where sklearn is stored in my local directory, but there are some files I could not open to modify, such as those with `.cp36-win_amd64`. Any advice would be helpful!<issue_comment>username_1: Delete the scikit-learn package, clone the version you are interested in from [github](https://github.com/scikit-learn/scikit-learn). Go to the directory where you've cloned it and run: ``` pip3 install -e ./ ``` This will install the package in development mode. Any changes you make will take effect the next time you run your application. Upvotes: 5 <issue_comment>username_2: Modifying source files is not a good idea...especially if you want to use the "unmodified" version later on. My advice would be to: * Check out the Scikit-learn repository on [github](https://github.com/scikit-learn/scikit-learn) * Give it a custom name (e.g. myScikitLearn) * Install it using `pip install -e .` from inside that directory * All modifications made to myScikitLearn source files can then be used immediately in your code Upvotes: 3
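A quick way to check which copy of a package Python will actually import — handy for verifying that an editable install took effect — is to resolve its module spec. A minimal sketch (the stdlib `json` package is used here only so the snippet runs anywhere; substitute `"sklearn"` in practice):

```python
import importlib.util

def package_location(name):
    # Resolve the file Python would import this package from,
    # without actually importing it.
    spec = importlib.util.find_spec(name)
    return spec.origin if spec else None

# After `pip3 install -e ./`, package_location("sklearn") should point
# into your cloned checkout rather than into site-packages.
print(package_location("json"))
```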
2018/03/14
331
1,115
<issue_start>username_0: ``` php session_start(); ? Register Login and Logout User Register, Login and Logout user php mysql ========================================== php if (isset($\_SESSION['message'])){ echo "<div id='error\_msg'" .$\_SESSION['message']. ""; unset($\_SESSION['message']); } ?> Home ==== #### Welcome php echo $\_SESSION ['username']; ? [Logout](Logout.php) ``` This is my code; how can I fix it? > > Notice: Undefined index: username in C:\xampp\htdocs\authentication\Online Catering Reservation System.php on line 29 > > ><issue_comment>username_1: You need to test the existence of the index first: ``` php if(isset($_SESSION['username'])) echo $_SESSION['username']; ? ``` Upvotes: 1 <issue_comment>username_2: Replace these lines ``` Home ==== #### Welcome php echo $\_SESSION ['username']; ? [Logout](Logout.php) ``` with ``` php if(isset($_SESSION['username']) && !empty($_SESSION['username'])){ ? Home ==== #### Welcome php echo $_SESSION['username']; ? [Logout](Logout.php) php }else{ ? //add code to redirect to home page php } ? ``` Upvotes: 2
2018/03/14
390
1,397
<issue_start>username_0: I need to create a CSV file and export it into a folder. These are our requirements... * Folder name needs to be the Office 365 tenant name. * CSV file name needs to be the tenant name. * Tenant name needs to be the bit before ".onmicrosoft.com". This is how far I've got: ``` $identity = Get-OrganizationConfig | Select identity Export-Csv \$identity.csv ``` The file name is `"@{Identity=COMPANY.onmicrosoft.com}.csv`. How do I select the start of the tenant name?<issue_comment>username_1: Try this ``` $identity = (Get-OrganizationConfig | Select identity).identity Export-Csv \$identity.csv ``` Upvotes: 0 <issue_comment>username_2: The first issue is that `Select-Object Identity` creates a new object with an `Identity` property. If you just want the value of the `Identity` property, use `Select-Object -ExpandProperty`: ``` $identity = Get-OrganizationConfig | Select -ExpandProperty Identity ``` To remove the last part of the tenant FQDN, you could use the `-replace` regex operator: ``` $identity = $identity -replace '\.onmicrosoft\.com$' ``` or the `String.Replace()` method, which takes the substring to find and its replacement: ``` $identity = $identity.Replace('.onmicrosoft.com', '') ``` If you want to capitalize the first letter in the tenant name, you could use `CultureInfo.TextInfo.ToTitleCase()`: ``` $identity = (Get-Culture).TextInfo.ToTitleCase($identity) ``` Upvotes: 2 [selected_answer]
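For comparison, the same suffix-strip-and-capitalize logic can be written in Python (illustration only — the PowerShell above is what actually runs against Exchange Online):

```python
import re

def tenant_name(fqdn):
    # Drop a trailing ".onmicrosoft.com", mirroring the
    # -replace '\.onmicrosoft\.com$' operator in the answer.
    name = re.sub(r"\.onmicrosoft\.com$", "", fqdn, flags=re.IGNORECASE)
    # Capitalize the first letter, like TextInfo.ToTitleCase().
    return name.title()

print(tenant_name("company.onmicrosoft.com"))  # Company
```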
2018/03/14
857
2,959
<issue_start>username_0: **Update** This is happening because of the hot-reloading that comes with Create React App. Related issues: <https://github.com/expressjs/multer/issues/566> <https://github.com/facebook/create-react-app/issues/4095> --- I am trying to learn file upload with Nodejs, Express, Multer and React for the frontend. I managed to upload files. There is a problem I'm struggling with: not always, but most of the time, the whole app refreshes after upload. Here is the relevant code. **My simple form** ``` Submit ``` **handleFileChange and handleFormSubmit** ``` handleFormSubmit = () => { const formData = new FormData(); formData.append( "file", this.state.file ); axios.post( "/api/upload", formData ); } handleFileChange = ( e ) => { this.setState( { file: e.target.files[ 0 ] } ); } ``` **Related express route code** ``` const express = require( "express" ); const multer = require( "multer" ); const storage = multer.diskStorage( { destination( req, file, cb ) { cb( null, "client/public/images" ); }, filename( req, file, cb ) { cb( null, `${ Date.now() }-${ file.originalname }` ); }, } ); const upload = multer( { storage } ); router.post( "/upload", upload.single( "file" ), ( req, res ) => { res.send(); } ); ``` I searched a little bit but no luck. I've seen [this post](https://stackoverflow.com/questions/47164372/prevent-redirect-on-file-upload-through-form-submittal-react-express-multer). Before seeing this I had already tried event.preventDefault(). Also, I've tried many things like uploading directly with onChange() without setting a state, then handling it with onSubmit(). Before simplifying the code (like posting directly in handleFormSubmit) I was trying to do this via Redux actions, but for debugging purposes I moved it here.<issue_comment>username_1: You are using ``` Submit ``` which is why it is refreshing, so do it like this ``` Submit ``` Upvotes: 0 <issue_comment>username_2: It is the first example [here](https://reactjs.org/docs/forms.html#controlled-components).
``` handleFormSubmit = async (e) => { e.preventDefault() // <-- missing this const formData = new FormData(); formData.append( "file", this.state.file ); const response = await axios.post( "/api/upload", formData ); } handleFileChange = ( e ) => { this.setState( { file: e.target.files[ 0 ] } ); } ``` Upvotes: 1 <issue_comment>username_3: remove this from multer ``` destination( req, file, cb ) { cb( null, "client/public/images" ); }, ``` Upvotes: 0 <issue_comment>username_4: > > This is happening because of hot-reloading comes with Creact React App. > > > I ran into this issue too, but using plain `webpack` without CRA. I was uploading the files to a static dir served by `webpack-dev-server`. I fixed it by setting `devServer.static.watch` to `false` ([webpack docs](https://webpack.js.org/configuration/dev-server/#watch)). Upvotes: 1
2018/03/14
1,500
5,233
<issue_start>username_0: I have started with an asp.net core 2 web app and I can publish it to App Service from Visual Studio using web deploy. I've created a new, clean .net core 2 console app. I'm able to upload it as a webjob and run it using the Azure Portal, but **how do I publish it from the local command line or Visual Studio?** Basically, I don't care whether it will be published alongside the Web Application or as standalone. EDIT: I've somehow managed to get the publish dialog by right clicking the project and selecting Publish (not Publish as Azure WebJob) as mentioned in the docs. But I still don't know what did the trick. Installing the Azure SDK? Adding webjob-publish-settings.json? Adding Setting.job?<issue_comment>username_1: There is a great article about [Develop and deploy WebJobs using Visual Studio - Azure App Service](https://learn.microsoft.com/en-us/azure/app-service/websites-dotnet-deploy-webjobs) that covers your question. Basically, after installing the prerequisites (depending on your VS version) you can right-click the Console Application project in the Solution Explorer, and then click **Publish as Azure WebJob**. Upvotes: 0 <issue_comment>username_2: **Publish .net core as webjob with Azure portal**: As you know: A WebJob looks for specific file types, for example (.cmd, .bat, .exe, etc…) To run a .NET Core console application you use the DOTNET command Therefore, you need to create a file with an extension that the WebJob host looks for and executes. 1.You could create a .net core console application. After running it, you will have the following files in your projectname/bin/Debug/netcoreapp2.0 [![enter image description here](https://i.stack.imgur.com/emCjC.png)](https://i.stack.imgur.com/emCjC.png) 2.Create a run.cmd file under it.
And the run.cmd content is as below: ``` @echo off dotnet ConsoleApp7.dll ``` 3.To deploy the .NET Core console application to an Azure App Service Web App Web Job access the Azure portal and navigate to the Azure App Service where you will host the WebJob. Click on the WebJobs link and the Add button. [![enter image description here](https://i.stack.imgur.com/C8TLm.png)](https://i.stack.imgur.com/C8TLm.png) 4.Upload the netcoreapp2.0.zip [![enter image description here](https://i.stack.imgur.com/zvM9h.png)](https://i.stack.imgur.com/zvM9h.png) 5.Once the WebJob is successfuly uploaded, it will render in the WebJob blade. Click on it and you will see the Run button. [![enter image description here](https://i.stack.imgur.com/zUutV.png)](https://i.stack.imgur.com/zUutV.png) 6.When you write output to the console using the WriteLine() method, it will show in the Run Details window on KUDU/SCM. [![enter image description here](https://i.stack.imgur.com/YBie6.png)](https://i.stack.imgur.com/YBie6.png) For more detail, you could refer to this [article](https://blogs.msdn.microsoft.com/benjaminperkins/2017/03/07/how-to-deploy-a-net-core-console-application-to-azure-webjob/) and this [one](https://blog.kloud.com.au/2016/06/08/azure-webjobs-with-dotnet-core-rc2/). **Update:(publish with command line)** 1.First, download your publish settings file of your webapp from Azure Portal. [![enter image description here](https://i.stack.imgur.com/wbBRV.png)](https://i.stack.imgur.com/wbBRV.png) 2.Prepare the .zip folder you have created. As David said, you could use WAWSDeploy to publish webjob with command line. You could download WAWSDeploy with this [link](https://github.com/davidebbo/WAWSDeploy). 3.Then go to WAWSDeploy/bin/Debug folder to open the local command line. 
Try the following command to deploy the webjob: ``` WAWSDeploy.exe DotNetCoreWebJobSample.zip [WEBSITE_NAME].PublishSettings /t app_data\jobs\triggered\DotNetCoreWebJobSample /v ``` The target directory will be `app_data\jobs\triggered\[WEBJOB_NAME]`. If this web job is a continuously running one, replace triggered with continuous. **Note**: you could put the WAWSDeploy.exe, the publish settings file and the .zip into one folder. If not, you should **give the full path of the publish settings and .zip file** so that the webjob publishes successfully. For more detail about WAWSDeploy, refer to this [article](http://blog.davidebbo.com/2014/03/WAWSDeploy.html). Upvotes: 3 <issue_comment>username_3: Make sure your csproj includes the correct SDK: ``` <Project Sdk="Microsoft.NET.Sdk"> ``` Then just right click on the project in Visual Studio and click publish, select Microsoft Azure App Service and you should see the WebJob publish options: [![enter image description here](https://i.stack.imgur.com/6zcXT.png)](https://i.stack.imgur.com/6zcXT.png) Also notice that you should use `Microsoft.NET.Sdk` and not Microsoft.NET.Sdk.Web. If you are using Microsoft.NET.Sdk.Web, Visual Studio assumes that you are deploying to a WebSite and not a WebJob. The publish dialogs are slightly different for WebSite and WebJob. For example, for a WebJob project you can specify the WebJob name. You might be interested in: * [github/aspnet/websdk/issue: WebJob publishing for Microsoft.NET.Sdk.Web](https://github.com/aspnet/websdk/issues/331) * [github/aspnet/Mvc/issue: How to publish console app as a WebJob rather than Web App](https://github.com/aspnet/Mvc/issues/7677) Upvotes: 3 [selected_answer]
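The zip uploaded in step 4 above just needs the publish output (including run.cmd) at its root. A minimal Python sketch of packaging it — the paths are illustrative and not part of the answers:

```python
import os
import zipfile

def package_webjob(publish_dir, zip_path):
    # Zip every file under the publish folder, preserving relative paths,
    # so run.cmd ends up at the archive root where the WebJob host expects it.
    with zipfile.ZipFile(zip_path, "w", zipfile.ZIP_DEFLATED) as zf:
        for root, _, files in os.walk(publish_dir):
            for name in files:
                full = os.path.join(root, name)
                zf.write(full, os.path.relpath(full, publish_dir))
```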
2018/03/14
869
3,126
<issue_start>username_0: I have a Symfony 3.3 contact form that sends an email. Now I am trying to add an attachment to the form. I insert the following line in my sendEmail function: ``` ->attach($data["attachment"]) ``` ... and I get the following error: > > Argument 1 passed to Swift\_Mime\_SimpleMessage::attach() must implement > interface Swift\_Mime\_MimeEntity, instance of > Symfony\Component\HttpFoundation\File\UploadedFile given > > > So my question is: **How do I convert my UploadedFile object into something that SwiftMailer will be happy with?** ==== Edit #1: I tried this with no success: ``` $fullFilePath = $data["attachment"]->getPath() . '/' . $data["attachment"]->getClientOriginalName(); $attachment = \Swift_Attachment::fromPath($fullFilePath); ``` Attaching that "attachment" just resulted in the email not being sent, though the application acted as if it had sent the form. ==== Edit #2: Progress! I'm now able to get a useful error. This code ... ``` $extension = $data["attachment"]->guessExtension(); if($extension !== 'rtf'){ die('Please give us an rtf file. TODO: Put a better message here!'); } $newFilePath = '/tmp'; $newFileName = 'temporary.rtf'; $data["attachment"]->move($newFilePath, $newFileName); ``` ... gives me an error like this: > > Could not move the file "/tmp/phpnIqXDr" to "/tmp/temporary.rtf" () > > > ... which is very frustrating, since I know that `/tmp` is writeable by every user.<issue_comment>username_1: Here is the code that ended up working for me: ``` private function sendEmail($data) { $vgmsContactMail = self::contactMail; $mailer = $this->get('mailer'); /* @var $uploadedFile UploadedFile */ $uploadedFile = $data["attachment"]; $extension = $uploadedFile->guessExtension(); if(!in_array($extension, ['pdf','rtf']) ){ die('Please upload a .pdf or .rtf file.'); } $newFilePath = '/tmp'; $newFileName = 'temporary' . rand(0,10000) . 
'.rtf'; $uploadedFile->move($newFilePath, $newFileName); $attachment = \Swift_Attachment::fromPath('/tmp/' . $newFileName); $message = \Swift_Message::newInstance("VGMS Contact Form: ". $data["subject"]) ->setFrom(array($vgmsContactMail => "Message by ".$data["name"])) ->setTo(array( $vgmsContactMail => $vgmsContactMail )) ->setBody($data["message"]." ContactMail :".$data["email"]) ->attach($attachment) ; return $mailer->send($message); } ``` Upvotes: 0 <issue_comment>username_2: You don't need to move the file, Symfony\Component\HttpFoundation\File\UploadedFile class returns the path and has methods to get the filename and mimetype. This code works for me: ``` $message->attach( \Swift_Attachment::fromPath($data["attachment"]) ->setFilename( $data["attachment"]->getClientOriginalName() ) ->setContentType( $data["attachment"]->getClientMimeType() ) ); ``` Credit to [toolpixx](https://gist.github.com/toolpixx/1b4b02d687aeb9993be9) Upvotes: 3 [selected_answer]
2018/03/14
818
2,798
<issue_start>username_0: There are around 9000 users in users\_table and the user with the highest order count has around 300 orders. When I query with the code below everything works just fine until the `$match` stage. It takes more than 20 secs so the query is resulted with timeout error. I know I can increase the time to get rid of that error but I want to know WHY a simple `$match` stage takes very long time? ``` db.getCollection('users_table').aggregate([ { // user.id field is related to order.user_id field '$lookup': { 'from': 'orders_table', 'localField': 'id', 'foreignField': 'user_id', 'as': 'orders' } }, { // I keep user.id and the delivered orders '$project': { 'id': 1, 'filtered_orders': { '$filter': { 'input': '$orders', 'as': 'order', 'cond': {'$eq':['$$order.status', 'delivered']} } } } }, { // get rid of _id and get delivered order count as well as user.id '$project': { '_id': 0, 'id': 1, 'order_count': {'$size': '$filtered_orders'} } }, { // get order_count field between 200 and 350 '$match': { 'order_count' : {'$gte': 200, '$lte': 350} } } ]) ```
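For reference, what the pipeline computes per user can be sketched in plain Python (a toy in-memory model, with dicts standing in for documents). It also shows why the final `$match` is costly: `order_count` is a computed field that only exists after the earlier stages have run for every user, so no index can serve the range filter.

```python
def users_by_delivered_count(users, orders, lo=200, hi=350):
    # Count delivered orders per user_id ($lookup + $filter + $size).
    counts = {}
    for o in orders:
        if o["status"] == "delivered":
            counts[o["user_id"]] = counts.get(o["user_id"], 0) + 1
    # Keep users whose computed count falls in range (the $match stage).
    return [
        {"id": u["id"], "order_count": counts.get(u["id"], 0)}
        for u in users
        if lo <= counts.get(u["id"], 0) <= hi
    ]
```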
2018/03/14
730
2,687
<issue_start>username_0: In the code below, I have a defined script with an ID so I may access it from the DOM. As you can see, I request the defined script in the DOM during the definition of the script. How does this not recursively call itself when making the innerHTML call? My guess is that the DOM parses the defined script and places the ID into the DOM. During execution in the browser the JS gets the element by the ID and loads the HTML from the script. I feel like a recursive call would result because during execution on the browser side, we need to get the HTML via the innerHTML method, which would execute the JS script again. Question: Is my assumption correct or am I missing something about the DOM? If so, what is it that I do not know? ```html var x = 1; var y = 2; if (x != y) { var html\_code = document.getElementById("myScript").innerHTML; //More JavaScript Below here console.log(html\_code); } ```<issue_comment>username_1: Your assumption that a request to `.innerHTML` will rerun the JavaScript is wrong. It will just return the text of the element. Upvotes: 2 <issue_comment>username_2: That code returns the current **HTML** of an element. The rendering/parsing/compiling is done by the browser itself, or by whatever middleware parses/renders/compiles the HTML code, scripts, etc. ```html var x = 1; var y = 2; if (x != y) { var html\_code = document.getElementById("myScript").innerHTML; console.log(html\_code) } ``` Element.innerHTML ================= > > The Element property `innerHTML` is used to get or set a **string** representing serialized HTML describing the element's descendants. > > > So, that attribute won't execute any compiling process or script. 
Upvotes: 0 <issue_comment>username_3: When you store the element's `innerHTML` into the variable `html_code`, the text that is inside the `<script>` tag -- that is, the actual script *itself* -- is returned, and it is displayed as text in the console. But just displaying that code won't actually execute it. Now, if you were to use `eval()` on your `html_code` variable, *that* would execute the code. See the below example: ``` var x = 1; var y = 2; if (x != y) { var html\_code = document.getElementById("myScript2").innerHTML; //More JavaScript Below here eval(html\_code); } alert("test"); ``` Upvotes: 1
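The distinction the answers draw — reading source text versus executing it — holds in any language. A short Python analogy (analogy only; the string stands in for the `innerHTML` value):

```python
script_text = 'result = 2 + 2'
# Reading or printing the text is inert, just like element.innerHTML:
assert isinstance(script_text, str)
# Only an explicit exec/eval runs it, like eval() in the DOM example:
namespace = {}
exec(script_text, namespace)
print(namespace["result"])  # 4
```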
2018/03/14
1,545
5,928
<issue_start>username_0: I have developed an android application using OpenCV. The user interface has a bottom navigation view. When I pressed items in the Bottom Navigation View it applies different filters to the JavaCameraView in real time. My problem is that the bottom navigation looks flat. I want to make the items like buttons, elevated from their positions, so they are not in the same plane as the rest of the User Interface. I am aware of the Bottom sheet, but that won't help me. I have already fully developed the app and finalized the design. I can't use any View other than Bottom Navigation View. Is there any XML attribute or any method I can use on the BottomNavigationView object to make the Items elevated from their positions? The navigation view: ```xml ``` The item code: ```xml xml version="1.0" encoding="utf-8"? ```<issue_comment>username_1: Using Customized class we can Draw Curved custom shapes in Curved Bottom Navigation View. By using Below XML ``` ``` For Java class is ``` public class CurvedBottomNavigationView extends BottomNavigationView { private Path mPath; private Paint mPaint; /** the CURVE_CIRCLE_RADIUS represent the radius of the fab button */ public final int CURVE_CIRCLE_RADIUS = 256 / 3; // the coordinates of the first curve public Point mFirstCurveStartPoint = new Point(); public Point mFirstCurveEndPoint = new Point(); public Point mFirstCurveControlPoint2 = new Point(); public Point mFirstCurveControlPoint1 = new Point(); //the coordinates of the second curve @SuppressWarnings("FieldCanBeLocal") public Point mSecondCurveStartPoint = new Point(); public Point mSecondCurveEndPoint = new Point(); public Point mSecondCurveControlPoint1 = new Point(); public Point mSecondCurveControlPoint2 = new Point(); public int mNavigationBarWidth; public int mNavigationBarHeight; public CurvedBottomNavigationView(Context context) { super(context); init(); } public CurvedBottomNavigationView(Context context, AttributeSet attrs) { super(context, 
attrs); init(); } public CurvedBottomNavigationView(Context context, AttributeSet attrs, int defStyleAttr) { super(context, attrs, defStyleAttr); init(); } private void init() { mPath = new Path(); mPaint = new Paint(); mPaint.setStyle(Paint.Style.FILL_AND_STROKE); mPaint.setColor(Color.WHITE); setBackgroundColor(Color.TRANSPARENT); } @Override protected void onLayout(boolean changed, int left, int top, int right, int bottom) { super.onLayout(changed, left, top, right, bottom); } @Override protected void onSizeChanged(int w, int h, int oldw, int oldh) { super.onSizeChanged(w, h, oldw, oldh); // get width and height of navigation bar // Navigation bar bounds (width & height) mNavigationBarWidth = getWidth(); mNavigationBarHeight = getHeight(); // the coordinates (x,y) of the start point before curve mFirstCurveStartPoint.set((mNavigationBarWidth / 2) - (CURVE_CIRCLE_RADIUS * 2) - (CURVE_CIRCLE_RADIUS / 3), 0); // the coordinates (x,y) of the end point after curve mFirstCurveEndPoint.set(mNavigationBarWidth / 2, CURVE_CIRCLE_RADIUS + (CURVE_CIRCLE_RADIUS / 4)); // same thing for the second curve mSecondCurveStartPoint = mFirstCurveEndPoint; mSecondCurveEndPoint.set((mNavigationBarWidth / 2) + (CURVE_CIRCLE_RADIUS * 2) + (CURVE_CIRCLE_RADIUS / 3), 0); // the coordinates (x,y) of the 1st control point on a cubic curve mFirstCurveControlPoint1.set(mFirstCurveStartPoint.x + CURVE_CIRCLE_RADIUS + (CURVE_CIRCLE_RADIUS / 4), mFirstCurveStartPoint.y); // the coordinates (x,y) of the 2nd control point on a cubic curve mFirstCurveControlPoint2.set(mFirstCurveEndPoint.x - (CURVE_CIRCLE_RADIUS * 2) + CURVE_CIRCLE_RADIUS, mFirstCurveEndPoint.y); mSecondCurveControlPoint1.set(mSecondCurveStartPoint.x + (CURVE_CIRCLE_RADIUS * 2) - CURVE_CIRCLE_RADIUS, mSecondCurveStartPoint.y); mSecondCurveControlPoint2.set(mSecondCurveEndPoint.x - (CURVE_CIRCLE_RADIUS + (CURVE_CIRCLE_RADIUS / 4)), mSecondCurveEndPoint.y); } @Override protected void onDraw(Canvas canvas) { super.onDraw(canvas); 
mPath.reset(); mPath.moveTo(0, 0); mPath.lineTo(mFirstCurveStartPoint.x, mFirstCurveStartPoint.y); mPath.cubicTo(mFirstCurveControlPoint1.x, mFirstCurveControlPoint1.y, mFirstCurveControlPoint2.x, mFirstCurveControlPoint2.y, mFirstCurveEndPoint.x, mFirstCurveEndPoint.y); mPath.cubicTo(mSecondCurveControlPoint1.x, mSecondCurveControlPoint1.y, mSecondCurveControlPoint2.x, mSecondCurveControlPoint2.y, mSecondCurveEndPoint.x, mSecondCurveEndPoint.y); mPath.lineTo(mNavigationBarWidth, 0); mPath.lineTo(mNavigationBarWidth, mNavigationBarHeight); mPath.lineTo(0, mNavigationBarHeight); mPath.close(); canvas.drawPath(mPath, mPaint); } } ``` You can download the complete source code from [Here](https://www.androidtutorialonline.com/curved-bottom-navigation-view/) [![enter image description here](https://i.stack.imgur.com/qHh9w.png)](https://i.stack.imgur.com/qHh9w.png) Upvotes: 1 <issue_comment>username_2: As elevation is not supported by menu items, you can design your icons with some highlights and shadows, as described in the Android UI guidelines, so they look slightly elevated. Refer to this Android guide on how to design menu icons. <https://developer.android.com/guide/practices/ui_guidelines/icon_design_menu> Check this image for understanding. [![enter image description here](https://i.stack.imgur.com/Xwi8J.png)](https://i.stack.imgur.com/Xwi8J.png) Upvotes: 0
2018/03/14
1,469
5,631
<issue_start>username_0: I'm writing recycler view which contains editable list of words. ViewHolder for RecyclerView contains 2 objects: editText and sound image icon. My idea is that when I push sound icon, I expect to hear the pronunciation of the word, which I'm realizing with the help of SDK's TextToSpeech class. To reduce amount of code I've created the follow class; ``` public class SpeechController { private String pronounce; private Context context; public TextToSpeech tts = new TextToSpeech(context, new TextToSpeech.OnInitListener() { @Override public void onInit(int status) { if (status != TextToSpeech.ERROR) { tts.setLanguage(Locale.UK); } } }); public SpeechController(Context context, String pronounce) { this.context = context; this.pronounce = pronounce; } public void speakOut() { if (Build.VERSION.SDK_INT >= Build.VERSION_CODES.LOLLIPOP) { tts.speak(pronounce, TextToSpeech.QUEUE_FLUSH, null, null); tts.stop(); tts.shutdown(); } else { tts.speak(pronounce, TextToSpeech.QUEUE_FLUSH, null); tts.stop(); tts.shutdown(); } } } ``` Then I create instance of this class in my adapter's onBindViewHolder method for recyclerView like that: ``` @Override public void onBindViewHolder(final ViewHolder holder, int position) { if (mGroupsVc != null) { GroupVc current = mGroupsVc.get(position); holder.nameView.setText(current.getNameGroup()); holder.imageView.setOnClickListener(new View.OnClickListener() { @Override public void onClick(View v) { SpeechController mSpeechController = new SpeechController(holder.nameView.getContext(), holder.nameView.getText().toString()); mSpeechController.speakOut(); } }); } else { // Covers the case of data not being ready yet. 
holder.nameView.setText("No Word"); } } ``` My application is get compiled but when I'm trying to click sound button NullPointerException appears and gives reference to these operators in both classes: ``` SpeechController mSpeechController = new SpeechController(holder.nameView.getContext(), holder.nameView.getText().toString()); public TextToSpeech tts = new TextToSpeech(context, ``` I know the meaning of this Exception but I don't where I'm getting wrong with the initialization of object. I need your help to define it. Below I affix the complete error-logcat ``` 03-14 17:48:52.168 21719-21719/com.hfad.singleton E/AndroidRuntime: FATAL EXCEPTION: main java.lang.NullPointerException at android.speech.tts.TextToSpeech.(TextToSpeech.java:606) at android.speech.tts.TextToSpeech.(TextToSpeech.java:582) at android.speech.tts.TextToSpeech.(TextToSpeech.java:567) at com.hfad.singleton.groupsActivityController.SpeechController.(SpeechController.java:14) at com.hfad.singleton.adapter.GroupsVcAdapter$1.onClick(GroupsVcAdapter.java:52) at android.view.View.performClick(View.java:4421) at android.view.View$PerformClick.run(View.java:17903) at android.os.Handler.handleCallback(Handler.java:730) at android.os.Handler.dispatchMessage(Handler.java:92) at android.os.Looper.loop(Looper.java:213) at android.app.ActivityThread.main(ActivityThread.java:5225) at java.lang.reflect.Method.invokeNative(Native Method) at java.lang.reflect.Method.invoke(Method.java:525) at com.android.internal.os.ZygoteInit$MethodAndArgsCaller.run(ZygoteInit.java:741) at com.android.internal.os.ZygoteInit.main(ZygoteInit.java:557) at dalvik.system.NativeStart.main(Native Method) ```<issue_comment>username_1: Initialize `TTS` after context is set . 
``` public TextToSpeech tts; private Context context; private String pronounce; public SpeechController(Context context, String pronounce) { this.context = context; this.pronounce = pronounce; initTTS(); } private void initTTS(){ tts = new TextToSpeech(context, new TextToSpeech.OnInitListener() { @Override public void onInit(int status) { if (status != TextToSpeech.ERROR) { tts.setLanguage(Locale.UK); } } }); } ``` Upvotes: 1 <issue_comment>username_2: When you create the SpeechController object in the adapter: ``` SpeechController mSpeechController = new SpeechController(holder.nameView.getContext(), holder.nameView.getText().toString()); mSpeechController.speakOut(); ``` the field initialiser ``` public TextToSpeech tts = new TextToSpeech(context, ... ``` runs *before* the constructor body assigns `context`, so `TextToSpeech` is constructed with a null context and throws the NPE (note that the stack trace points at the `TextToSpeech` constructor, reached from the `SpeechController` constructor). So what you **could** do is initialise the `tts` object in the constructor of the SpeechController object, after `context` has been assigned: ``` private String pronounce; private Context context; public TextToSpeech tts; public SpeechController(Context context, String pronounce) { this.context = context; this.pronounce = pronounce; tts = new TextToSpeech(context, new TextToSpeech.OnInitListener() { @Override public void onInit(int status) { if (status != TextToSpeech.ERROR) { tts.setLanguage(Locale.UK); } } }); } ``` This fixes the NPE crash because `TextToSpeech` now receives a non-null context. Upvotes: 1 [selected_answer]
2018/03/14
564
2,128
<issue_start>username_0: In MongoDB v3.4, [views](https://docs.mongodb.com/manual/core/views/) were added as a feature. However, I haven't been able to find any resources for how I'd use a view I created in a Node.js application. How would I go about doing this, specifically for an aggregation view?<issue_comment>username_1: I also found this to be unclear. Confusingly, you need to use [db.createCollection](http://mongodb.github.io/node-mongodb-native/3.0/api/Db.html#createCollection), unlike the `createView` command in the MongoDB shell. For example: ```js db.createCollection('myView', { viewOn: 'myCollection', pipeline: [], }); ``` Where `pipeline` is an [Aggregation Pipeline](https://docs.mongodb.com/manual/reference/operator/aggregation/). You can then access your view in the same way as a collection: ```js db.collection('myView').find(); ``` Upvotes: 3 <issue_comment>username_2: I managed to do it as @username_1 described. Here's the complete code: ``` const MongoClient = require('mongodb').MongoClient; const assert = require('assert'); const DB_CONFIG = require('../../server/constants/database'); /* * Requires the MongoDB Node.js Driver * https://mongodb.github.io/node-mongodb-native */ const agg = [ { '$group': { '_id': '$entityGroupId', 'totalVisits': { '$sum': '$visits' } } }, { '$sort': { 'totalVisits': -1 } }, { '$lookup': { 'from': 'entities', 'localField': '_id', 'foreignField': 'entityGroupId', 'as': 'entityGroup' } } ]; MongoClient.connect( DB_CONFIG.URL, {useNewUrlParser: true, useUnifiedTopology: true}, async function (connectErr, client) { assert.equal(null, connectErr); const db = client.db('weally'); // db.createView("entityGroupByVisits", 'complaintvisitcounts', agg) await db.createCollection('entityGroupByVisits', { viewOn: 'complaintvisitcounts', pipeline: agg, }); client.close(); }); ``` Upvotes: 1
2018/03/14
1,187
5,034
<issue_start>username_0: I have an iOS and Android app that supports +5 languages, so I thought it would be great if I could put the localized string files on a server and once the user open the app, I download the file and load it in the app. The challenge so far was in iOS, since I need to reference all labels, buttons .... etc from my XIB and Storyboard files to be able to assign the corresponding text/placeholder to all of the UI elements. Any approach/technology/idea about doing like one file on a server that contain the localization for both platforms and to be easy to implement since the app has reached 50+ screens.<issue_comment>username_1: I'm neither an Android nor iOS developer, but I have experience doing the file conversion part of this. ### Approach / technology / idea You're no doubt looking for a product to do the heavy lifting, but I'm going to try and answer your question as though you're rolling your own. It's not hard to write a file converter from iOS to Android strings (or vice versa) but the real challenge is how to manage the workflow and how to keep a common base of feature support that works on both platforms. Assuming your process begins by extracting translatable strings from your iOS code base, let's start with the translatable file formats you have available to you. Xcode supports XLIFF 1.2 which is an [ISO standard](https://www.iso.org/standard/71490.html) and is used by various translation products and translation industry professionals. This would be a sensible "source" format to use in your workflow. i.e. export your strings to XLIFF files and get them translated. You could then do lots of things with these files quite easily. Namely importing back into Xcode and converting to Android XML files. ### Conversion From your translated XLIFF files you could generate a set of Android strings.xml files. Writing a converter is not particularly hard, but it depends what features you need. 
In the simplest instance you would pull a set of key/value pairs from XLIFF and render them out as Android strings. The following (simplified) XLIFF in the Xcode style: ``` Foo ``` is equivalent to the following Android strings.xml ``` Foo ``` I'm not including any code here. If your preferred language supports XML parsing, it's not much of a leap to extract `{"foo-message": "Foo"}` and render it out. Note that your translated XLIFF files will look a little different. You'll be pulling out the elements instead of the . ### Type mapping As already mentioned in the comments, you would need to convert your formatted strings such that `"%@"` becomes `"%s"` and so on. This can be done with RegExp, but again the complexity depends on the range of what you need to support. For example, can you just stick to string/object placeholders or are you using fancy things like floating point precision, or custom padding characters? Avoiding them if you can will make conversion easier. Translatable strings tend not to use the full power of printf syntax, so you may be able to get away with a basic RegExp replacement that just looks for `"%@"` symbols. One thing to beware of is that multiple arguments must be numbered in Android. (`"%@ %@"` would have to become `"%1$s %2$s"`). You could avoid that headache by always using the argument number in the source format. (I believe `"%1$@ %2$@"` is legal in Objective C, but I'm not an iOS dev). Watch out for false positives like `"20% off"` (this is actually a space-padded octal `"% o"`). In Android you could flag the XML string element as `formatted="false"` but not sure how you'd mark that in iOS. ### Plural forms This the trickiest problem I've come across. Although both formats support plural forms, iOS is a bit of a nightmare. Android is neat: ``` 1 apple %d apples ``` But XLIFF has no *native* support for plural variations, so the equivalent in Xcode flavoured XLIFF is much more complex. 
Recent support added to Xcode uses separate translation units for each "quantity". These live within a block that maps to a [.stringsdict file](https://developer.apple.com/library/content/documentation/MacOSX/Conceptual/BPInternational/StringsdictFileFormat/StringsdictFileFormat.html). I'm not going to get into it here. ### Cloud hosting I can't advise on how best to dynamically load translations from files stored remotely, but I would consider whether you want your app to be unavailable when offline, and perhaps more importantly whether you want to introduce an external dependency. I can see the advantage of being able to update translations without shipping a new release of your app, but Xcode's import function seems to take care of a lot of the hard work, particularly when it comes to plural forms. It also saves you the hassle of merging new source strings into your translated files. To me it makes sense to compile them in, but then I've never experienced Apple's approval process :) Upvotes: 1 <issue_comment>username_2: You can use Applanga for that type of work. Upvotes: 0
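As a rough illustration of the type-mapping step discussed earlier in this thread, here is a small Python sketch. The function name is made up, and it deliberately handles only plain `%@` / `%n$@` placeholders — octal false positives like `"% o"` and the other printf conversions are out of scope:

```python
import re

def ios_to_android_format(s):
    """Rewrite iOS-style %@ / %n$@ placeholders as Android %n$s.

    Bare %@ placeholders are numbered left to right, since Android
    requires explicit indices once a string has several arguments;
    already-numbered placeholders keep their index.
    """
    counter = 0

    def repl(match):
        nonlocal counter
        index = match.group(1)
        if index is None:
            counter += 1
            index = str(counter)
        return "%" + index + "$s"

    return re.sub(r"%(?:(\d+)\$)?@", repl, s)

print(ios_to_android_format("%@ %@"))  # %1$s %2$s
```

A real converter would also need to decide what to do with `%d`, `%f`, padding flags, and the `%%` escape — the point here is only the shape of the rewrite.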
2018/03/14
462
1,871
<issue_start>username_0: I can't seem to understand why my `fullName` variable returns `undefined` in console. It is returning the text from the input in the alert box in the click function. But outside the click function, `fullName` returns `undefined`. Here is the code: ``` var fullName, firstName, middleName, lastName, middleInitials; $(document).ready(function() { $("#nameButton").click(function(event){ // upon clicking the event file fullName = $("#name").val().toString(); // that's a jquery function alert("fullname" + fullName + ", typeof = " + typeof fullName); // 20:44 shows correct name console.log(typeof fullName); // does not display in console console.log(fullName); // does not display in console }); console.log("fullName = " + fullName); // 20:44 Shows undefined }); // the jquery one ```<issue_comment>username_1: When `$(document).ready(function() {` executes the `fullname` variable is `undefined`. When you click the button it will set the variable but by then `$(document).ready(function() {` has already executed...which means `console.log("fullName = " + fullName);` has already executed. Upvotes: 2 <issue_comment>username_2: If the element is loaded when the document is ready, you should define the variable before the click happens if you want to see its content. ``` $(document).ready(function() { // define the full name here. fullName = $("#name").val().toString(); // that's a jquery function $("#nameButton").click(function(event){ // upon clicking the event file alert("fullname" + fullName + ", typeof = " + typeof fullName); // 20:44 shows correct name console.log(typeof fullName); // does not display in console console.log(fullName); // does not display in console }); console.log("fullName = " + fullName); // This should work now. }); ``` Upvotes: 1
2018/03/14
850
2,944
<issue_start>username_0: This is PDP-11 code mixing C and assembly. In the below, u.u_rsav is an array pointer, `savu(u.u_rsav);` The assembly code for this function is ``` _savu: bis $340,PS mov (sp)+,r1 mov (sp),r0 mov sp,(r0)+ mov r5,(r0)+ bic $340,PS jmp (r1) ``` It seems like before the caller enters the procedure, it first pushes the argument, then pushes the return PC value. So, r1 stores the PC and r0 stores the argument. My puzzle is that sp (the stack pointer) is not restored to its original value before the assembly code jumps back to the call site. It still points to the position where the argument was stored on the stack.<issue_comment>username_1: It is not C, but the [Application Binary Interface](https://en.wikipedia.org/wiki/Application_binary_interface) conventions (you often have *several* language implementations or compilers following the *same* ABI, and in the past there were various compilers with *different* ABI conventions on the same system). And it is architecture and operating system specific. BTW, [calling conventions](https://en.wikipedia.org/wiki/Calling_convention) are part of ABIs. For examples, see those [related to x86](https://github.com/hjl-tools/x86-psABI/wiki/X86-psABI). It is up to you to find the ABI conventions for the archeological PDP11 computer (and compiler, and OS), e.g. see the [PDP11 FAQ](http://ftp.dbit.com/pub/pdp11/faq/faq.html) and [C calling conventions](https://www.bell-labs.com/usr/dmr/www/clcs.html). Some ABIs used the stack, with various caller-save / callee-save conventions on registers. > > My puzzle is that sp (the stack pointer) is not restored to its original value before the assembly code jumps back to the call site. > > > Some ABIs or calling conventions require the stack pointer to be restored by the called function. Others want the calling function to do that.
Upvotes: -1 <issue_comment>username_2: In C, especially K&R C as is likely to be used by any PDP-11 compiler, the called function cannot know how many arguments the calling function placed on the stack. This is how var args functions used to work. For example, `printf` would be declared in `stdio.h` like this: ``` int printf(); ``` And the definition would start like this: ``` int printf(fmt) char *fmt; { /* function body */ } ``` And the caller could then just do (for example) ``` printf("%d %d\n", a, b); ``` Thus, it has to be the responsibility of the *calling function* to remove the arguments from the stack, not the called function. To make things clearer and that it is not just variadic functions, in K&R C, the following was perfectly legal and would print 3. ``` int add(); int main() { int sum; sum = add(1, 2, 3, 4); printf("%d\n", sum); return 0; } int add(a, b) int a; int b; { return a + b; } ``` Upvotes: 3
2018/03/14
1,101
2,932
<issue_start>username_0: I have a sorted time-series `my_ts`, and I need to find the pair-wise diffs (below some threshold called `horizon`) between **all** the elements of the series (and not just between consecutive elements). I wrote the following code to do that, but as you can see, it uses *itertools*, which feels unnecessary within the pandas environment. ``` from itertools import combinations my_ts = pd.Series(pd.date_range('1/1/2018', periods=6, freq='d')) def count_gaps(ts, horizon): # returns counts of all gaps shorter than horizon diffs = ((t2-t1) for (t1, t2) in combinations(ts, 2) if t2-t1<=horizon) return pd.Series(diffs).value_counts() count_gaps(my_ts, horizon=pd.to_timedelta(3, unit='d')) ``` Any suggestions for a more pandas-idiomatic (and hopefully faster) solution?<issue_comment>username_1: I think you can ``` s=pd.DataFrame(columns=my_ts,index=my_ts).apply(lambda x : x.name-x.index) s.mask((s<pd.Timedelta('1 days'))|(s>pd.Timedelta('3 days'))).stack().value_counts() Out[528]: 1 days 5 2 days 4 3 days 3 dtype: int64 ``` Upvotes: 2 [selected_answer]<issue_comment>username_2: Here is a sensibly faster (about x100 in a quick benchmark below) NumPy solution: ``` import numpy as np import pandas as pd def count_gaps1(ts, horizon): diffs = ts.values[:, np.newaxis] - ts.values[np.newaxis, :] idx, c = np.unique(diffs[(diffs > np.timedelta64(0)) & (diffs <= horizon.to_timedelta64())], return_counts=True) return pd.Series(c, idx) my_ts = pd.Series(pd.date_range('1/1/2018', periods=6, freq='d')) print(count_gaps1(my_ts, horizon=pd.to_timedelta(3, unit='d'))) ``` Output: ``` 1 days 5 2 days 4 3 days 3 dtype: int64 ``` IPython `%timeit` benchmark: ``` import itertools import numpy as np import pandas as pd def count_gaps_original(ts, horizon): diffs = ((t2-t1) for (t1, t2) in itertools.combinations(ts, 2) if t2-t1<=horizon) return pd.Series(diffs).value_counts() def count_gaps_Wen(ts, horizon): s = pd.DataFrame(columns=my_ts,index=my_ts).apply(lambda x : x.name-x.index) return
s.mask((s < pd.Timedelta('1 days')) | (s > pd.Timedelta('3 days'))).stack().value_counts() def count_gaps_username_2(ts, horizon): diffs = ts.values[:, np.newaxis] - ts.values[np.newaxis, :] idx, c = np.unique(diffs[(diffs > np.timedelta64(0)) & (diffs <= horizon.to_timedelta64())], return_counts=True) return pd.Series(c, idx) my_ts = pd.Series(pd.date_range('1/1/2018', periods=100, freq='d')) %timeit count_gaps_original(my_ts, horizon=pd.to_timedelta(3, unit='d')) >>> 145 ms ± 1.87 ms per loop (mean ± std. dev. of 7 runs, 10 loops each) %timeit count_gaps_Wen(my_ts, horizon=pd.to_timedelta(3, unit='d')) >>> 44.1 ms ± 942 µs per loop (mean ± std. dev. of 7 runs, 10 loops each) %timeit count_gaps_username_2(my_ts, horizon=pd.to_timedelta(3, unit='d')) >>> 409 µs ± 9.19 µs per loop (mean ± std. dev. of 7 runs, 1000 loops each) ``` Upvotes: 0
2018/03/14
2,556
9,216
<issue_start>username_0: I'd like to use Visual Studio Code as my editor for Flutter development, but I don't know how to get the emulator going. I've installed Visual Studio Code on [Ubuntu 17.10](https://en.wikipedia.org/wiki/Ubuntu_version_history#Ubuntu_17.10_.28Artful_Aardvark.29) (<NAME>). I followed the first half of instructions as outlined on the [Flutter: Get Started](https://flutter.io/get-started/test-drive/#vscode) page (Create new app). Then I ran into trouble in the second half: > > **Run the app** > > > 1. Make sure a target device is selected in the lower, right-hand corner of VS Code > 2. Press the F5 button on the keyboard, or invoke Debug>Start Debugging > 3. Wait for the app to launch > 4. If everything works, after the app has been built, you should see your starter app on your device or simulator: > > > The problem is that in that bottom right-hand corner is "No Devices." How do I connect a device? I can't seem to find instructions anywhere.<issue_comment>username_1: You can connect an Android phone via a USB cable, and it will then show the device in the bottom bar. (Please note [ADB](https://en.wikipedia.org/wiki/Android_software_development#Android_Debug_Bridge_(ADB)) must be installed. Click [here](https://www.linuxbabe.com/ubuntu/how-to-install-adb-fastboot-ubuntu-16-04-16-10-14-04) for more.) Or you can install Android Studio, set up the emulator from there, and run the emulator. Then Visual Studio Code will recognise the emulator and show it in the bottom bar. Upvotes: 4 <issue_comment>username_2: Alternatively, you can enable developer mode on a physical device and connect to it over ADB. To enable developer mode, go to *Phone Settings* → *About Phone* → tap *Build number* seven times. Once you have it enabled and have the device connected, you will start seeing the device in Visual Studio Code.
Upvotes: 2 <issue_comment>username_3: For those people using a Mac you can go to your terminal and type ``` $ open -a Simulator.app ``` and this command will open the simulator. After that, just go to the Debug option and tap on "**Start Debugging**" [![enter image description here](https://i.stack.imgur.com/F7Gji.png)](https://i.stack.imgur.com/F7Gji.png) **If you want to test with an Android Emulator:** What I did was to go first to Android Studio and open a virtual Device with AVD Manager. After that you'll see another devices in your Visual Studio Code [![enter image description here](https://i.stack.imgur.com/P7rzT.png)](https://i.stack.imgur.com/P7rzT.png) In the bottom right you'll see now that you have 2 devices connected. Now, you can test with any of this devices. Upvotes: 4 <issue_comment>username_4: To select a device you must first start both, Android Studio and your virtual device. Then Visual Studio Code will display that virtual device as an option. Upvotes: 2 <issue_comment>username_5: First, you have to install Android Studio and Xcode to create a phone emulator. In Visual Studio Code you can use the **[Android iOS Emulator](https://marketplace.visualstudio.com/items?itemName=DiemasMichiels.emulate)** plugin to set the path of emulator to run. Upvotes: 2 <issue_comment>username_6: **You do not need Android Studio to create or run a virtual device. Just use sdkmanager and avdmanager from the Android SDK tools.** Use the sdkmanager to download a system image of Android for x86 system. *E.g., sdkmanager "system-images;android-21;default;x86\_64"* Then create a new virtual device using avdmanager. *e.g., avdmanager create avd --name AndroidDevice01 --package "system-images;android-21;default;x86\_64"* Then run the new virtual device using emulator. If you don't have it just install it using the sdkmanager. *E.g., emulator -avd AndroidDevice01* If you restart Visual Studio Code and load your Flutter project. 
The new device should show up at the bottom right of the footer. Upvotes: 5 <issue_comment>username_7: From [`version 2.13.0` of Dart Code](https://dartcode.org/releases/v2-13/#v2130), emulators can be launched directly from within Visual Studio Code, but *this* feature relies on support from the Flutter tools which means it will only show emulators when using **a very recent Flutter SDK**. Flutter’s master channel already has this change, but it may take a little longer to filter through to the development and beta channels. I tested this feature, and it worked very well on Flutter version 0.5.6-pre.61 (master channel). [![Enter image description here](https://i.stack.imgur.com/Jnx7B.png)](https://i.stack.imgur.com/Jnx7B.png) Upvotes: 8 [selected_answer]<issue_comment>username_8: Visual Studio Code needs to know where Android SDK is installed on your system. On Windows, set the "ANDROID\_SDK\_ROOT" environment variable to the Android SDK root folder. Plus: Always check the "OUTPUT" and "DEBUG CONSOLE" tabs for errors and information. Upvotes: 3 <issue_comment>username_9: You do not need to create a virtual device using Android Studio. You can use your Android device running on [Android 8.1](https://en.wikipedia.org/wiki/Android_Oreo) (Oreo) or higher. All you have to do is to activate developer settings, then enable USB DEBUGGING in the developer settings. Your device will show at the bottom right side of Visual Studio Code. Without enabling the USB debugging, the device may not show. [Enter image description here](https://i.stack.imgur.com/O9pFW.png) Upvotes: 2 <issue_comment>username_10: The following steps were done: 1. installed [Genymotion](https://stackoverflow.com/tags/genymotion/info) 2. configured a device and ran it 3. 
in Visual Studio Code, lower right corner, the device shows Upvotes: 2 <issue_comment>username_11: You can use the '[Android iOS Emulator](https://marketplace.visualstudio.com/items?itemName=DiemasMichiels.emulate)' plugin and *add* the Android Studio emulator script to your settings in Visual Studio Code: > > **Mac**: `emulator.emulatorPath": "~/Library/Android/sdk/tools/emulator` > > > > > **Windows**: `emulator.emulatorPath": "\\Sdk\\emulator\\emulator.exe` > > > > > **Linux**: `emulator.emulatorPath": "~/Documents/SDK/tools` > > > Your Visual Studio Code settings are found here: Menu *File* → *Preferences* → *Setting* → *User Setting* → *Extensions* → *Emulator Configuration*. Open command palette, `Cmd` + `Shift` + `P` → type "Emulator" Upvotes: 3 <issue_comment>username_12: Do `Ctrl` + `Shift` + `P` Then type `Flutter:launch emulator` or run this command in your Visual Studio Code terminal `flutter emulators` then see the result if you have installed any emulator it will show you. Then to run one of them, use `flutter emulators --launch your_emulator_id` in my case `flutter emulators --launch Nexus 6 API 28` But if you haven't installed any emulator you can install one with `flutter emulators --create [--name xyz]`, and then run your project `flutter run` inside the root directory of the project. Upvotes: 5 <issue_comment>username_13: Set "ANDROID\_SDK\_ROOT" in environment variables. This solved my problem. Upvotes: 2 <issue_comment>username_14: Recently I switched from Windows 10 home to Elementary OS. Visual Studio Code didn't start from `Ctrl` + `Shift` + `P`. Launch the emulator instead of that. I just clicked the bottom in the right corner no device → Start emulator. It worked fine. 
Upvotes: 1 <issue_comment>username_15: Genymotion settings -> Select ADB Tab -> Select > > Use custom Android SDK tools > -> Add Android SDK Path (Ex: C:\Users\randika\AppData\Local\Android\sdk) > > > [![Genymotion Settings View](https://i.stack.imgur.com/Yq3ch.jpg)](https://i.stack.imgur.com/Yq3ch.jpg) Upvotes: 1 <issue_comment>username_16: For me, when I was running the "flutter doctor" command from the Ubuntu command line - it showed me the below error. > > [✗] Android toolchain - develop for Android devices > ✗ Unable to locate Android SDK. > > > From this error, it is obvious that "flutter doctor" was not able to find the "Android SDK" and the reason for that was **my Android SDK was downloaded in a custom location on my Ubuntu machine.** So we must need to tell "flutter doctor" about this custom Android location, using the below command, ``` flutter config --android-sdk /home/myhome/Downloads/softwares/android-sdk/ ``` You need to replace `/home/myhome/Downloads/softwares/android-sdk/` with the path to your custom location/place where the Android SDK is available. Once this is done, rerun "flutter doctor" and now it has detected the Android SDK location and hence I could run avd/emulator by typing "flutter run". Upvotes: 2 <issue_comment>username_17: Press ctrl+shift+p and choose flutter: Launch emulator. Upvotes: 2 <issue_comment>username_18: Do **Ctrl + Shift + P** Type- **Flutter:launch emulator** Upvotes: 3 <issue_comment>username_19: You can see the bottom menu in VScode, click on this button and you will able to see all the available devices. [![VScode](https://i.stack.imgur.com/8kelD.png)](https://i.stack.imgur.com/8kelD.png) Upvotes: 4 <issue_comment>username_20: Running the following command on the terminal solved the issue for me: `flutter create .` Upvotes: -1
2018/03/14
1,111
3,943
<issue_start>username_0: I have queries: 1) `SELECT * FROM c where c.id = '0060f06e-260c-4dc7-9496-4a52e1a512c0'` Request charge for it is 3 RUs. 2) `SELECT * FROM c where c.clientId = '0060f06e-260<KEY>'` Request charge for it is 3 RUs. 3) `SELECT * FROM c where c.id = '0060f06e-260c-4dc7-9496-4a52e1a512c0' OR c.clientId = '0060f06e-<KEY>'` Request charge for it is **147 RUs**. Is this normal behaviour? I don't see any reason for such a huge discrepancy. UPDATE: in case 3), the query returns 2 small documents (a few hundred bytes each) UPDATE2: I am testing against the Azure portal.<issue_comment>username_1: > > Is this normal behaviour? I don't see any reason for such a huge discrepancy. > > > If the request charge you mention is in RUs, it is **abnormal behaviour**. I agree that **there should be no such discrepancy.** I also tested against the Azure portal, but I can't reproduce it on my side. You could also test it with the Azure portal directly. If you still have questions about the request charge, you could create a [support request](https://learn.microsoft.com/en-us/azure/azure-supportability/how-to-create-azure-support-request). [![enter image description here](https://i.stack.imgur.com/DCiFI.png)](https://i.stack.imgur.com/DCiFI.png) **OR** [![enter image description here](https://i.stack.imgur.com/Lammo.png)](https://i.stack.imgur.com/Lammo.png) For more information about RUs, please refer to [Request Units in Azure Cosmos](https://learn.microsoft.com/en-us/azure/cosmos-db/request-units#request-unit-considerations). The following is a snippet from [Request unit considerations](https://learn.microsoft.com/en-us/azure/cosmos-db/request-units#request-unit-considerations): > > When estimating the number of request units to reserve for your Azure Cosmos DB container, it is important to take the following variables into consideration: > > > * **Item size**. As size increases, the units consumed to read or write the data also increase.
> * **Item property count**. Assuming default indexing of all properties, the units consumed to write a document/node/entity increase as the property count increases. > * **Data consistency**. When using data consistency levels of Strong or Bounded Staleness, additional units are consumed to read items. > * **Indexed properties**. An index policy on each container determines which properties are indexed by default. You can reduce your request unit consumption by limiting the number of indexed properties or by enabling lazy indexing. > * **Document indexing**. By default each item is automatically indexed. You consume fewer request units if you choose not to index some of your items. > * **Query patterns**. The complexity of a query impacts how many Request Units are consumed for an operation. The number of predicates, nature of the predicates, projections, number of UDFs, and the size of the source data set all influence the cost of query operations. > * **Script usage**. As with queries, stored procedures and triggers consume request units based on the complexity of the operations being performed. As you develop your application, inspect the request charge header to better understand how each operation is consuming request unit capacity. > > > Upvotes: 1 <issue_comment>username_2: OK, I've got an answer from the support team that might help others. Here's what they say: > > “This has to do with our current index where we cannot serve both ‘id’ and ‘clientId’ from the index at the same time. Basically ‘id’ is considered a primary key and is indexed differently from ‘clientId’. If either of the two expressions is specified all by itself, then it will be served from the index. However, specifying them together will result in picking only one of them and dropping the other.” > > > The issue should be resolved in the coming months (by summer). Upvotes: 3 [selected_answer]
2018/03/14
1,117
3,660
<issue_start>username_0: I have an array, which looks like this: ``` const persons = [ { name: "Joe", animals: [ {species: "dog", name: "Bolt"}, {species: "cat", name: "Billy"}, ] }, { name: "Bob", animals: [ {species: "dog", name: "Snoopy"} ] } ]; ``` Now I want to filter based on the species. For example, every person that has a cat should be returned: ``` const result = [ { name: "Joe", animals: [ {species: "dog", name: "Bolt"}, {species: "cat", name: "Billy"}, ] } ]; ``` I have tried with the `filter()` method like this: ``` const result = persons.filter(p => p.animals.filter(s => s.species === 'cat')) ``` But this doesn't return the desired result (it returns both persons). How can I filter the array based on an attribute of a nested array?<issue_comment>username_1: Your inner filter still returns a "truthy" value (an empty array is truthy) for the dog person. Add `.length` so that no results becomes `0` ("falsy") `const result = persons.filter(p => p.animals.filter(s => s.species === 'cat').length)` **Edit:** Per comments and several other answers, since the goal is to get a truthy value from the inner loop, `.some` would get the job done even better because it directly returns true if any items match. `const result = persons.filter(p => p.animals.some(s => s.species === 'cat'))` Upvotes: 5 [selected_answer]<issue_comment>username_2: ```js const persons = [ { name: "Joe", animals: [ {species: "dog", name: "Bolt"}, {species: "cat", name: "Billy"}, ] }, { name: "Bob", animals: [ {species: "dog", name: "Snoopy"} ] } ]; Filter = function(arr, a){ return arr.filter(t=>t.animals.filter(y=>y.species==a).length>0); } console.log(Filter(persons, 'cat')) ``` Upvotes: 1 <issue_comment>username_3: You can use the `filter()` method with `some()` to check whether any of the objects in the animals array has `species == 'cat'`.
```js const persons = [{"name":"Joe","animals":[{"species":"dog","name":"Bolt"},{"species":"cat","name":"Billy"}]},{"name":"Bob","animals":[{"species":"dog","name":"Snoopy"}]}] const result = persons.filter(({animals}) => { return animals.some(({species}) => species == 'cat') }) console.log(result) ``` Upvotes: 0 <issue_comment>username_4: This should do the trick: ``` persons.filter((person) => { return person.animals.filter((animal) => { return animal.species === 'cat'; }).length > 0; }); ``` Add the check on length, as filter returns an array, not a boolean. Upvotes: 0 <issue_comment>username_5: You might want to use `some`: ``` persons.filter(p => p.animals.some(s => s.species === 'cat')) ``` Upvotes: 2 <issue_comment>username_6: You haven't checked the length of the second filter's result. `filter` only includes items for which the callback returns a truthy value, so we need to give the outer filter a way of returning true when the inner one finds something. We can do that by checking whether the returned array's length is > 0: ```js const persons = [ { name: "Joe", animals: [ {species: "dog", name: "Bolt"}, {species: "cat", name: "Billy"}, ] }, { name: "Bob", animals: [ {species: "dog", name: "Snoopy"} ] } ]; const filtered = persons.filter(p => p.animals.filter(a => a.species === "cat").length > 0) console.log(filtered) ``` Upvotes: 0 <issue_comment>username_7: This should work! ``` const result = persons.filter(person => { let innerResult = person.animals.filter(animal => { return animal.species === 'cat'; }); return innerResult.length > 0; }); ``` Upvotes: 0
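The answers above all hinge on the same fact: an empty array is truthy in JavaScript, so an inner `filter()` that matches nothing still keeps the person when its return value is used directly as a predicate. A minimal demonstration of that pitfall (these literals are illustrative only):

```js
// An empty array is truthy, so using the inner filter's result directly
// as the outer predicate keeps every person, even the "dog only" one:
console.log(Boolean([]));          // true

// Its length, however, is 0, which is falsy — hence the .length fix:
console.log([].length);            // 0

// some() returns an actual boolean and short-circuits on the first match:
console.log([].some(() => true));  // false
console.log([{species: 'cat'}].some(a => a.species === 'cat')); // true
```

This is why `.some` reads best here: it expresses the intent ("does any animal match?") and returns `true`/`false` directly.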
2018/03/14
1,302
4,407
<issue_start>username_0: I am interested in concatenating many files together based on their numeric order, and also removing the first line of each. e.g. chr1\_smallfiles then chr2\_smallfiles then chr3\_smallfiles.... etc (each without the header) Note that `chr10_smallfiles` needs to come after `chr9_smallfiles` -- that is, this needs to be *numeric* sort order. --- When run separately, the two commands awk and ls -v1 each do the job properly, but when I put them together, it doesn't work. Please help, thanks! ``` awk 'FNR>1' | ls -v1 chr*_smallfiles > bigfile ```<issue_comment>username_1: The issue is with the way that you're trying to pass the list of files to awk. At the moment, you're piping the output of awk to ls, which makes no sense. Bear in mind that, as mentioned in the comments, ls is a tool for interactive use, and in general its output shouldn't be parsed. If sorting weren't an issue, you could just use: ``` awk 'FNR > 1' chr*_smallfiles > bigfile ``` The shell will expand the glob `chr*_smallfiles` into a list of files, which are passed as arguments to awk. For each filename argument, all but the first line will be printed. Since you want to sort the files, things aren't quite so simple. If you're sure the full range of files exists, just replace `chr*_smallfiles` with `chr{1..99}_smallfiles` in the original command. 
--- Using some Bash-specific and GNU sort features, you can also achieve the sorting like this: ``` printf '%s\0' chr*_smallfiles | sort -z -n -k1.4 | xargs -0 awk 'FNR > 1' > bigfile ``` * `printf '%s\0'` prints each filename followed by a null byte * `sort -z` sorts records separated by null bytes * `-n -k1.4` does a numeric sort, starting from the 4th character (the numeric part of the filename) * `xargs -0` passes the sorted, null-separated output as arguments to awk --- Otherwise, if you want to go through the files in numerical order, and you're not sure whether all the files exist, then you can use a shell loop (although it'll be significantly slower than a single awk invocation): ``` for file in chr{1..99}_smallfiles; do # 99 is the maximum file number [ -f "$file" ] || continue # skip missing files awk 'FNR > 1' "$file" done > bigfile ``` Upvotes: 2 <issue_comment>username_2: You can also use `tail` to concatenate all the files without the header: ``` tail -q -n+2 chr*_smallfiles > bigfile ``` In case you want to concatenate the files in a **natural sort order** as described in your question, you can pipe the result of `ls -v1` to `xargs` using ``` ls -v1 chr*_smallfiles | xargs -d $'\n' tail -q -n+2 > bigfile ``` (Thanks to username_5) `xargs -d $'\n'` sets the delimiter to a newline `\n` in case a filename contains whitespace or quote characters Upvotes: 3 [selected_answer]<issue_comment>username_3: You can do it with a for loop like the one below, which works for me: ``` for file in chr*_smallfiles do tail -n +2 "$file" >> bigfile done ``` How does it work? The for loop reads all the files from the current directory matching the wildcard pattern `chr*_smallfiles` and assigns each file name to the variable `file`; `tail -n +2 "$file"` outputs all the lines of that file except the first line and appends them to the file `bigfile`. So finally all files are merged (except the first line of each) into one file, i.e. `bigfile`. 
Upvotes: 1 <issue_comment>username_4: Just for completeness, how about a `sed` solution? ``` for file in chr*_smallfiles do sed -n '2,$p' "$file" >> bigfile done ``` Hope it helps! Upvotes: 0 <issue_comment>username_5: Using a bash 4 associative array to extract only the numeric substring of each filename; sort those individually; and then retrieve and concatenate the full names in the resulting order: ``` #!/usr/bin/env bash case $BASH_VERSION in ''|[123].*) echo "Requires bash 4.0 or newer" >&2; exit 1;; esac # when this is done, you'll have something like: # files=( [1]=chr_smallfiles1.txt # [10]=chr_smallfiles10.txt # [9]=chr_smallfiles9.txt ) declare -A files=( ) for f in chr*_smallfiles.txt; do files[${f//[![:digit:]]/}]=$f done # now, emit those indexes (1, 10, 9) to "sort -n -z" to sort them as numbers # then read those numbers, look up the filenames associated, and pass to awk. while read -r -d '' key; do awk 'FNR > 1' <"${files[$key]}" done < <(printf '%s\0' "${!files[@]}" | sort -n -z) >bigfile ``` Upvotes: 2