Columns: date (string, length 10), nb_tokens (int64, 60 to 629k), text_size (int64, 234 to 1.02M), content (string, length 234 to 1.02M).
date: 2018/03/14 | nb_tokens: 1,496 | text_size: 5,153
<issue_start>username_0: I have an existing android app in the store, available for API 16+. I created a small module inside my app, using ARCore. If I update the app, and I use ARCore Required ( <https://developers.google.com/ar/distribute/> ), will my old APK still be available for devices that do not support ARCore? Let's say my current version code is 100. If I create an APK with ARCore Required, and version code 200, this new apk will be available for all devices that support ARCore, but apk version 100 will be served to all other devices, right? That's my understanding, I just wanted to confirm. Thanks. I guess the pain, for me, will be the fact that I will need to always create two builds, for example 101 and 201, and upload them both, when adding features that are common. However, I would prefer this, instead of serving the larger (maybe much larger) ARCore APK to all of my users.<issue_comment>username_1: 1.5m entries is not very big dataset. The dataset is sorted first. ``` proc sort data=have; by country currency id fixed performing; run; proc means data=have sum; var initial current; by country currency id fixed performing; output out=sum(drop=_:) sum(initial)=Initial sum(current)=Current; run; ``` Upvotes: 0 <issue_comment>username_2: Create an output data set from `Proc MEANS` and concatenate the variables in the result. MEANS with a BY statement requires sorted data. Your `have` does not. Concatenation of the aggregations key (those lovely categorical variables) into a single space separated key (not sure why you need to do that) can be done with `CATX` function. 
``` data have_unsorted; length country $2 currency $3 id 8 type $8 evaluation $20 initial current 8; input country currency ID type evaluation initial current; datalines; UK GBP 1 Fixed Performing 100 50 UK GBP 1 Fixed Performing 150 30 UK GBP 1 Fixed Performing 160 70 UK GBP 1 Floating Performing 150 30 UK GBP 1 Floating Performing 115 80 UK GBP 1 Floating Performing 110 60 UK GBP 1 Fixed Non-Performing 100 50 UK GBP 1 Fixed Non-Performing 120 30 ; run; ``` **Way 1 - MEANS with CLASS/WAYS/OUTPUT, post process with data step** The cardinality of the class variables *may* cause problems. ``` proc means data=have_unsorted noprint; class country currency ID type evaluation ; ways 5; output out=sums sum(initial current)= / autoname; run; data want; set sums; key = catx(' ',country,currency,ID,type,evaluation); keep key initial_sum current_sum; run; ``` **Way 2 - SORT followed by MEANS with BY/OUTPUT, post process with data step** BY statement requires sorted data. ``` proc sort data=have_unsorted out=have; by country currency ID type evaluation ; proc means data=have noprint; by country currency ID type evaluation ; output out=sums sum(initial current)= / autoname; run; data want; set sums; key = catx(' ',country,currency,ID,type,evaluation); keep key initial_sum current_sum; run; ``` **Way 3 - MEANS, given data that is grouped but unsorted, with BY NOTSORTED/OUTPUT, post process with data step** The `have` rows will be processed in *clumps* of the `BY` variables. A clump is a sequence of contiguous rows that have the same by group. ``` proc means data=have_unsorted noprint; by country currency ID type evaluation NOTSORTED; output out=sums sum(initial current)= / autoname; run; data want; set sums; key = catx(' ',country,currency,ID,type,evaluation); keep key initial_sum current_sum; run; ``` **Way 4 - DATA Step, DOW loop, BY NOTSORTED and key construction** The `have` rows will be processed in *clumps* of the `BY` variables. 
A clump is a sequence of contiguous rows that have the same by group. ``` data want_way4; do until (last.evaluation); set have; by country currency ID type evaluation NOTSORTED; initial_sum = SUM(initial_sum, initial); current_sum = SUM(current_sum, current); end; key = catx(' ',country,currency,ID,type,evaluation); keep key initial_sum current_sum; run; ``` **Way 5 - Data Step hash** data can be processed with out a presort or clumping. In other words, data can be totally disordered. ``` data _null_; length key $50 initial_sum current_sum 8; if _n_ = 1 then do; call missing (key, initial_sum, current_sum); declare hash sums(); sums.defineKey('key'); sums.defineData('key','initial_sum','current_sum'); sums.defineDone(); end; set have_unsorted end=end; key = catx(' ',country,currency,ID,type,evaluation); rc = sums.find(); initial_sum = SUM(initial_sum, initial); current_sum = SUM(current_sum, current); sums.replace(); if end then sums.output(dataset:'have_way5'); run; ``` Upvotes: 1 <issue_comment>username_3: Props to paige miller ``` proc summary data=testa nway; var net_balance; class ID fixed_or_floating performing_status initial country currency ; output out=sumtest sum=sum_initial; run; ``` Upvotes: -1
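The hash-object pattern in Way 5 is language-agnostic: accumulate group sums in a hash table keyed on the concatenated categorical variables, with no pre-sort required. A minimal Python sketch of the same idea (the rows below are a subset of the `have_unsorted` data; the function name is illustrative):

```python
def hash_sums(rows):
    """Group-sum initial/current in one pass, like the SAS hash object in Way 5."""
    sums = {}
    for r in rows:
        # same key construction as CATX(' ', country, currency, ID, type, evaluation)
        key = " ".join(str(r[c]) for c in
                       ("country", "currency", "id", "type", "evaluation"))
        init, cur = sums.get(key, (0, 0))
        sums[key] = (init + r["initial"], cur + r["current"])
    return sums

rows = [
    {"country": "UK", "currency": "GBP", "id": 1, "type": "Fixed",
     "evaluation": "Performing", "initial": 100, "current": 50},
    {"country": "UK", "currency": "GBP", "id": 1, "type": "Fixed",
     "evaluation": "Performing", "initial": 150, "current": 30},
    {"country": "UK", "currency": "GBP", "id": 1, "type": "Fixed",
     "evaluation": "Performing", "initial": 160, "current": 70},
    {"country": "UK", "currency": "GBP", "id": 1, "type": "Floating",
     "evaluation": "Performing", "initial": 150, "current": 30},
]
print(hash_sums(rows))
# {'UK GBP 1 Fixed Performing': (410, 150), 'UK GBP 1 Floating Performing': (150, 30)}
```

Like the SAS hash step, this tolerates totally disordered input, at the cost of holding one entry per distinct key in memory.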
date: 2018/03/14 | nb_tokens: 321 | text_size: 1,311
<issue_start>username_0: In my iOS project I need to improve upload performance, so I am uploading the data asynchronously on multiple threads. However, `[HTTPMaximumConnectionsPerHost][1]` defaults to 4 on iOS. Since I am using the shared session, I believe it creates only 4 connections, which makes my threads wait until a connection becomes available. **Can I set HTTPMaximumConnectionsPerHost to 10 or to some maximum value?** Is it OK to have 10 simultaneous connections to the same host?<issue_comment>username_1: In **URLSessionConfiguration** you can set **httpMaximumConnectionsPerHost**. Note that you should start from `URLSessionConfiguration.default` (or `.ephemeral`) rather than the bare initializer: ``` let urlSessionConfiguration = URLSessionConfiguration.default urlSessionConfiguration.httpMaximumConnectionsPerHost = 10 let session = URLSession(configuration: urlSessionConfiguration) ``` Upvotes: 0 <issue_comment>username_2: You may get request timeout errors if the system reduces the number of connections. Even if you set `HTTPMaximumConnectionsPerHost` to 10, a session may use a lower limit. From the [Apple documentation](https://developer.apple.com/documentation/foundation/nsurlsessionconfiguration/1407597-httpmaximumconnectionsperhost?language=objc): > > Additionally, depending on your connection to the Internet, a session > may use a lower limit than the one you specify. > > > Upvotes: 1
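The per-host cap behaves like a counting semaphore: work beyond the limit waits rather than failing, which is exactly why the extra upload threads appear stalled. A framework-neutral Python sketch of that behaviour (the names, the limit, and the simulated transfer are illustrative, not URLSession API):

```python
import threading
import time

MAX_CONNECTIONS_PER_HOST = 4          # URLSession's documented default
sem = threading.BoundedSemaphore(MAX_CONNECTIONS_PER_HOST)
lock = threading.Lock()
active = peak = 0

def upload(_chunk):
    """Simulated upload: waits for a free 'connection' slot before running."""
    global active, peak
    with sem:                          # blocks once 4 uploads are in flight
        with lock:
            active += 1
            peak = max(peak, active)
        time.sleep(0.01)               # stand-in for the network transfer
        with lock:
            active -= 1

threads = [threading.Thread(target=upload, args=(i,)) for i in range(10)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(peak)                            # never exceeds MAX_CONNECTIONS_PER_HOST
```

Raising the constant only raises the ceiling; as the quoted Apple documentation notes, the session may still choose a lower effective limit.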
date: 2018/03/14 | nb_tokens: 494 | text_size: 2,006
<issue_start>username_0: I have an disk storage that returns an `Object?` from disk (Could be already saved or not) and an BehaviourSubject (This data comes from other class call, check code below): Code is: ``` private val subject: Subject> = BehaviorSubject.create() fun getElements(): Observable> = Observable.concat(Observable.just(storage.getElement()), subject) .filter({ it.isPresent }) .take(1) .flatMapSingle { Observable.just(it.get()) .flatMapIterable { it.categories } .toList() } fun updateSubject(response: Response) { storage.save(response.element) //Save element in storage subject.onNext(response.element.toOptional()) } ``` My problem is, in other class I do ``` getElements().subscribe(onElements(), onError()); ``` First time, when storage has null it does nothing, even I've got a breakpoint in `subject.onNext(response.element.toOptional())`, hoping that `onNext` will trigger a stream for `getElements`, but nothing happens. Second time, when I've already saved in storage the received element (So, `storage.getElement()` returns something) it works fine. My functional description is: Get element from both cache and subject, take first that arrives, and return it (First time it will be who the comes subject one), next time, i'm hoping that first one will be the storage one.<issue_comment>username_1: Your null optional elements are being filtered out by your `.filter { it.isPresent }` call so nothing gets emitted downstream. Upvotes: -1 <issue_comment>username_2: I am assuming that storage is some sort of persistence object so it so `storage.getElement()` might return a valid object at the time you create the subject? If that is the case, then I think you should check to see if you have a stored object before you create the subject, if so use `BehaviorSubject.createDefalut(storedObject)` if it does exist or `BehaviorSubject.create()` if not. Then inside your `getElements()` function I think you can just use `subject.filter()` .... Upvotes: 0
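Stripped of Rx operators, `concat(cache, subject).filter(isPresent).take(1)` means "return the first present value, checking the cache before the live source". A plain-Python sketch of that selection order (names are illustrative; this eager version sidesteps the subscribe-timing subtlety the question runs into):

```python
def first_present(*suppliers):
    """Return the first non-None value, in supplier order (concat + filter + take(1))."""
    for supply in suppliers:
        value = supply()
        if value is not None:
            return value
    return None

storage = {"element": None}                   # storage.getElement() is empty at first
subject = lambda: "element-from-subject"      # stands in for the BehaviorSubject value

print(first_present(lambda: storage["element"], subject))   # element-from-subject
storage["element"] = "element-from-storage"                 # after updateSubject() saved it
print(first_present(lambda: storage["element"], subject))   # element-from-storage
```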
date: 2018/03/14 | nb_tokens: 697 | text_size: 2,119
<issue_start>username_0: I have an ArrayList of ArrayLists and inside those ArrayLists I store Strings, and I would like to sort those string according to their priority that I store on a HashMap. Is there a efficient way to accomplish this task simply iterating over them?<issue_comment>username_1: If I understand you problem correctly, this should do the trick. If you consider this efficient is depending on the performance bounds you have to meet. ``` HashMap priorities = new HashMap(); priorities.put("A", 1); priorities.put("B", 2); priorities.put("C", 3); List> list = new ArrayList>(); list.add(Arrays.asList("A", "C", "B")); list.add(Arrays.asList("B", "A", "B")); list.add(Arrays.asList("A", "B", "C")); list.forEach(x -> x.sort((string1, string2) -> { return priorities.get(string1) - priorities.get(string2); })); System.out.println(list); ``` Upvotes: 0 <issue_comment>username_2: Given that you have a `HashMap` with priorities you could do this: ``` List> listList = new ArrayList<>(); List listOne = Arrays.asList("A", "B", "C", "D"); List listTwo = Arrays.asList("D", "D", "D", "B"); listList.add(listOne); listList.add(listTwo); Map map = new HashMap<>(); map.put("A", 1); map.put("B", 2); map.put("C", 1); map.put("D", 4); listList.forEach(list -> list.sort(Comparator.comparingInt(map::get))); ``` Note that for this to work, you have to have all the strings from list in your `Map` with priorities, otherwise you will get a `NullPointerException`. Upvotes: 2 [selected_answer]<issue_comment>username_3: ``` List arraylist = new ArrayList<>(); arraylist.add("a"); arraylist.add("a"); arraylist.add("a"); arraylist.add("b"); arraylist.add("c"); arraylist.add("c"); arraylist.add("c"); arraylist.add("c"); arraylist.add("d"); arraylist.add("d"); Map hashMap = new HashMap<>(); hashMap.put("a",3); hashMap.put("b",1); hashMap.put("c",4); hashMap.put("d",2); Collections.sort(arraylist, (o1, o2) -> { int i = hashMap.get(o1) < hashMap.get(o2) ? 
1 : -1; return i; }); ``` this gives the following output: [c, c, c, c, a, a, a, d, d, b] Upvotes: 0
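For comparison, the same priority-map sort is a one-line `sort` per inner list in Python, with the map lookup as the key (the priorities are illustrative; as the accepted Java answer warns, every string must be present in the map):

```python
priorities = {"A": 1, "B": 2, "C": 1, "D": 4}
list_of_lists = [["A", "B", "C", "D"], ["D", "D", "D", "B"]]

for inner in list_of_lists:
    # __getitem__ raises KeyError for a missing string,
    # mirroring the NullPointerException caveat in the Java answer
    inner.sort(key=priorities.__getitem__)

print(list_of_lists)  # [['A', 'C', 'B', 'D'], ['B', 'D', 'D', 'D']]
```

Python's sort is stable, so strings sharing a priority (here "A" and "C") keep their original relative order.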
date: 2018/03/14 | nb_tokens: 1,337 | text_size: 2,940
<issue_start>username_0: I have a dataframe as show below: ``` df = index value1 value2 value3 001 0.3 1.3 4.5 002 1.1 2.5 3.7 003 0.1 0.9 7.8 .... 365 3.4 1.2 0.9 ``` the index means the days in a year( so sometimes the last number of index is 366), I want to group it with random days (for example 10 days or 30 days),I thinks the code would be as below, ``` df_new = df.groupby( "method" ).mean() ``` In some question I saw the they used type of datetime to groupby, however in my dataframe the index are just numbers, is there any better way to group it ? thanks in adavance !<issue_comment>username_1: I think need floor index values and aggregate mean: ``` df_new = df.groupby( df.index // 10).mean() ``` Another general solution if not default unique numeric index: ``` df_new = df.groupby( np.arange(len(df.index)) // 10).mean() ``` **Sample**: ``` c = 'val1 val2 val3'.split() df = pd.DataFrame(np.random.randint(10, size=(20,3)), columns=c) print (df) val1 val2 val3 0 5 9 4 1 5 7 1 2 8 3 5 3 2 4 2 4 2 8 4 5 8 5 6 6 0 9 8 7 2 3 6 8 7 0 0 9 3 3 5 10 6 6 3 11 8 9 6 12 5 1 6 13 1 5 9 14 1 4 5 15 3 2 2 16 4 5 4 17 3 5 1 18 9 4 5 19 9 8 7 df_new = df.groupby( df.index // 10).mean() print (df_new) val1 val2 val3 0 4.2 5.1 4.1 1 4.9 4.9 4.8 ``` Upvotes: 3 [selected_answer]<issue_comment>username_2: Just create a new index via floored quotient operator `//` and group by this index. Here is an example with 155 rows. You can drop the original index for the result. 
``` df = pd.DataFrame({'index': list(range(1, 156)), 'val1': np.random.rand(155), 'val2': np.random.rand(155), 'val3': np.random.rand(155)}) df['new_index'] = df['index'] // 10 res = df.groupby('new_index', as_index=False).mean().drop('index', 1) # new_index val1 val2 val3 # 0 0 0.315851 0.462080 0.491779 # 1 1 0.377690 0.566162 0.588248 # 2 2 0.314571 0.471430 0.626292 # 3 3 0.725548 0.572577 0.530589 # 4 4 0.569597 0.466964 0.443815 # 5 5 0.470747 0.394189 0.321107 # 6 6 0.362968 0.362278 0.415093 # 7 7 0.403529 0.626155 0.322582 # 8 8 0.555819 0.415741 0.525251 # 9 9 0.454660 0.336846 0.524158 # 10 10 0.435777 0.495191 0.380897 # 11 11 0.345916 0.550897 0.487255 # 12 12 0.676762 0.464794 0.612018 # 13 13 0.524610 0.450550 0.472724 # 14 14 0.466074 0.542736 0.680481 # 15 15 0.456921 0.565800 0.442543 ``` Upvotes: 1
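A small deterministic frame makes the floor-division binning concrete, and surfaces one off-by-one worth knowing with the question's 1-based day index: `index // 10` puts day 10 into the second bin, while `(index - 1) // 10` keeps days 1 through 10 together (the values here are illustrative):

```python
import numpy as np
import pandas as pd

# 20 "days", indexed 1..20 like the day-of-year frame in the question
df = pd.DataFrame({"value1": np.arange(20, dtype=float)}, index=range(1, 21))

# (index - 1) // 10 -> bin 0 holds days 1-10, bin 1 holds days 11-20
binned = df.groupby((df.index - 1) // 10).mean()
print(binned)
```

Days 1 through 10 carry values 0 through 9 (mean 4.5), days 11 through 20 carry values 10 through 19 (mean 14.5), so the two bins come out as 4.5 and 14.5.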
date: 2018/03/14 | nb_tokens: 678 | text_size: 2,671
<issue_start>username_0: I have two imageviews with in my card view, now how can I setOnClickListeners so that I can know which button in which cardview is selected. im1 and im2 are my clickable ImageViews This is my code : ``` @Override public void onBindViewHolder(ViewHolder holder, int position) { ConnIfInfo dataModel = ifList.get(position); Log.d("Name", "if list name: "+dataModel.getName()); holder.name.setText(dataModel.getName()); holder.appName.setText(dataModel.getApp().toString()); if(String.valueOf(dataModel.getPreferredModeMode().toString()) .equals( String.valueOf(ProjectionTypes.OperationMode.AOA_AA))) { holder.im1.setImageResource(R.drawable.auto1); holder.im2.setImageResource(R.drawable.carlife); holder.im1.setOnClickListener(new View.OnClickListener() { @Override public void onClick(View v) { try { iPhoneProjectionManager.startApp(1,"Nexus 5"); } catch (RemoteException e) { e.printStackTrace(); } } }); holder.im2.setOnClickListener(new View.OnClickListener() { @Override public void onClick(View v) { try { iPhoneProjectionManager.startApp(2,"Nexus 6"); } catch (RemoteException e) { e.printStackTrace(); } } }); } } @Override public int getItemCount() { return ifList.size(); } ```<issue_comment>username_1: You can also do this by using Interface also For more details please check out [This link](https://stackoverflow.com/questions/44568927/how-to-take-textview-data-from-recycler-view-to-another-activity/44569191#44569191) contains the brief about the how to handle click event by using interface. Upvotes: 1 <issue_comment>username_2: You can set the tag to the view. The tag can be any object. ``` holder.im2.setTag(position); ``` you can set many tags also, with the key, and the key should be unique resId. you can dump the ids in ids.xml. 
Refer [this](https://developer.android.com/samples/BatchStepSensor/res/values/ids.html) ``` holder.im2.setTag(, position); ``` and you can get the tag as ``` holder.im2.setOnClickListener(new View.OnClickListener() { @Override public void onClick(View v) { v.getTag() //or if you set with key then v.getTag() // please check for null value } }); ``` instead of setting the onClickListener inside the onBindViewHolder you can do that in ViewHolder class itself Upvotes: 4 [selected_answer]
date: 2018/03/14 | nb_tokens: 314 | text_size: 1,288
<issue_start>username_0: Can I call a method that returns a string inside an annotation? If so, please guide me on how to achieve this. I tried the following, but it doesn't work for me. ``` @Description(value = Resource.getWord("key")) ```<issue_comment>username_1: An annotation only takes compile-time constants (as they might be used during compilation), therefore you cannot perform any calculation within the definition, since its result is unknown at compile time. Allowed constant types are (taken from [java-annotation-members](https://stackoverflow.com/questions/1458535/which-types-can-be-used-for-java-annotation-members)): * Primitive * String * Class * Enum * Another Annotation * An array of any of the above **Possible solution for your situation:** As I understand it, you would like to localize the `@Description` content. As this is only meant to be exposed to other developers anyway, you are safe to simply use English, in my opinion. Localization is for the end user, not the developer. Upvotes: 3 <issue_comment>username_2: I can imagine an aspect being wired up to process methods annotated like this, where the "key" is in the annotation, and the aspect processing then uses the key at run time... but I'm not sure this is what you're looking for. Upvotes: 0
date: 2018/03/14 | nb_tokens: 4,184 | text_size: 14,469
<issue_start>username_0: What am I doing: * User clicks a button, a `FileUpload` component (dialog) fires up and he can browse for and load a file from his PC. * When he clicks ok the file gets saved to the disk, in a specific location. * Prior to saving I'm renaming (or rather, saving with a specific name) his file using some string that contain data I previously pulled from some DB fields. Hence, regardless of the name the file has when the user loads it, it gets saved to the disk with his `Firstname` and `LastName`, which I get from some string variables. `UniMainModule.foldername` = contains the path to the folder where the file gets saved. `UniMainModule.FirstName` = contains the user's FirstName `UniMainModule.LastName` = contains the user's LastName Thus, the file gets saved as `FirstName_LastName.pdf` on the disk at location provided by `foldername` string. This is the code I'm using: ``` procedure TsomeForm.UniFileUpload1Completed(Sender: TObject; AStream: TFileStream); var DestName : string; DestFolder : string; begin DestFolder:=UniServerModule.StartPath+'files\'+UniMainModule.foldername+'\'; DestName:=DestFolder+UniMainModule.FirstName+'_'+UniMainModule.LastName+'.pdf'; CopyFile(PChar(AStream.FileName), PChar(DestName), False); ModalResult:= mrOk; end; ``` As I understand it, after reading a bit about `CopyFile` on `msdn` passing `False` means that it should and will overwrite the existing file. If the file isn't already present with that name in that location, it's fine, it gets saved. But if the user decides to use the fileupload again and upload a new file, the new file will overwrite the previous one. Since they're being saved with the same name. 
How then can you ensure that if the file already exists (a file with that exact name is present in the location) it doesn't get overwritten but, I don't know, gets assigned a (1) in the name or something, keeping both files?<issue_comment>username_1: You have a file name, so use FileExists to check if file exists. If it does append a (1) to the file name and try again. repeat for increasing n until you get a file name that does not exist. so, a bit like this: ``` procedure TsomeForm.UniFileUpload1Completed(Sender: TObject; AStream: TFileStream); var DestName : string; DestFolder : string; n : integer; additional : string; begin DestFolder:=UniServerModule.StartPath+'files\'+UniMainModule.foldername+'\'; DestName:=DestFolder+UniMainModule.FirstName+'_'+UniMainModule.LastName; n := 0; additional :='.pdf'; while FileExists( DestName + additional ) do begin inc(n); additional := '(' + intToStr(n) + ')'+'.pdf'; end; CopyFile(PChar(AStream.FileName), PChar(DestName + additional), False); ModalResult:= mrOk; end; ``` Upvotes: 3 <issue_comment>username_2: Here is my take on a solution ``` procedure TsomeForm.UniFileUpload1Completed(Sender: TObject; AStream: TFileStream); var DestName, NewName : string; DestFolder : string; Cnt: integer; begin DestFolder:=UniServerModule.StartPath+'files\'+UniMainModule.foldername+'\'; DestName:=DestFolder+UniMainModule.FirstName+'_'+UniMainModule.LastName+'.pdf'; if FileExists(DestName) then begin Cnt:=0; repeat Inc(Cnt); NewName:=Format(DestFolder+UniMainModule.FirstName+'_'+UniMainModule.LastName+'(%d).pdf',[Cnt]); until not FileExists(NewName); DestName:=NewName; end; CopyFile(PChar(AStream.FileName), PChar(DestName), False); ModalResult:= mrOk; end; ``` Upvotes: 2 <issue_comment>username_3: Call `CopyFile()` in a loop, setting its `bFailIfExists` parameter to `TRUE` so you can retry with a new filename if `CopyFile()` fails with an `ERROR_FILE_EXISTS` error code. 
For example: ``` procedure TsomeForm.UniFileUpload1Completed(Sender: TObject; AStream: TFileStream); var DestName : string; DestFolder : string; n : integer; begin DestFolder := UniServerModule.StartPath + 'files\' + UniMainModule.foldername + '\'; DestName := UniMainModule.FirstName + '_' + UniMainModule.LastName + '.pdf'; n := 0; while not CopyFile(PChar(AStream.FileName), PChar(DestFolder + DestName), True) do begin if GetLastError() <> ERROR_FILE_EXISTS then begin // error handling... Break; end; Inc(n); DestName := UniMainModule.FirstName + '_' + UniMainModule.LastName + ' (' + IntToStr(n) + ').pdf'; end; ModalResult := mrOk; end; ``` However, rather than handling this manually, you should let the OS do the work for you. Especially since the OS has its own way to renaming copied files, and that naming scheme can change (and has) from one OS version to another. Instead of using `CopyFile()`, use [`SHFileOperation()`](https://msdn.microsoft.com/en-us/library/windows/desktop/bb762164.aspx) instead, which has a `FOF_RENAMEONCOLLISION` flag: > > Give the file being operated on a new name in a move, copy, or rename operation if a file with the target name already exists at the destination. > > > For example: ``` uses ..., Winapi.ShellAPI; procedure TsomeForm.UniFileUpload1Completed(Sender: TObject; AStream: TFileStream); var DestName : string; DestFolder : string; fo : TSHFileOpStruct; begin DestFolder := UniServerModule.StartPath + 'files\' + UniMainModule.foldername + '\'; DestName := DestFolder + UniMainModule.FirstName + '_' + UniMainModule.LastName + '.pdf'; ZeroMemory(@fo, SizeOf(fo)); fo.Wnd := Handle; fo.wFunc := FO_COPY; fo.pFrom := PChar(AStream.FileName+#0); fo.pTo := PChar(DestName+#0); fo.fFlags := FOF_SILENT or FOF_NOCONFIRMATION or FOF_NOERRORUI or FOF_NOCONFIRMMKDIR or FOF_RENAMEONCOLLISION; if SHFileOperation(fo) <> 0 then begin // error handling... end else if fo.fAnyOperationsAborted then begin // abort handling ... 
end; ModalResult := mrOk; end; ``` If you need to know what the OS picked for the renamed filename, there is also a `FOF_WANTMAPPINGHANDLE` flag: > > If FOF\_RENAMEONCOLLISION is specified and any files were renamed, assign a name mapping object that contains their old and new names to the `hNameMappings` member. This object must be freed using [`SHFreeNameMappings`](https://msdn.microsoft.com/en-us/library/windows/desktop/bb762171.aspx) when it is no longer needed. > > > For example: ``` uses ..., Winapi.ShellAPI; type PHandleToMappings = ^THandleToMappings; THandleToMappings = record uNumberOfMappings: UINT; // Number of mappings in the array. lpSHNameMappings: array[0..0] of PSHNAMEMAPPINGW; // array of pointers to mappings. end; procedure TsomeForm.UniFileUpload1Completed(Sender: TObject; AStream: TFileStream); var DestName : string; DestFolder : string; fo : TSHFileOpStruct; pMappings : PHandleToMappings; pMapping : PSHNAMEMAPPINGW; begin DestFolder := UniServerModule.StartPath + 'files\' + UniMainModule.foldername + '\'; DestName := DestFolder + UniMainModule.FirstName + '_' + UniMainModule.LastName + '.pdf'; ZeroMemory(@fo, SizeOf(fo)); fo.Wnd := Handle; fo.wFunc := FO_COPY; fo.pFrom := PChar(AStream.FileName+#0); fo.pTo := PChar(DestName+#0); fo.fFlags := FOF_SILENT or FOF_NOCONFIRMATION or FOF_NOERRORUI or FOF_NOCONFIRMMKDIR or FOF_RENAMEONCOLLISION or FOF_WANTMAPPINGHANDLE; if SHFileOperation(fo) <> 0 then begin // error handling... end else begin if fo.fAnyOperationsAborted then begin // abort handling... end; if fo.hNameMappings <> nil then begin try pMappings := PHandleToMappings(fo.hNameMappings); pMapping := pMappings^.lpSHNameMappings[0]; SetString(DestName, pMapping^.pszNewPath, pMapping^.cchNewPath); finally SHFreeNameMappings(THandle(fo.hNameMappings)); end; // use DestName as needed... 
end; end; ModalResult := mrOk; end; ``` On Vista and later, you can alternatively use [`IFileOperation.CopyItem()`](https://msdn.microsoft.com/en-us/library/windows/desktop/bb775761.aspx) instead, which also supports renaming an item on collision. An [`IFileOperationProgressSink`](https://msdn.microsoft.com/en-us/library/windows/desktop/bb775722.aspx) callback can be used to discover the new filename if a rename collision occurs. For example: ``` uses ..., Winapi.ActiveX, Winapi.ShlObj, System.Win.Comobj; type TMyCopyProgressSink = class(TInterfacedObject, IFileOperationProgressSink) public CopiedName: string; function StartOperations: HResult; stdcall; function FinishOperations(hrResult: HResult): HResult; stdcall; function PreRenameItem(dwFlags: DWORD; const psiItem: IShellItem; pszNewName: LPCWSTR): HResult; stdcall; function PostRenameItem(dwFlags: DWORD; const psiItem: IShellItem; pszNewName: LPCWSTR; hrRename: HResult; const psiNewlyCreated: IShellItem): HResult; stdcall; function PreMoveItem(dwFlags: DWORD; const psiItem: IShellItem; const psiDestinationFolder: IShellItem; pszNewName: LPCWSTR): HResult; stdcall; function PostMoveItem(dwFlags: DWORD; const psiItem: IShellItem; const psiDestinationFolder: IShellItem; pszNewName: LPCWSTR; hrMove: HResult; const psiNewlyCreated: IShellItem): HResult; stdcall; function PreCopyItem(dwFlags: DWORD; const psiItem: IShellItem; const psiDestinationFolder: IShellItem; pszNewName: LPCWSTR): HResult; stdcall; function PostCopyItem(dwFlags: DWORD; const psiItem: IShellItem; const psiDestinationFolder: IShellItem; pszNewName: LPCWSTR; hrCopy: HResult; const psiNewlyCreated: IShellItem): HResult; stdcall; function PreDeleteItem(dwFlags: DWORD; const psiItem: IShellItem): HResult; stdcall; function PostDeleteItem(dwFlags: DWORD; const psiItem: IShellItem; hrDelete: HResult; const psiNewlyCreated: IShellItem): HResult; stdcall; function PreNewItem(dwFlags: DWORD; const psiDestinationFolder: IShellItem; pszNewName: LPCWSTR): 
HResult; stdcall; function PostNewItem(dwFlags: DWORD; const psiDestinationFolder: IShellItem; pszNewName: LPCWSTR; pszTemplateName: LPCWSTR; dwFileAttributes: DWORD; hrNew: HResult; const psiNewItem: IShellItem): HResult; stdcall; function UpdateProgress(iWorkTotal: UINT; iWorkSoFar: UINT): HResult; stdcall; function ResetTimer: HResult; stdcall; function PauseTimer: HResult; stdcall; function ResumeTimer: HResult; stdcall; end; function TMyCopyProgressSink.StartOperations: HResult; stdcall; begin Result := S_OK; end; function TMyCopyProgressSink.FinishOperations(hrResult: HResult): HResult; stdcall; begin Result := S_OK; end; function TMyCopyProgressSink.PreRenameItem(dwFlags: DWORD; const psiItem: IShellItem; pszNewName: LPCWSTR): HResult; stdcall; begin Result := S_OK; end; function TMyCopyProgressSink.PostRenameItem(dwFlags: DWORD; const psiItem: IShellItem; pszNewName: LPCWSTR; hrRename: HResult; const psiNewlyCreated: IShellItem): HResult; stdcall; begin Result := S_OK; end; function TMyCopyProgressSink.PreMoveItem(dwFlags: DWORD; const psiItem: IShellItem; const psiDestinationFolder: IShellItem; pszNewName: LPCWSTR): HResult; stdcall; begin Result := S_OK; end; function TMyCopyProgressSink.PostMoveItem(dwFlags: DWORD; const psiItem: IShellItem; const psiDestinationFolder: IShellItem; pszNewName: LPCWSTR; hrMove: HResult; const psiNewlyCreated: IShellItem): HResult; stdcall; begin Result := S_OK; end; function TMyCopyProgressSink.PreCopyItem(dwFlags: DWORD; const psiItem: IShellItem; const psiDestinationFolder: IShellItem; pszNewName: LPCWSTR): HResult; stdcall; begin Result := S_OK; end; function TMyCopyProgressSink.PostCopyItem(dwFlags: DWORD; const psiItem: IShellItem; const psiDestinationFolder: IShellItem; pszNewName: LPCWSTR; hrCopy: HResult; const psiNewlyCreated: IShellItem): HResult; stdcall; begin CopiedName := pszNewName; Result := S_OK; end; function TMyCopyProgressSink.PreDeleteItem(dwFlags: DWORD; const psiItem: IShellItem): HResult; stdcall; 
begin Result := S_OK; end; function TMyCopyProgressSink.PostDeleteItem(dwFlags: DWORD; const psiItem: IShellItem; hrDelete: HResult; const psiNewlyCreated: IShellItem): HResult; stdcall; begin Result := S_OK; end; function TMyCopyProgressSink.PreNewItem(dwFlags: DWORD; const psiDestinationFolder: IShellItem; pszNewName: LPCWSTR): HResult; stdcall; begin Result := S_OK; end; function TMyCopyProgressSink.PostNewItem(dwFlags: DWORD; const psiDestinationFolder: IShellItem; pszNewName: LPCWSTR; pszTemplateName: LPCWSTR; dwFileAttributes: DWORD; hrNew: HResult; const psiNewItem: IShellItem): HResult; stdcall; begin Result := S_OK; end; function TMyCopyProgressSink.UpdateProgress(iWorkTotal: UINT; iWorkSoFar: UINT): HResult; stdcall; begin Result := S_OK; end; function TMyCopyProgressSink.ResetTimer: HResult; stdcall; begin Result := S_OK; end; function TMyCopyProgressSink.PauseTimer: HResult; stdcall; begin Result := S_OK; end; function TMyCopyProgressSink.ResumeTimer: HResult; stdcall; begin Result := S_OK; end; procedure TsomeForm.UniFileUpload1Completed(Sender: TObject; AStream: TFileStream); var DestName : string; DestFolder : string; pfo : IFileOperation; psiFrom : IShellItem; psiTo : IShellItem; Sink : IFileOperationProgressSink; bAborted : BOOL; begin DestFolder := UniServerModule.StartPath + 'files\' + UniMainModule.foldername + '\'; DestName := UniMainModule.FirstName + '_' + UniMainModule.LastName + '.pdf'; try OleCheck(SHCreateItemFromParsingName(PChar(AStream.FileName), nil, IShellItem, psiFrom)); OleCheck(SHCreateItemFromParsingName(PChar(DestFolder), nil, IShellItem, psiTo)); OleCheck(CoCreateInstance(CLSID_FileOperation, nil, CLSCTX_ALL, IFileOperation, pfo)); OleCheck(pfo.SetOperationFlags(FOF_SILENT or FOF_NOCONFIRMATION or FOF_NOCONFIRMMKDIR or FOF_NOERRORUI or FOF_RENAMEONCOLLISION or FOFX_PRESERVEFILEEXTENSIONS)); Sink := TMyCopyProgressSink.Create; OleCheck(pfo.CopyItem(psiFrom, psiTo, PChar(DestName), Sink)); OleCheck(pfo.PerformOperations()); 
pfo.GetAnyOperationsAborted(bAborted); if bAborted then begin // abort handling... end; DestName := TMyCopyProgressSink(Sink).CopiedName; // use DestName as needed... except // error handling... end; end; ``` Upvotes: 5 [selected_answer]
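The probe-and-rename loop from the first two answers generalises directly. A Python sketch of the same collision-avoidance scheme, probing `name.pdf`, then `name(1).pdf`, `name(2).pdf`, and so on (pure string logic here, with an in-memory set standing in for the directory listing):

```python
def unique_name(base, ext, existing):
    """Return the first of base+ext, base(1)+ext, base(2)+ext, ... not in `existing`."""
    candidate = base + ext
    n = 0
    while candidate in existing:
        n += 1
        candidate = f"{base}({n}){ext}"
    return candidate

taken = {"First_Last.pdf", "First_Last(1).pdf"}
print(unique_name("First_Last", ".pdf", taken))   # First_Last(2).pdf
print(unique_name("Other_User", ".pdf", taken))   # Other_User.pdf
```

As the accepted answer points out, any check-then-copy loop is inherently racy between the existence check and the copy; delegating the rename to the OS (`FOF_RENAMEONCOLLISION` / `IFileOperation`) closes that window.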
date: 2018/03/14 | nb_tokens: 4,254 | text_size: 14,462
<issue_start>username_0: I am trying to download records from twitter using `rtweet`. One issue with this is the twitter server needs to wait 15minutes every 18000 records. So, after record number 18000, I receive a data frame with all the records and a nice warning telling me to wait for a bit. `search_tweets` has an function argument to download more than 18000 records called `retryonratelimit`. However, this isnt working so I am exploring other options. I have produced a function, incorporating `tryCatch` to address this. However, when the warning at 18000 records pops up, tryCatch is saving the warning rather than the data frame which should be spit out before the warning. Something it would not do if 17999 records were downloaded ``` library(rtweet) library(RDCOMClient) library(profvis) TwitScrape = function(SearchTerm){ ReturnDF = tryCatch({ TempList=NULL Temp = search_tweets(SearchTerm,n=18000) TempList = list(as.data.frame(Temp), SearchTerm) return(TempList) }, warning = function(TempList){ Comb=NULL MAXID = min(TempList[[1]]$status_id) message("Delay for 15 minutes to accommodate server download limits") pause(901) TempWarn = search_tweets(TempList[[2]],n=18000, max_id=MAXID) TempWarn = as.data.frame(TempWarn) Comb = rbind(TempList[[1]], TempWarn) CombList = list(Comb, TempList[[2]]) return(CombList) } ) } Searches = c("#MUFC","#LFC", "#MCFC") TestExpandList=NULL TestExpand=NULL TestExpand2=NULL for (i in seq_along(Searches)){ TestExpandList = TwitScrape(SearchTerm = Searches[i]) TestExpand = TestExpandList[[1]] TestExpand$Cat = Searches[i] TestExpand$DownloadDate = Sys.Date() TestExpand2 = rbind(TestExpand2, TestExpand) } ``` I hope this makes sense. If I can offer any more information please let me know. In summary, why is `tryCatch` saving my warning rather than the data frame I want?
end; end; ModalResult := mrOk; end; ``` On Vista and later, you can alternatively use [`IFileOperation.CopyItem()`](https://msdn.microsoft.com/en-us/library/windows/desktop/bb775761.aspx) instead, which also supports renaming an item on collision. An [`IFileOperationProgressSink`](https://msdn.microsoft.com/en-us/library/windows/desktop/bb775722.aspx) callback can be used to discover the new filename if a rename collision occurs. For example: ``` uses ..., Winapi.ActiveX, Winapi.ShlObj, System.Win.Comobj; type TMyCopyProgressSink = class(TInterfacedObject, IFileOperationProgressSink) public CopiedName: string; function StartOperations: HResult; stdcall; function FinishOperations(hrResult: HResult): HResult; stdcall; function PreRenameItem(dwFlags: DWORD; const psiItem: IShellItem; pszNewName: LPCWSTR): HResult; stdcall; function PostRenameItem(dwFlags: DWORD; const psiItem: IShellItem; pszNewName: LPCWSTR; hrRename: HResult; const psiNewlyCreated: IShellItem): HResult; stdcall; function PreMoveItem(dwFlags: DWORD; const psiItem: IShellItem; const psiDestinationFolder: IShellItem; pszNewName: LPCWSTR): HResult; stdcall; function PostMoveItem(dwFlags: DWORD; const psiItem: IShellItem; const psiDestinationFolder: IShellItem; pszNewName: LPCWSTR; hrMove: HResult; const psiNewlyCreated: IShellItem): HResult; stdcall; function PreCopyItem(dwFlags: DWORD; const psiItem: IShellItem; const psiDestinationFolder: IShellItem; pszNewName: LPCWSTR): HResult; stdcall; function PostCopyItem(dwFlags: DWORD; const psiItem: IShellItem; const psiDestinationFolder: IShellItem; pszNewName: LPCWSTR; hrCopy: HResult; const psiNewlyCreated: IShellItem): HResult; stdcall; function PreDeleteItem(dwFlags: DWORD; const psiItem: IShellItem): HResult; stdcall; function PostDeleteItem(dwFlags: DWORD; const psiItem: IShellItem; hrDelete: HResult; const psiNewlyCreated: IShellItem): HResult; stdcall; function PreNewItem(dwFlags: DWORD; const psiDestinationFolder: IShellItem; pszNewName: LPCWSTR): 
HResult; stdcall; function PostNewItem(dwFlags: DWORD; const psiDestinationFolder: IShellItem; pszNewName: LPCWSTR; pszTemplateName: LPCWSTR; dwFileAttributes: DWORD; hrNew: HResult; const psiNewItem: IShellItem): HResult; stdcall; function UpdateProgress(iWorkTotal: UINT; iWorkSoFar: UINT): HResult; stdcall; function ResetTimer: HResult; stdcall; function PauseTimer: HResult; stdcall; function ResumeTimer: HResult; stdcall; end; function TMyCopyProgressSink.StartOperations: HResult; stdcall; begin Result := S_OK; end; function TMyCopyProgressSink.FinishOperations(hrResult: HResult): HResult; stdcall; begin Result := S_OK; end; function TMyCopyProgressSink.PreRenameItem(dwFlags: DWORD; const psiItem: IShellItem; pszNewName: LPCWSTR): HResult; stdcall; begin Result := S_OK; end; function TMyCopyProgressSink.PostRenameItem(dwFlags: DWORD; const psiItem: IShellItem; pszNewName: LPCWSTR; hrRename: HResult; const psiNewlyCreated: IShellItem): HResult; stdcall; begin Result := S_OK; end; function TMyCopyProgressSink.PreMoveItem(dwFlags: DWORD; const psiItem: IShellItem; const psiDestinationFolder: IShellItem; pszNewName: LPCWSTR): HResult; stdcall; begin Result := S_OK; end; function TMyCopyProgressSink.PostMoveItem(dwFlags: DWORD; const psiItem: IShellItem; const psiDestinationFolder: IShellItem; pszNewName: LPCWSTR; hrMove: HResult; const psiNewlyCreated: IShellItem): HResult; stdcall; begin Result := S_OK; end; function TMyCopyProgressSink.PreCopyItem(dwFlags: DWORD; const psiItem: IShellItem; const psiDestinationFolder: IShellItem; pszNewName: LPCWSTR): HResult; stdcall; begin Result := S_OK; end; function TMyCopyProgressSink.PostCopyItem(dwFlags: DWORD; const psiItem: IShellItem; const psiDestinationFolder: IShellItem; pszNewName: LPCWSTR; hrCopy: HResult; const psiNewlyCreated: IShellItem): HResult; stdcall; begin CopiedName := pszNewName; Result := S_OK; end; function TMyCopyProgressSink.PreDeleteItem(dwFlags: DWORD; const psiItem: IShellItem): HResult; stdcall; 
begin Result := S_OK; end; function TMyCopyProgressSink.PostDeleteItem(dwFlags: DWORD; const psiItem: IShellItem; hrDelete: HResult; const psiNewlyCreated: IShellItem): HResult; stdcall; begin Result := S_OK; end; function TMyCopyProgressSink.PreNewItem(dwFlags: DWORD; const psiDestinationFolder: IShellItem; pszNewName: LPCWSTR): HResult; stdcall; begin Result := S_OK; end; function TMyCopyProgressSink.PostNewItem(dwFlags: DWORD; const psiDestinationFolder: IShellItem; pszNewName: LPCWSTR; pszTemplateName: LPCWSTR; dwFileAttributes: DWORD; hrNew: HResult; const psiNewItem: IShellItem): HResult; stdcall; begin Result := S_OK; end; function TMyCopyProgressSink.UpdateProgress(iWorkTotal: UINT; iWorkSoFar: UINT): HResult; stdcall; begin Result := S_OK; end; function TMyCopyProgressSink.ResetTimer: HResult; stdcall; begin Result := S_OK; end; function TMyCopyProgressSink.PauseTimer: HResult; stdcall; begin Result := S_OK; end; function TMyCopyProgressSink.ResumeTimer: HResult; stdcall; begin Result := S_OK; end; procedure TsomeForm.UniFileUpload1Completed(Sender: TObject; AStream: TFileStream); var DestName : string; DestFolder : string; pfo : IFileOperation; psiFrom : IShellItem; psiTo : IShellItem; Sink : IFileOperationProgressSink; bAborted : BOOL; begin DestFolder := UniServerModule.StartPath + 'files\' + UniMainModule.foldername + '\'; DestName := UniMainModule.FirstName + '_' + UniMainModule.LastName + '.pdf'; try OleCheck(SHCreateItemFromParsingName(PChar(AStream.FileName), nil, IShellItem, psiFrom)); OleCheck(SHCreateItemFromParsingName(PChar(DestFolder), nil, IShellItem, psiTo)); OleCheck(CoCreateInstance(CLSID_FileOperation, nil, CLSCTX_ALL, IFileOperation, pfo)); OleCheck(pfo.SetOperationFlags(FOF_SILENT or FOF_NOCONFIRMATION or FOF_NOCONFIRMMKDIR or FOF_NOERRORUI or FOF_RENAMEONCOLLISION or FOFX_PRESERVEFILEEXTENSIONS)); Sink := TMyCopyProgressSink.Create; OleCheck(pfo.CopyItem(psiFrom, psiTo, PChar(DestName), Sink)); OleCheck(pfo.PerformOperations()); 
pfo.GetAnyOperationsAborted(bAborted); if bAborted then begin // abort handling... end; DestName := TMyCopyProgressSink(Sink).CopiedName; // use DestName as needed... except // error handling... end; end; ``` Upvotes: 5 [selected_answer]
2018/03/14
<issue_start>username_0: I am trying to get the following text "Hug­gies Pure Baby Wipes 4 x 64 per pack" shown in the code below. ``` document.write(getContents('wF8UD9Jj8:6D !FC6 q23J (:A6D c I ec A6C A24\<')); Hug­gies Pure Baby Wipes 4 x 64 per pack ``` I have tried using code such as: ``` foreach($element -> find('.offerList-item-description-title') as $title) { foreach($element -> find('text') as $text){ echo $text; } } ``` But just get returned an empty string, any suggestions? Thanks.<issue_comment>username_1: I'm not sure what code your using in your example (and I suspect the getContents function result gets in the way of your method for retrieving the text) but if you wrap the text you're after in a like so: ``` document.write(getContents('wF8UD9Jj8:6D !FC6 q23J (:A6D c I ec A6C A24\<')); Hug­gies Pure Baby Wipes 4 x 64 per pack ``` you can retrieve it using javascript: ``` var $title = document.getElementsByClassName("offerList-item-description-title"); for (var i = 0; i < $title.length; i++) { var span = $title[i].getElementsByTagName("span"); var $text = span[0].innerText || span[0].textContent; //echo $text; console.log("==> " + $text); } ``` Upvotes: 0 <issue_comment>username_2: If you are aware your HTML returned by your scraper does not contain Javascript rendered code, like in your case text is generated by javascript that's why you are getting empty response. What you need is a headless browser like **PhantomJS** you can use PHP wrapper of PhantomJS <http://jonnnnyw.github.io/php-phantomjs/>. This will solve your problem. It has following features: * Load webpages through the PhantomJS headless browser * View detailed response data including page content, headers, status code etc. * Handle redirects * View javascript console errors Hope this helps. Upvotes: 1
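A side note on the snippet itself: the scrambled argument decodes as ROT47 of the visible title (`wF8UD9Jj8:6D` decodes to `Hug&shy;gies`, the `&shy;` being the soft hyphen in "Hug­gies"), so `getContents` appears to be a client-side ROT47 decoder. That is exactly why a plain scraper sees no text: the text only exists after JavaScript runs, as the second answer says. A Python sketch of the decoding (the site's actual implementation is an assumption):

```python
def rot47(s: str) -> str:
    # ROT47 rotates every printable ASCII char (codes 33..126) by 47 positions;
    # applying it twice is the identity, so the same function decodes and encodes.
    return "".join(
        chr(33 + (ord(c) - 33 + 47) % 94) if 33 <= ord(c) <= 126 else c
        for c in s
    )

print(rot47("wF8UD9Jj8:6D !FC6 q23J (:A6D"))  # → Hug&shy;gies Pure Baby Wipes
```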
2018/03/14
<issue_start>username_0: There is a recurring problem regarding status fields and similar predefined set of values. Let's take an example of an ordering system with an order entity which has a status that could be New, In Progress, Paid, etc. The problem: ------------ The Status of an order need to be * stored (in database) * processed (in backend) * communicated (to frontend in web service API) How to do these three activities while keeping: * Preserve the meaning of the status. * efficient storage. Here are some example implementations with their pros and cons: 1- Status Table --------------- * The database will contain a status table with id, name * Order table references the id of the status. ``` CREATE TABLE `status` ( `id` INT NOT NULL, `name` VARCHAR(45) NOT NULL, PRIMARY KEY (`id`)); CREATE TABLE IF NOT EXISTS `order` ( `id` INT NOT NULL AUTOINCREMENT, `status_id` INT NOT NULL, PRIMARY KEY (`id`), INDEX `order_status_idx` (`status` ASC), CONSTRAINT `order_status_id` FOREIGN KEY (`status_id`) REFERENCES `status` (`id`) ON DELETE NO ACTION ON UPDATE NO ACTION); ``` * The backend code has an enum that gives these predefined integers a meaning in the code ``` enum Status { PAID = 7; }; // While processing as action ... order.status = Status::PAID; ``` * The web service API will return the status number ``` order: { id: 1, status_id: 7 } ``` * The frontend code has a similar enum that gives these predefined integers a meaning in the code. (like the backend code) * **Pros:** + The database is well defined and normalized * **Cons:** + The mapping between the status number and meaning is done in three places which gives space for human errors and inconsistency in defining the meaning of a specific status number. 
+ The returned data from the API is not descriptive because `status_id: 7` does not deliver a concrete meaning because it does not include the meaning of the `status_id: 7` 2- Status ENUM -------------- * In database, the order table will contain a status columns with type ENUM containing the predefined statuses. ``` CREATE TABLE IF NOT EXISTS `order` ( `id` INT NOT NULL AUTOINCREMENT, `status` ENUM('PAID') NULL, PRIMARY KEY (`id`)); ``` * The backend code has constant values as code artifacts for the predefined status ``` enum Status { PAID = 'PAID' }; ``` OR ``` class Status { public: static const string PAID = PAID; }; ``` To Be used as follwoing ``` // While processing as action ... order.status = Status::PAID; ``` * The web service API will return the status constant ``` order: { id: 1, status: 'PAID' } ``` * The frontend code will have a similar construct for predefined status constants. (like the backend code) * **Pros:** + The database is well defined and normalized + The returned data from the API is descriptive and deliver the required meaning. + The status constants used already contain their meaning which reduces the chances of errors. * **Cons:** + Using an ENUM type for a column in database has its limitations. Adding a new status constant to that enum later using an ALTER command is expensive specially for huge tables like `order` table. 3- My proposed solution: ------------------------ * The database will contain a status table with one field called `key` with type string which is the primary key of this table. ``` CREATE TABLE `status` ( `key` VARCHAR(45) NOT NULL, PRIMARY KEY (`key`)); ``` * The order table will contain a field called `status` with type string which references the `key` field of the `status` table. 
``` CREATE TABLE IF NOT EXISTS `order` ( `id` INT NOT NULL AUTOINCREMENT, `status` VARCHAR(45) NOT NULL, PRIMARY KEY (`id`), INDEX `order_status_idx` (`status` ASC), CONSTRAINT `order_status` FOREIGN KEY (`status`) REFERENCES `status` (`key`) ON DELETE NO ACTION ON UPDATE NO ACTION); ``` * The backend code has constant values as code artifacts for the predefined status ``` enum Status { PAID = 'PAID' }; ``` OR ``` class Status { public: static const string PAID = PAID; }; ``` To Be used as follwoing ``` // While processing as action ... order.status = Status::PAID; ``` * The web service API will return the status constant ``` order: { id: 1, status: 'PAID' } ``` * The frontend code will have a similar construct for predefined status constants. (like the backend code) * **Pros:** + The database is well defined and normalized + The returned data from the API is descriptive and deliver the required meaning. + The status constants used already contain their meaning which reduces the chances of errors. + Adding a new status constant is simple with INSERT command in the status table. * **Cons:** + ??? I'd like to know if this is a feasible solution or there is a better solution for this recurring problem. --------------------------------------------------------------------------------------------------------- ### Please include reasons why the proposed solution is bad and why your better solution is better Thank you.<issue_comment>username_1: This my approach for this problem: 1. I add a column `status` with type `string` in the `orders` table. 2. Define the constant of all your statuses in your class so you can reference them easily. 3. Make a validation rule on creation of order that the status value is in the only allowed ones you defines earlier. This makes adding a new status very easily by just editing your code base, and the retrieved value for the status is still a string (descriptive). I hope this answer your question. 
Upvotes: 1 <issue_comment>username_2: I suggest this: 1. Store in the DB as status(unsigned tinyint, char(5)). 2. Ids must be powers of 2: 1, 2, 4, 8, ... 3. In backend code the const name must be humanized, but the value an int: `const PAID = 2` 4. In the backend you should not use the consts directly, but a status class object, which will contain methods like `value` and `name`. 5. This class's test will check that all of its values are in the DB and all DB values are covered by the class. > > space for human errors > > > Tests were invented to avoid human errors. Statuses are usually not so complex and do not have so many values to mess with. Enum is evil. <http://komlenic.com/244/8-reasons-why-mysqls-enum-data-type-is-evil/> Regarding your proposal: > > The database is well defined and normalized > > > No. It's denormalized. > > The returned data from the API is descriptive and deliver the required meaning. > > > You can always use a wrapper that goes to the status table to get the human-readable name. > > The status constants used already contain their meaning which reduces the chances of errors. > > > Const names are for humans and values are for Benders. > > Adding a new status constant is simple with an INSERT command in the status table. > > > Same in the 1st solution and in mine. Upvotes: 0
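A minimal sketch of the "status class object" plus sync-test idea suggested above, in Python purely for illustration; the extra status names are hypothetical, and `db_keys` stands in for the result of a `SELECT key FROM status`:

```python
class Status:
    # One authoritative set of constants in code.
    PAID = "PAID"
    NEW = "NEW"                 # illustrative extra values, not from the question
    IN_PROGRESS = "IN_PROGRESS"

    @classmethod
    def all(cls) -> set:
        # Collect every ALL-CAPS string attribute as a status constant.
        return {v for k, v in vars(cls).items()
                if k.isupper() and isinstance(v, str)}

def check_sync(db_keys: set) -> None:
    """Fail loudly if the code constants and the status table ever diverge."""
    missing_in_db = Status.all() - db_keys
    missing_in_code = db_keys - Status.all()
    assert not missing_in_db and not missing_in_code, (missing_in_db, missing_in_code)
```

Running `check_sync` in CI against the real table makes a status added in only one place fail fast, which is the whole point of the test the answer describes.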
2018/03/14
<issue_start>username_0: Im using a tableview to display an array of strings. When I click on a particular row, I want it to be highlighted with a checkmark. When I deselect a row, I want the checkmark to be removed. When I press a button, I want the rows that are currently highlighted to be passed out in an array(newFruitList). My problem is that when I click the first row, the last is highlighted. When I uncheck the first row, the last is unchecked, as if they are the same cell? How do I overcome this? Also, the way I am adding and removing from my new array, is this the correct way to go about doing this? Thanks My Code: ``` class BookingViewController: UIViewController, ARSKViewDelegate, UITextFieldDelegate, UITableViewDelegate, UITableViewDataSource { @IBOutlet weak var table: UITableView! let fruits = ["Apples", "Oranges", "Grapes", "Watermelon", "Peaches"] var newFruitList:[String] = [] override func viewDidLoad() { super.viewDidLoad() self.table.dataSource = self self.table.delegate = self self.table.allowsMultipleSelection = true } func tableView(_ tableView: UITableView, numberOfRowsInSection section: Int) -> Int { return fruits.count } func tableView(_ tableView: UITableView, cellForRowAt indexPath: IndexPath) -> UITableViewCell { var cell = table.dequeueReusableCell(withIdentifier: "Cell") if cell == nil{ cell = UITableViewCell(style: .subtitle, reuseIdentifier: "Cell") } cell?.textLabel?.text = fruits[indexPath.row] return cell! 
} func tableView(_ tableView: UITableView, didSelectRowAt indexPath: IndexPath) { newFruitList.append(fruits[indexPath.row]) if let cell = tableView.cellForRow(at: indexPath) { cell.accessoryType = .checkmark } } func tableView(_ tableView: UITableView, didDeselectRowAt indexPath: IndexPath) { if let index = newFruitList.index(of: fruits[indexPath.row]) { newFruitList.remove(at: index) } if let cell = tableView.cellForRow(at: indexPath) { cell.accessoryType = .none } } @IBAction func bookButtonPressed(_ sender: UIButton) { //testing purposes for i in stride(from: 0, to: newFruitList.count, by: 1){ print(newFruitList[i]) } } ```<issue_comment>username_1: This my approach for this problem: 1. I add a column `status` with type `string` in the `orders` table. 2. Define the constant of all your statuses in your class so you can reference them easily. 3. Make a validation rule on creation of order that the status value is in the only allowed ones you defines earlier. This makes adding a new status very easily by just editing your code base, and the retrieved value for the status is still a string (descriptive). I hope this answer your question. Upvotes: 1 <issue_comment>username_2: I suggest this: 1. Store in DB as status(unsigned tinyint, char(5)). 2. Id must be powers of 2: 1,2,4,8,... 3. At backend code const name must be humanized, but value -- int: `const PAID = 2` 4. At backend you should not use consts directly, but use status class object, which will contain some methods like `value` and `name`. 5. This class's test will check that all of it's values are in DB and all DB's values are covered by class. > > space for human errors > > > Tests invented to avoid human errors. Statuses are usually not so complex and has not so many values to mess with them. Enum is evil. <http://komlenic.com/244/8-reasons-why-mysqls-enum-data-type-is-evil/> Regarding your proposal: > > The database is well defined and normalized > > > No. It's denormalized. 
> > The returned data from the API is descriptive and deliver the required meaning. > > > You always can use wrapper, that goes into status table to get human name. > > The status constants used already contain their meaning which reduces the chances of errors. > > > Const name are for humans and values are for Benders. > > Adding a new status constant is simple with INSERT command in the status table. > > > Same in 1st and my solution. Upvotes: 0
2018/03/14
<issue_start>username_0: I am frustrated about the right config of prettier to achieve normal style for jsx tags. I want this : ``` const template = ( Hello world! ============= This is some info ); ``` and I got this: ``` const template = ( Hello world! ============= This is some info {' '} ); ``` My .eslintrc is: ``` { "parser": "babel-eslint", "extends": [ "airbnb", "prettier", "prettier/react" ], "env": { "browser": true }, "plugins": [ "react", "jsx-a11y", "import", "prettier" ], "rules": { "no-console": "off", "linebreak-style": "off", "react/jsx-filename-extension": [ 1, { "extensions": [".js", ".jsx"] } ], "react/react-in-jsx-scope": 0, "prettier/prettier": [ "error", { "trailingComma": "es5", "singleQuote": true, "printWidth": 120 } ] } } ``` I've been searching about this problem and I couldn't figure out what to do. And If anyone has a good .eslintrc file please post it.<issue_comment>username_1: I can't find anything wrong with your configuration. There are a few options but I don't really know your setup. I think there might be a configuration of Prettier in your IDE. So for example, if you use VSCode and have the Prettier extension installed you can override some of these settings. Second thing is that I think the {' '} is because of a trailing space in your example. There really is no way for me to check or reproduce. I'd suggest installing an addon which removes trailing spaces. I use Trailing Spaces for VSCode. The last thing you can check is by installing `eslint-config-react-app` and doing something like this: ``` "eslintConfig": { "extends": "react-app" }, ``` My rootnode is of course from my `package.json` and should be different for your `. eslintrc` but [extending](https://eslint.org/docs/user-guide/configuring#extending-configuration-files) should be the same. These are just a few things I can think of, hope they help and hope you get your linting solved, those issues also bug me a lot! 
Upvotes: 2 [selected_answer]<issue_comment>username_2: I was having the same problem; it was solved by removing the Beautify extension. Upvotes: 0
2018/03/14
<issue_start>username_0: I'm using the code by Arg0n from [add class if date is today](https://stackoverflow.com/questions/33935641/add-class-if-date-is-today) which works great! I was wondered if it's possible to add only one class, so if attribute data-date equals today add class *.active*, else add class to the previous date (next available data-date). So there will always be one element highlighted. An added complication is the iframe which loads the active date's html file. [A visual diagram of the process may make it clearer](https://i.stack.imgur.com/ypfmM.png) Hopefully any solutions will help someone else out there too. Any help appreciated. ```js $(document).ready(function(){ // Add class="active" to calendar archive var currentDate = Date.parse((new Date()).toLocaleDateString()); $('.enewsarchive a').each(function(){ var specifiedDate = $(this).data('date'); var yr = $(this).data('date').substr(0,4); var mth = $(this).data('date').substr(5,2); var enewsurl = 'http://www.example.com/images/emails/' + yr + '/' + mth + '/' + specifiedDate + '.html'; var tdate = Date.parse(specifiedDate); if (!isNaN(tdate) && tdate == currentDate){ $(this).addClass('active'); // today $('#enews').attr('src',enewsurl); // change current iframe } else if (!isNaN(tdate) && currentDate - tdate > 0){ $(this).addClass(''); // past dates $('#enews').attr('src',enewsurl); } else { $(this).addClass(''); // future } }); // Load iframe with archives $('.enewsarchive a').click(function(e){ e.preventDefault(); $('#enews').attr('src',$(this).attr('href')); $('.enewsarchive a.active').removeClass('active'); $(this).addClass('active'); }); }); ``` ```css .enewsarchivepost{float:left} .enewsarchive .active .enewsarchivecalendar{background:#c00;color:#fff} .enewsarchivecalendar{padding:10px;width:100px;background:#eee} ``` ```html [01*APR*](http://www.example.com/enews/2018/04/2018-03-01.html) [14*MAR*](http://www.example.com/enews/2018/03/2018-03-14.html) 
[08*MAR*](http://www.example.com/enews/2018/03/2018-03-08.html)
```
2018/03/14
<issue_start>username_0: I'm trying to find the **nth digit** of an integer **(from right to left)**. I'm new to programming but have been using this site a lot for reference - up until now I've resisted passing my problems on but I cannot understand this one in the least, even after hours of effort. This is the code I have so far but for **FindDigit(int 5673, int 4)** it gives **53** instead of 5, **FindDigit(int 5673, int 3)** gives **51** instead of 6 ``` public class DigitFinder { public static int FindDigit(int num, int nth) { num = Math.Abs(num); string answer = Convert.ToString(num); int i = answer.Length; return ans[i-nth]; } } ``` I cannot understand at all why it returns a 2 digit number. Any guidance at all appreciated!<issue_comment>username_1: 53 is the ASCII code of the **character** `5`. Just subtract the character `0`, i.e. numeric 48. However, it is *usually* a good idea to avoid string manipulation for things like this; *if possible* you should probably prefer division/remainder (modulo) arithmetic. Upvotes: 2 <issue_comment>username_2: Just because no one else did, and also because i have *Printable Character OCD* ``` public static int GetLeastSignificantDigit(int number, int digit) { for (var i = 0; i < digit - 1; i++) number /= 10; return number % 10; } ``` [Demo here](https://dotnetfiddle.net/vJflOO) Upvotes: 2 <issue_comment>username_3: I'd just use ``` int result = (num / (int)Math.Pow(10,nth-1)) % 10; ``` Where `num` is the number to get the nth digit from (counted right to left) and `nth` is the "index" of digits you want (again: counted from right to left). Mind that it is 1-based. That is "1" is the rightmost digit. "0" would be out of range. To explain the math: `(int)Math.Pow(10,nth-1)` takes your desired index and decreases it by 1, then takes that as the power of 10. So if you want the 3rd digit, that makes 10 to the power of two equals 100. BTW: the cast to int is necessary because Math.Pow works on double and returns double. 
But we want to keep on working in integer arithmetic. Dividing by the result of above equation "shifts" your number to the right, so your desired digit becomes the rightmost digit. Example: 1234, we want 3rd digit from right ("2") => 1234 / (10^(3-1))= 1234 / 100 = 12 You then "cut out" that rightmost digit by applying the "remainder" (modulo) operator with divisor 10. Example: 12 % 10 = [12 / 10 = 1, Remainder =] 2. Mind that I also would check `nth` to be > 0 and `num` >= 10 ^ (nth-1). (never trust user input) Upvotes: 4 [selected_answer]
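For readers following along in another language, the same divide-and-modulo arithmetic in a short Python sketch (the loop variant from the second answer; its behavior matches the accepted formula, and no floating point is involved):

```python
def find_digit(num: int, nth: int) -> int:
    """nth digit of num counted from the right, 1-based: find_digit(5673, 4) is 5."""
    if nth < 1:
        raise ValueError("nth must be >= 1")
    num = abs(num)
    # Shift the wanted digit into the rightmost position, then cut it out.
    for _ in range(nth - 1):
        num //= 10
    return num % 10

print(find_digit(5673, 4))  # → 5
print(find_digit(5673, 3))  # → 6
```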
2018/03/14
<issue_start>username_0: Environment: * Ubuntu 16.04 * Angular CLI 1.7.3 I'm getting the error by executing **ng generate component dashboard**, but it also happens with **ng generate c** ``` $ ng generate component dashboard The "c" alias is already in use by the "--collection" option and cannot be used by the "--change-detection" option. Please use a different alias. ``` I tried to look for an error in npm, ant I got the following error ``` $npm list ... npm ERR! peer dep missing: @angular-devkit/core@0.4.5, required by @schematics/angular@0.4.5 npm ERR! peer dep missing: @angular-devkit/schematics@0.4.5, required by @schematics/angular@0.4.5 ``` It looks something related with npm validate alias **[function angular-cli.command.prototype.validateAlias (option, alias)](https://npmdoc.github.io/node-npmdoc-angular-cli/build/apidoc.html#apidoc.element.angular-cli.command.prototype.validateAlias)** but I'm not sure why is taking "c" instead of "component".<issue_comment>username_1: 53 is the ASCII code of the **character** `5`. Just subtract the character `0`, i.e. numeric 48. However, it is *usually* a good idea to avoid string manipulation for things like this; *if possible* you should probably prefer division/remainder (modulo) arithmetic. Upvotes: 2 <issue_comment>username_2: Just because no one else did, and also because i have *Printable Character OCD* ``` public static int GetLeastSignificantDigit(int number, int digit) { for (var i = 0; i < digit - 1; i++) number /= 10; return number % 10; } ``` [Demo here](https://dotnetfiddle.net/vJflOO) Upvotes: 2 <issue_comment>username_3: I'd just use ``` int result = (num / (int)Math.Pow(10,nth-1)) % 10; ``` Where `num` is the number to get the nth digit from (counted right to left) and `nth` is the "index" of digits you want (again: counted from right to left). Mind that it is 1-based. That is "1" is the rightmost digit. "0" would be out of range. 
To explain the math: `(int)Math.Pow(10,nth-1)` takes your desired index and decreases it by 1, then takes that as the power of 10. So if you want the 3rd digit, that makes 10 to the power of two equals 100. BTW: the cast to int is necessary because Math.Pow works on double and returns double. But we want to keep on working in integer arithmetic. Dividing by the result of above equation "shifts" your number to the right, so your desired digit becomes the rightmost digit. Example: 1234, we want 3rd digit from right ("2") => 1234 / (10^(3-1))= 1234 / 100 = 12 You then "cut out" that rightmost digit by applying the "remainder" (modulo) operator with divisor 10. Example: 12 % 10 = [12 / 10 = 1, Remainder =] 2. Mind that I also would check `nth` to be > 0 and `num` >= 10 ^ (nth-1). (never trust user input) Upvotes: 4 [selected_answer]
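The same division/modulo trick is easy to sanity-check in any language; here is a minimal Python sketch (the function name is mine):

```python
def nth_digit(num, nth):
    """Return the nth digit of num, counted from the right (1-based)."""
    if nth < 1:
        raise ValueError("nth must be >= 1")
    # Shift the wanted digit into the rightmost position, then cut it out.
    return (num // 10 ** (nth - 1)) % 10

print(nth_digit(1234, 3))  # 2, matching the worked example above
```

As the answer notes, positions past the most significant digit come back as 0, so validating `nth` against the number's length is left to the caller.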
2018/03/14
1,038
4,273
<issue_start>username_0: I have a problem with my Angular app. I have these functions: ``` private callUserInfo(): any { this.isLoading = true; return this._ajaxService.getService('/system/ping') .map( result => { this.userId = result.participant.substring(result.participant.indexOf('#')); this.isLoading = false; } ) .catch(error => { return Observable.throw(error); }); } public loadUserData(userName: string): any { this.isLoading = true; return this._ajaxService.getService('/User/' + userName) .map( result => { const data = result[0]; this.user = new User( data.id, data.contacts[0].email, data.name, data.surname, data.address.street, data.address.city, data.address.state, data.address.country, data.address.postCode, data.address.timeZone); this.isLoading = false; }) .catch(error => { return Observable.throw(error); }); } public getUser(): any { if (this.user == null) { this.callUserInfo().subscribe(() => { this.loadUserData(this.userId).subscribe(() => { return this.user; }); }); } else { return this.user; } } ``` In my component I call these service functions like this (AuthService is the service with the functions defined above): ``` constructor(private _auth: AuthService) { this.user = _auth.getUser(); } ``` But it still returns null (because the Ajax calls are not finished?). Can someone explain how to chain these two calls? The first one hits the system/ping service, and based on its return value (userId) I need to make the second Ajax call (/user/id). After these two calls the user is defined in my service and I can return it to other components. Can someone explain what I am doing wrong, or how I can do it better? I'm using the newest version of Angular. P.S. getService is from my wrapper service: ``` getService(url: string): Observable { return this.http .get(this.base + url, this.options) .map(this.extractData) .catch(this.handleError); } ```<issue_comment>username_1: You need to call the second service in the `subscribe` or in the `map` method, i.e.
after the `Observable` has emitted and the first call has resolved. Once that is resolved, you should call your chained service. A sample snippet from my POC might help you ``` this._accountListService.getAccountsFromBE().subscribe( response => { this.response = response; this._accountListService.getAccountSorting().subscribe( response => { this.acctSort = response; if (response.prodCode) { this._accountListService.getAccountOrder().subscribe( response => { this.acctOrder = response; this.response = this.setAccountOrder(this.response); this.response.sort(this.myComparator); this.acctFlag = true; if (this.prodDesc) { this.loader = false; this.accountDetl = this.response[0]; this.accountDetl.entCdeDesc = this.prodDesc[this.accountDetl.entProdCatCde]; } }, err => console.log(err) ); } }, err => console.log(err) ); }, err => console.log(err) ); ``` Upvotes: 0 <issue_comment>username_2: You are not returning anything in case `this.user == null`. Change your function as follows: ``` userObservable = new BehaviorSubject(null); public getUser(): any { if (this.user == null) { this.callUserInfo().subscribe(() => { this.loadUserData(this.userId).subscribe(() => { this.userObservable.next(this.user); }); }); return this.userObservable.asObservable(); } else { return this.userObservable.asObservable(); } } ``` and then you need to subscribe to it: ``` constructor(private _auth: AuthService) { _auth.getUser().subscribe(user => this.user = user); } ``` Upvotes: 2 [selected_answer]
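The accepted fix is RxJS-specific, but the underlying pattern (await the first call, feed its result into the second, cache the final value) is general; here is a minimal Python asyncio sketch with stand-in endpoints (the names and return shapes are invented, not the question's real API):

```python
import asyncio

_user_cache = None  # cached result, like the service's `user` field

async def ping():
    # stand-in for GET /system/ping returning the participant id
    return "user#42"

async def load_user(user_id):
    # stand-in for GET /User/{id}
    return {"id": user_id, "name": "Alice"}

async def get_user():
    """Only fire the two chained requests on a cache miss."""
    global _user_cache
    if _user_cache is None:
        user_id = await ping()                  # first call must finish...
        _user_cache = await load_user(user_id)  # ...before the dependent one starts
    return _user_cache

user = asyncio.run(get_user())
print(user["name"])  # Alice
```

The caller never sees a half-finished value: it awaits (or subscribes to) the chained result instead of reading a field that the callbacks have not populated yet.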
2018/03/14
387
1,837
<issue_start>username_0: I have learned statistics including mean, median, and mode, and different tests such as the Z test, F test, and chi-square. But when participating in difficult numeric data prediction challenges on Kaggle and other platforms, I hardly see anyone using statistical tests like z, f, chi-square, or normalization of data; all we use are boxplots and bar plots to see the mean, median, mode, etc. My question is: where are these tests an integral part of data science, and for what sort of problems were they mainly designed (research-based ones)? What portion of statistics should ideally be used in a data science problem, and why is only some portion used when all of statistics is supposedly a must for data science? I am asking about tests and other statistics, not the algorithms.<issue_comment>username_1: You're most likely to see statistical hypothesis testing in data science if you're looking at something like A/B testing, where your goal is to determine whether there is a reliable difference between two samples and the size of that difference. Kaggle competitions specifically are supervised learning problems rather than hypothesis testing, which is why you don't see people using things like chi-squared. (Which makes sense: if you have ten people do hypothesis testing on the same dataset, they should all get pretty much the same answer, which would make for a pretty uninteresting competition.) Personally, I think it's good to be familiar with both statistical hypothesis testing and machine-learning techniques, since they have different uses. Hope that helps! :) Upvotes: 2 [selected_answer]<issue_comment>username_2: Every problem in data science requires a different approach, so generic statistics might not apply. There will be problems where some statistics are not needed. Upvotes: 0
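To make the A/B-testing point concrete, here is a two-proportion z-test sketched with only the standard library (the conversion counts are made up):

```python
from math import erf, sqrt

def two_proportion_ztest(success_a, n_a, success_b, n_b):
    """Two-sided z-test for a difference between two conversion rates."""
    p_a, p_b = success_a / n_a, success_b / n_b
    p_pool = (success_a + success_b) / (n_a + n_b)
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_a - p_b) / se
    # two-sided p-value from the standard normal CDF
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return z, p_value

z, p = two_proportion_ztest(120, 1000, 150, 1000)  # variant A vs variant B
print(round(z, 2), round(p, 3))
```

This is exactly the kind of question (is the observed difference reliable?) that hypothesis tests answer and that a leaderboard-style prediction contest never asks.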
2018/03/14
3,141
13,225
<issue_start>username_0: My current Android application allows users to search for content remotely. e.g. The user is presented with an `EditText` which accepts their search strings and triggers a remote API call that returns results that match the entered text. Worst case is that I simply add a `TextWatcher` and trigger an API call each time `onTextChanged` is called. This could be improved by forcing the user to enter at least N characters to search for before making the first API call. The "Perfect" solution would have the following features: Once the user starts entering search string(s), periodically (every M milliseconds) consume the entire string(s) entered. Trigger an API call each time the period expires and the current user input is different to the previous user input. [Is it possible to have a dynamic timeout related to the entered text's length? e.g. while the text is "short" the API response size will be large and take longer to return and parse; as the search text gets longer the API response size will reduce along with "inflight" and parsing time] When the user restarts typing into the EditText field, restart the periodic consumption of text. Whenever the user presses the ENTER key, trigger a "final" API call, and stop monitoring user input into the EditText field. Set a minimum length of text the user has to enter before an API call is triggered, but combine this minimum length with an overriding Timeout value so that when the user wishes to search for a "short" text string they can. I am sure that RxJava and/or RxBindings can support the above requirements, however so far I have failed to realise a workable solution.
My attempts include ``` private PublishSubject publishSubject; publishSubject = PublishSubject.create(); publishSubject.filter(text -> text.length() > 2) .debounce(300, TimeUnit.MILLISECONDS) .toFlowable(BackpressureStrategy.LATEST) .subscribe(new Consumer() { @Override public void accept(final String s) throws Exception { Log.d(TAG, "accept() called with: s = [" + s + "]"); } }); mEditText.addTextChangedListener(new TextWatcher() { @Override public void beforeTextChanged(final CharSequence s, final int start, final int count, final int after) { } @Override public void onTextChanged(final CharSequence s, final int start, final int before, final int count) { publishSubject.onNext(s.toString()); } @Override public void afterTextChanged(final Editable s) { } }); ``` And this with RxBinding ``` RxTextView.textChanges(mEditText) .debounce(500, TimeUnit.MILLISECONDS) .subscribe(new Consumer(){ @Override public void accept(final CharSequence charSequence) throws Exception { Log.d(TAG, "accept() called with: charSequence = [" + charSequence + "]"); } }); ``` Neither of these gives me a conditional filter that combines entered text length and a Timeout value. I've also replaced debounce with throttleLast and sample, neither of which furnished the required solution. Is it possible to achieve my required functionality? **DYNAMIC TIMEOUT** An acceptable solution would cope with the following three scenarios: i). The user wishes to search for any word beginning with "P" ii). The user wishes to search for any word beginning with "Pneumo" iii). The user wishes to search for the word "Pneumonoultramicroscopicsilicovolcanoconiosis" In all three scenarios as soon as the user types the letter "P" I will display a progress spinner (however no API call will be executed at this point). I would like to balance the need to give the user search feedback within a responsive UI against making "wasted" API calls over the network.
If I could rely on the user entering their search text then clicking the "Done" (or "Enter") key I could initiate the final API call immediately. *Scenario One* As the text entered by the user is short in length (e.g. 1 character long), my timeout value will be at its maximum. This gives the user the opportunity to enter additional characters and saves "wasted API calls". As the user wishes to search for the letter "P" alone, once the Max Timeout expires I will execute the API call and display the results. This scenario gives the user the worst user experience as they have to wait for my Dynamic Timeout to expire and then wait for a Large API response to be returned and displayed. They will not see any intermediary search results. *Scenario Two* This scenario combines scenario one as I have no idea what the user is going to search for (or the search string's final length). If they type all 6 characters "quickly" I can execute one API call, however the slower they enter the 6 characters, the greater the chance of executing wasted API calls. This scenario gives the user an improved user experience as they have to wait for my Dynamic Timeout to expire, however they do have a chance of seeing intermediary search results. The API responses will be smaller than scenario one. *Scenario Three* This scenario combines scenarios one and two as I have no idea what the user is going to search for (or the search string's final length). If they type all 45 characters "quickly" I can execute one API call (maybe!), however the slower they type the 45 characters, the greater the chance of executing wasted API calls. I'm not tied to any technology that delivers my desired solution. I believe Rx is the best approach I've identified so far.<issue_comment>username_1: You might find what you need in the [`as`](http://reactivex.io/RxJava/javadoc/io/reactivex/Observable.html#as-io.reactivex.ObservableConverter-) operator.
It takes an `ObservableConverter` which allows you to convert your source `Observable` into an arbitrary object. That object can be another `Observable` with arbitrarily complex behavior. ``` public class MyConverter implements ObservableConverter> { Observable apply(Observable upstream) { final PublishSubject downstream = PublishSubject.create(); // subscribe to upstream // subscriber publishes to downstream according to your rules return downstream; } } ``` Then use it like this: ``` someObservableOfFoo.as(new MyConverter())... // more operators ``` **Edit:** I think [`compose`](http://reactivex.io/RxJava/javadoc/) may be more paradigmatic. It's a less powerful version of `as` specifically for producing an `Observable` instead of any object. Usage is essentially the same. See this [tutorial](http://blog.danlew.net/2015/03/02/dont-break-the-chain/). Upvotes: 0 <issue_comment>username_2: Something like this should work (didn't really try it) ``` Single firstTypeOnlyStream = RxTextView.textChanges(mEditText) .skipInitialValue() .map(CharSequence::toString) .firstOrError(); Observable restartTypingStream = RxTextView.textChanges(mEditText) .filter(charSequence -> charSequence.length() == 0); Single latestTextStream = RxTextView.textChanges(mEditText) .map(CharSequence::toString) .firstOrError(); Observable enterStream = RxTextView.editorActionEvents(mEditText, actionEvent -> actionEvent.actionId() == EditorInfo.IME\_ACTION\_DONE); firstTypeOnlyStream .flatMapObservable(\_\_ -> latestTextStream .toObservable() .doOnNext(text -> nextDelay = delayByLength(text.length())) .repeatWhen(objectObservable -> objectObservable .flatMap(o -> Observable.timer(nextDelay, TimeUnit.MILLISECONDS))) .distinctUntilChanged() .flatMap(text -> { if (text.length() > MINIMUM\_TEXT\_LENGTH) { return apiRequest(text); } else { return Observable.empty(); } }) ) .takeUntil(restartTypingStream) .repeat() .takeUntil(enterStream) .mergeWith(enterStream.flatMap(\_\_ -> 
latestTextStream.flatMapObservable(this::apiRequest) )) .subscribe(requestResult -> { //do your thing with each request result }); ``` The idea is to construct the stream based on sampling rather than the text-changed events themselves, based on your requirement to sample every X time units. The way I did it here is to construct one stream (`firstTypeOnlyStream`) for the initial triggering of the events (the first time the user inputs text); this stream will start the entire processing stream with the first typing of the user. Next, when this first trigger arrives, we will basically sample the edit text periodically using the `latestTextStream`. `latestTextStream` is not really a stream over time, but rather a sampling of the current state of the `EditText` using the `InitialValueObservable` property of RxBinding (it simply emits on subscription the current text on the `EditText`). In other words it's a fancy way to get the current text on subscription, and it's equivalent to: `Observable.fromCallable(() -> mEditText.getText().toString());` Next, for the dynamic timeout/delay, we update `nextDelay` based on the text length and use `repeatWhen` with a timer to wait for the desired time. Together with `distinctUntilChanged`, it should give the desired sampling based on text length. Further on, we'll fire the request based on the text (if long enough). **Stop by Enter** - use `takeUntil` with `enterStream`, which will be triggered on Enter and will also trigger the final query. **Restarting** - when the user 'restarts' typing - i.e. the text is empty, `.takeUntil(restartTypingStream)` + `repeat()` will stop the stream when an empty string enters, and restart it (resubscribe).
Upvotes: 4 [selected_answer]<issue_comment>username_3: Well, you could use something like this: ``` RxSearch.fromSearchView(searchView) .debounce(300, TimeUnit.MILLISECONDS) .filter(item -> item.length() > 1) .observeOn(AndroidSchedulers.mainThread()) .subscribe(query -> { adapter.setNamesList(namesAPI.searchForName(query)); adapter.notifyDataSetChanged(); apiCallsTextView.setText("API CALLS: " + apiCalls++); }); public class RxSearch { public static Observable fromSearchView(@NonNull final SearchView searchView) { final BehaviorSubject subject = BehaviorSubject.create(""); searchView.setOnQueryTextListener(new SearchView.OnQueryTextListener() { @Override public boolean onQueryTextSubmit(String query) { subject.onCompleted(); return true; } @Override public boolean onQueryTextChange(String newText) { if (!newText.isEmpty()) { subject.onNext(newText); } return true; } }); return subject; } } ``` [reference blog](https://viblo.asia/p/using-rxjava-in-searchview-android-gDVK2kavZLj) Upvotes: 1 <issue_comment>username_4: Your query can be easily solved using RxJava2 methods. Before I post the code I will list the steps of what I am doing: 1. add a PublishSubject that will take your inputs, with a filter that checks whether the input is longer than two characters. 2. add the debounce method so that all input events fired within 300 ms are ignored and only the final query fired after 300 ms is taken into consideration. 3. now add a switchMap and put your network request into it. 4. subscribe to your event.
The code is as follows : ``` subject = PublishSubject.create(); //add this inside your oncreate getCompositeDisposable().add(subject .doOnEach(stringNotification -> { if(stringNotification.getValue().length() < 3) { getMvpView().hideEditLoading(); getMvpView().onFieldError("minimum 3 characters required"); } }) .debounce(300, TimeUnit.MILLISECONDS) .filter(s -> s.length() >= 3) .switchMap(s -> getDataManager().getHosts( getDataManager().getDeviceToken(), s).subscribeOn(Schedulers.io())) .observeOn(AndroidSchedulers.mainThread()) .subscribe(hostResponses -> { getMvpView().hideEditLoading(); if (hostResponses.size() != 0) { if (this.hostResponses != null) this.hostResponses.clear(); this.hostResponses = hostResponses; getMvpView().setHostView(getHosts(hostResponses)); } else { getMvpView().onFieldError("No host found"); } }, throwable -> { getMvpView().hideEditLoading(); if (throwable instanceof HttpException) { HttpException exception = (HttpException) throwable; if (exception.code() == 401) { getMvpView().onError(R.string.code_expired, BaseUtils.TOKEN_EXPIRY_TAG); } } }) ); ``` this will be your textwatcher: ``` searchView.addTextChangedListener(new TextWatcher() { @Override public void beforeTextChanged(CharSequence charSequence, int i, int i1, int i2) { } @Override public void onTextChanged(CharSequence charSequence, int i, int i1, int i2) { subject.onNext(charSequence.toString()); } @Override public void afterTextChanged(Editable editable) { } }); ``` P.S. This is working for me!! Upvotes: 1
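Debounce-with-minimum-length logic like the answers above describe can be verified offline with a small timestamp simulation; this pure-Python sketch (timings and thresholds invented) fires a query only when typing has settled and the text is long enough:

```python
def queries_to_fire(events, min_len=3, debounce_ms=300):
    """events: list of (timestamp_ms, text) pairs, oldest first.
    A text 'fires' when no newer keystroke follows within debounce_ms,
    it meets the minimum length, and it differs from the last fired text
    (the distinctUntilChanged behaviour)."""
    fired = []
    for i, (t, text) in enumerate(events):
        settled = i == len(events) - 1 or events[i + 1][0] - t >= debounce_ms
        if settled and len(text) >= min_len and (not fired or fired[-1] != text):
            fired.append(text)
    return fired

# fast typing: only the final, settled text triggers an API call
print(queries_to_fire([(0, "p"), (100, "pn"), (180, "pne"), (250, "pneu")]))  # ['pneu']
# a pause mid-typing triggers an intermediate query as well
print(queries_to_fire([(0, "abc"), (500, "abcd")]))  # ['abc', 'abcd']
```

A dynamic timeout, as the question asks for, would simply make `debounce_ms` a function of `len(text)` instead of a constant.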
2018/03/14
449
1,698
<issue_start>username_0: How do I delete an AWS ECR repository which contains images through CloudFormation? I get the error below while deleting it. **The repository with name 'test' in registry with id '\*\*\*\*\*\*\*\*\*\*' cannot be deleted because it still contains images**<issue_comment>username_1: I was able to do this by first deleting all images in ECR and then going back to CloudFormation and deleting again. Instructions for deleting images are here: <https://docs.aws.amazon.com/AmazonECR/latest/userguide/delete_image.html>. After I did that, I was able to head back to CloudFormation and delete with no problems. Upvotes: 1 <issue_comment>username_2: Leaving my approach to solve this using Python's boto3 client: (1) empty the repository and then (2) delete the stack. ```py import boto3 ecr_client = boto3.client('ecr') ... # Apply only to ecr cfn template if '-ecr' in stack_name: print('Deleting existing images...') image_ids = ecr_client.list_images(repositoryName=ECR_REPO_NAME)['imageIds'] ecr_client.batch_delete_image( repositoryName=ECR_REPO_NAME, imageIds=image_ids ) print('ECR repository is now empty.') # Now delete stack containing ECR repository delete_stack(**cf_template) ``` Upvotes: 2 <issue_comment>username_3: There doesn't seem to be a way to do it all via CloudFormation, but you can do it with a single CLI command, instead of resorting to Python or multiple image delete commands. ``` aws ecr delete-repository \ --repository-name \ --force ``` The --force flag will cause the deletion of the images also: ``` --force | --no-force (boolean) If a repository contains images, forces the deletion. ``` Upvotes: 1
2018/03/14
576
2,128
<issue_start>username_0: I have a `collectionviewController` and a `containerView` that contains a `text field` and `button` that on click sends a message. The `containerView` is pinned to bottom of my `view` with a fixed height. The `collectionViewCell` is going under the `containerView`. Like this [My simulator screen how my view is right now](https://i.stack.imgur.com/vyVvl.png) I want the view in way that `collectionView`'s bottom always Stays on top the `containerview`. Like this [My simulator screen how my view needs to be](https://i.stack.imgur.com/fDDD0.png) I also tried setting `collectionView`'s bottom constraint to top constraint of the `containerView`. Thanks in advance.
2018/03/14
2,988
10,563
<issue_start>username_0: I am trying to gather some bootstrapped estimates for summary statistics from a dataset, but I want to resample parts of the dataset at different rates, which has led me to lean on nested for loops. Specifically, suppose there are two groups in my dataset, and each group is further divided into test and control. Group 1 has a 75% / 25% test-control ratio, and Group 2 has a 50% / 50% test-control ratio. I want to resample such that the dataset is the same size, but the test-control ratios are 90% / 10% for both groups... in other words, resample different subgroups at different rates, which strikes me as different from what the `boot` package normally does. In my dataset, I created a `group` variable representing the groups, and a `groupT` variable representing group concatenated with test/control, e.g.: ``` id group groupT 1 1 1T 2 1 1T 3 2 2T 4 1 1C 5 2 2C ``` Here's what I am running right now, with `nreps` arbitrarily set to be my number of bootstrap replications: ``` for (j in 1:nreps){ bootdat <- datafile[-(1:nrow(datafile)),] ## initialize empty dataset for (i in unique(datafile$groups)){ tstring<-paste0(i,"T") ## e.g. 1T cstring<-paste0(i,"C") ## e.g. 1C ## Size of test group resample should be ~90% of total group size tsize<-round(.90*length(which(datafile$groups==i)),0) ## Size of control group resample should be total group size minus test group size csize<-length(which(datafile$groups==i))-tsize ## Continue building bootdat by rbinding the test and control resample ## before moving on to the next group ## Note the use of datafile$groupT==tstring to ensure I'm only sampling from test, etc. 
bootdat<-rbind(bootdat,datafile[sample(which(datafile$groupT==tstring),size=tsize, replace=TRUE),]) bootdat<-rbind(bootdat,datafile[sample(which(datafile$groupT==cstring),size=csize, replace=TRUE),]) } ## Here, there is code to grab some summary statistics from bootdat ## and store them in statVector[j] before moving on to the next replication } ``` With a dataset size of about 1 million total records, this takes 3-4 minutes per replication. I feel certain there is a better way to do this either with `sapply` or possibly some of the *dplyr* functions, but I have come up empty in my attempts so far. Any help would be appreciated!<issue_comment>username_1: I'd strongly encourage you to look into data.table and foreach, using keyed searches for bootstraps. It'll allow you to do a single bootstrap very rapidly, and you can run each bootstrap independently on a different core. Each bootstrap of the below takes 0.5 seconds on my machine, searching through a table of 1 million rows. Something like the following should get you started: ``` library(data.table) library(foreach) library(doMC) registerDoMC(cores=4) # example data dat <- data.table(id=1:1e6, group=sample(2, size=1e6, replace=TRUE), test_control=sample(c("T","C"), size=1e5, replace=TRUE)) # define number of bootstraps nBootstraps <- 1000 # define sampling fractions fraction_test <- 0.90 fraction_control <- 1 - fraction_test # get number that you want to sample from each group N.test <- round(fraction_test * dim(dat)[1]) N.control <- round(fraction_control * dim(dat)[1]) # key data by id setkey(dat, id) # get ID values for each combination, to be used for keyed search during bootstrapping group1_test_ids <- dat[group==1 & test_control=="T"]$id group1_control_ids <- dat[group==1 & test_control=="C"]$id group2_test_ids <- dat[group==2 & test_control=="T"]$id group2_control_ids <- dat[group==2 & test_control=="C"]$id results <- foreach(n = 1:nBootstraps, .combine="rbind", .inorder=FALSE) %dopar% { # sample each 
group with the defined sizes, with replacement g1T <- dat[.(sample(group1_test_ids, size=N.test, replace=TRUE))] g1C <- dat[.(sample(group1_control_ids, size=N.control, replace=TRUE))] g2T <- dat[.(sample(group2_test_ids, size=N.test, replace=TRUE))] g2C <- dat[.(sample(group2_control_ids, size=N.control, replace=TRUE))] dat.all <- rbindlist(list(g1T, g1C, g2T, g2C)) dat.all[, bootstrap := n] # do summary stats here with dat.all, return the summary stats data.table object return(dat.summarized) } ``` EDIT: example below includes a lookup table for each of any arbitrary number of unique groups. The IDs corresponding to each combination of group + (test OR control) can be referenced within a foreach loop for simplicity. With lower numbers for N.test and N.control (900 and 100) it spits out the results of 1000 bootstraps in ``` library(data.table) library(foreach) # example data dat <- data.table(id=1:1e6, group=sample(24, size=1e6, replace=TRUE), test_control=sample(c("T","C"), size=1e5, replace=TRUE)) # save vector of all group values & change group to character vector for hashed environment lookup all_groups <- as.character(sort(unique(dat$group))) dat[, group := as.character(group)] # define number of bootstraps nBootstraps <- 100 # get number that you want to sample from each group N.test <- 900 N.control <- 100 # key data by id setkey(dat, id) # all values for group # Set up lookup table for every combination of group + test/control control.ids <- new.env() test.ids <- new.env() for(i in all_groups) { control.ids[[i]] <- dat[group==i & test_control=="C"]$id test.ids[[i]] <- dat[group==i & test_control=="T"]$id } results <- foreach(n = 1:nBootstraps, .combine="rbind", .inorder=FALSE) %do% { foreach(group.i = all_groups, .combine="rbind") %do% { # get IDs that correspond to this group, for both test and control control_id_vector <- control.ids[[group.i]] test_id_vector <- test.ids[[group.i]] # search and bind controls <- dat[.(sample(control_id_vector, 
size=N.control, replace=TRUE))] tests <- dat[.(sample(test_id_vector, size=N.test, replace=TRUE))] dat.group <- rbindlist(list(controls, tests)) dat.group[, bootstrap := n] return(dat.group[]) } # summarize across all groups for this bootstrap and return summary stat data.table object } ``` yielding ``` > results id group test_control bootstrap 1: 701570 1 C 1 2: 424018 1 C 1 3: 909932 1 C 1 4: 15354 1 C 1 5: 514882 1 C 1 --- 23999996: 898651 24 T 1000 23999997: 482374 24 T 1000 23999998: 845577 24 T 1000 23999999: 862359 24 T 1000 24000000: 602078 24 T 1000 ``` This doesn't involve any of the summary stat calculation time, but here 1000 bootstraps were pulled out on 1 core serially in ``` user system elapsed 62.574 1.267 63.844 ``` If you need to manually code N to be different for each group, you can do the same thing as with id lookup ``` # create environments control.Ns <- new.env() test.Ns <- new.env() # assign size values control.Ns[["1"]] <- 900 test.Ns[["1"]] <- 100 control.Ns[["2"]] <- 400 test.Ns[["2"]] <- 50 ... ... 
control.Ns[["24"]] <- 200 test.Ns[["24"]] <- 5 ``` then change the big bootstrap loop to look up these values based on the loop's current group: ``` results <- foreach(n = 1:nBootstraps, .combine="rbind", .inorder=FALSE) %do% { foreach(group.i = all_groups, .combine="rbind") %do% { # get IDs that correspond to this group, for both test and control control_id_vector <- control.ids[[group.i]] test_id_vector <- test.ids[[group.i]] # get size values N.control <- control.Ns[[group.i]] N.test <- test.Ns[[group.i]] # search and bind controls <- dat[.(sample(control_id_vector, size=N.control, replace=TRUE))] tests <- dat[.(sample(test_id_vector, size=N.test, replace=TRUE))] dat.group <- rbindlist(list(controls, tests)) dat.group[, bootstrap := n] return(dat.group[]) } # summarize across all groups for this bootstrap and return summary stat data.table object } ``` Upvotes: 3 [selected_answer]<issue_comment>username_2: Just like [username_1](https://stackoverflow.com/users/7753364/username_1), I recommend taking a look at `data.table` it is usually very efficient in solving such problems, however if you want to choose to work with `dplyr` then you can try doing something like this: ``` summary_of_boot_data <- lapply(1:nreps, function(y){ # get bootdata bootdata <- lapply(unique(datafile$group), function(x){ tstring<-paste0(x,"T") cstring<-paste0(x,"C") tsize<-round(.90*length(which(datafile$group==x)),0) csize<-length(which(datafile$group==x))-tsize df <-rbind(datafile[sample(which(datafile$groupT==tstring), size=tsize, replace=TRUE),], datafile[sample(which(datafile$groupT==cstring), size=csize, replace=TRUE),]) return(df) }) %>% do.call(rbind, .) # return your summary thing for bootdata e.g. 
summary(bootdata) summary(bootdata) }) summary_of_boot_data ``` I tried not to change your code a lot; I just replaced the use of `for` with `lapply`. Hope this helps. EDIT: Based on the comment from [Hugh](https://stackoverflow.com/users/1664978/hugh) you might want to try using `data.table::rbindlist()` instead of `do.call(rbind, .)` Upvotes: 1
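The resampling scheme itself (keep each group's total size, but redraw the test/control split at 90/10, sampling with replacement) is independent of R; a minimal Python sketch for comparison:

```python
import random

def stratified_resample(ids_by_group, test_frac=0.90, seed=0):
    """ids_by_group maps group -> {"T": [test ids], "C": [control ids]}.
    Returns a bootstrap sample of the same total size per group,
    with test/control resampled at test_frac / (1 - test_frac)."""
    rng = random.Random(seed)
    sample = []
    for group, parts in ids_by_group.items():
        total = len(parts["T"]) + len(parts["C"])
        n_test = round(test_frac * total)
        n_control = total - n_test
        sample += rng.choices(parts["T"], k=n_test)      # with replacement
        sample += rng.choices(parts["C"], k=n_control)
    return sample

strata = {1: {"T": [1, 2], "C": [4]}, 2: {"T": [3], "C": [5]}}
boot = stratified_resample(strata)
print(len(boot))  # 5, the same size as the original dataset
```

Like the R answers, each stratum is drawn only from its own id pool, so the new split never mixes test and control rows.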
2018/03/14
903
3,498
<issue_start>username_0: If you ever encountered the problem where you cannot apply sluggable behavior to a translated field, I feel ya. Whenever you save a translation for an entity, the 'slug' property is omitted because it's not dirty at the time of saving the translation entity. 1. You save an entity. 2. Translations are being created. 3. The table for i18n has no sluggable behavior attached, so it does not know when to apply sluggable behavior to a translated field like title / name etc.<issue_comment>username_1: The solution I came up with (and it's tested): you specify a protected property in your entity, like: ``` protected $_sluggable = 'title'; ``` then you create a getter: ``` public function _getSluggableField() { return $this->_sluggable; } ``` as soon as you do that you need to update the vendor file: ``` vendor/cakephp/cakephp/src/ORM/Behavior/TranslateBehavior.php ``` and change: ``` foreach ($translations as $lang => $translation) { foreach ($fields as $field) { if (!$translation->isDirty($field)) { continue; } $find[] = ['locale' => $lang, 'field' => $field, 'foreign_key' => $key]; $contents[] = new Entity(['content' => $translation->get($field)], [ 'useSetters' => false ]); } } ``` to: ``` foreach ($translations as $lang => $translation) { foreach ($fields as $field) { if($field==='slug' && (method_exists($entity, '_getSluggableField') && $entity->_getSluggableField())) { $translation->set('slug', \Cake\Utility\Text::slug($translation->get($entity->_getSluggableField()))); } if (!$translation->isDirty($field)) { continue; } $find[] = ['locale' => $lang, 'field' => $field, 'foreign_key' => $key]; $contents[] = new Entity(['content' => $translation->get($field)], [ 'useSetters' => false ]); } } ``` I hope someone has a better solution. But this one works like a charm. Upvotes: -1 <issue_comment>username_2: You can create and use a concrete table class for the translation table, where you can then create the slugs.
By default the name that the translate behavior uses for looking up table classes is `I18n`, so if you want this to apply to all translated tables, create `App\Model\Table\I18nTable`, or if you want this to apply to specific translated tables only, create a separate database translation table and class, and configure the translate behavior accordingly via the `translationTable` option: ``` // looks up `App\Model\Table\CustomI18nTable` 'translationTable' => 'CustomI18n' ``` See also * **[Cookbook > Database Access & ORM > Behaviors > Translate > Using a Separate Translations Table](https://book.cakephp.org/3.0/en/orm/behaviors/translate.html#using-a-separate-translations-table)** Upvotes: 0 <issue_comment>username_1: I think I've found a better solution: In my SluggableBehavior class, I've updated the behavior to include translations too: ``` public function beforeSave(Event $event, EntityInterface $entity) { $this->slug($entity); if($entity->get('_translations')) { foreach($entity->get('_translations') as $key=>$translation) { $this->slug($translation); } } } ``` Of course, simply as it can be, it does not need a separate table :-) But thanks @username_2. Upvotes: 2 [selected_answer]
2018/03/14
1,026
4,104
<issue_start>username_0: I'm building a rest service using jackson with a single instance of ObjectMapper where I can set my configuration. Java-side values are pojos with fields of types like String and int. Very simple, straightforward situation, nothing special. I want to perform some processing on every field of a given type after deserialization, possibly altering the value that should be put in the pojo field. I don't want to litter my pojos with annotations or anything, it should be self-contained within ObjectMapper. I also don't want to override the existing deserialization code - the data mapping itself should keep working as-is. Concrete example: say I want to call toUpperCase() on every incoming String because I dislike lower case letters. How can I create this behavior? I was hoping to find something like the following, but it doesn't seem to exist: `objectMapper.getDeserializationConfig().registerValueProcessor(Foo.class, Foo::bar);` I'm familiar with jackson basics like registering a new type (de)serializer, I just don't know anything for this particular type of thing.
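Jackson does not expose a `registerValueProcessor`-style hook, but the effect described in the question can be achieved with a custom deserializer registered through a `SimpleModule`, self-contained within the `ObjectMapper`. A sketch, assuming the `jackson-databind` dependency is on the classpath; the `Foo` POJO is a made-up example class:

```java
import com.fasterxml.jackson.core.JsonParser;
import com.fasterxml.jackson.databind.DeserializationContext;
import com.fasterxml.jackson.databind.ObjectMapper;
import com.fasterxml.jackson.databind.deser.std.StdDeserializer;
import com.fasterxml.jackson.databind.module.SimpleModule;

public class UpperCaseStrings {
    // Custom String deserializer: reads the value, then post-processes it
    static class UpperCasingDeserializer extends StdDeserializer<String> {
        UpperCasingDeserializer() { super(String.class); }
        @Override
        public String deserialize(JsonParser p, DeserializationContext ctxt)
                throws java.io.IOException {
            String value = p.getValueAsString();
            return value == null ? null : value.toUpperCase();
        }
    }

    public static class Foo {   // hypothetical target POJO
        public String name;
        public int count;
    }

    public static Foo parse(String json) throws java.io.IOException {
        ObjectMapper mapper = new ObjectMapper();
        SimpleModule module = new SimpleModule();
        module.addDeserializer(String.class, new UpperCasingDeserializer());
        mapper.registerModule(module);  // configuration lives in the mapper only
        return mapper.readValue(json, Foo.class);
    }

    public static void main(String[] args) throws Exception {
        Foo foo = parse("{\"name\":\"hello\",\"count\":3}");
        if (!"HELLO".equals(foo.name) || foo.count != 3) {
            throw new AssertionError(foo.name);
        }
        System.out.println(foo.name);
    }
}
```

The int fields are untouched; only values routed through the `String` deserializer are post-processed, which matches the "processing on every field of a given type" requirement without annotating the POJOs.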
2018/03/14
973
3,088
<issue_start>username_0: I am trying to return a specific item from a Pandas DataFrame via conditional selection (and do not want to have to reference the index to do so). Here is an example: I have the following dataframe: ``` Code Colour Fruit 0 1 red apple 1 2 orange orange 2 3 yellow banana 3 4 green pear 4 5 blue blueberry ``` I enter the following code to search for the code for blueberries: ``` df[df['Fruit'] == 'blueberry']['Code'] ``` This returns: ``` 4 5 Name: Code, dtype: int64 ``` which is of type: ``` pandas.core.series.Series ``` but what I actually want to return is the number 5 of type: ``` numpy.int64 ``` which I can do if I enter the following code: ``` df[df['Fruit'] == 'blueberry']['Code'][4] ``` i.e. referencing the index to give the number 5, but I do not want to have to reference the index! Is there another syntax that I can deploy here to achieve the same thing? Thank you!... Update: One further idea is this code: ``` df[df['Fruit'] == 'blueberry']['Code'][df[df['Fruit']=='blueberry'].index[0]] ``` However, this does not seem particularly elegant (and it references the index). Is there a more concise and precise method that does not need to reference the index or is this strictly necessary? Thanks!...<issue_comment>username_1: Let's try this: ``` df.loc[df['Fruit'] == 'blueberry','Code'].values[0] ``` Output: ``` 5 ``` First, use `.loc` to access the values in your dataframe using the boolean indexing for row selection and index label for column selection. The convert that returned series to an array of values and since there is only one value in that array you can use index '[0]' get the scalar value from that single element array. Upvotes: 4 [selected_answer]<issue_comment>username_2: Referencing index is a requirement (unless you use `next()`^), since a `pd.Series` is not guaranteed to have one value. You can use `pd.Series.values` to extract the values as an array. 
This also works if you have multiple matches: ``` res = df.loc[df['Fruit'] == 'blueberry', 'Code'].values # array([5], dtype=int64) df2 = pd.concat([df]*5) res = df2.loc[df2['Fruit'] == 'blueberry', 'Code'].values # array([5, 5, 5, 5, 5], dtype=int64) ``` To get a list from the numpy array, you can use `.tolist()`: ``` res = df.loc[df['Fruit'] == 'blueberry', 'Code'].values.tolist() ``` Both the array and the list versions can be indexed intuitively, e.g. `res[0]` for the first item. ^ If you are *really* opposed to using index, you can use `next()` to iterate: ``` next(iter(res)) ``` Upvotes: 2 <issue_comment>username_3: you can also set your 'Fruit' column as an index ``` df_fruit_index = df.set_index('Fruit') ``` and extract the value from the 'Code' column based on the fruit you choose ``` df_fruit_index.loc['blueberry','Code'] ``` Upvotes: 0 <issue_comment>username_4: Easiest solution: convert `pandas.core.series.Series` to integer! ``` my_code = int(df[df['Fruit'] == 'blueberry']['Code']) print(my_code) ``` Outputs: ```none 5 ``` Upvotes: 0
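The variants discussed above can be compared side by side; this sketch rebuilds the frame from the question (`Series.item()` is another option, which raises if the selection is not exactly one element):

```python
import pandas as pd

df = pd.DataFrame({
    "Code": [1, 2, 3, 4, 5],
    "Colour": ["red", "orange", "yellow", "green", "blue"],
    "Fruit": ["apple", "orange", "banana", "pear", "blueberry"],
})

# Boolean indexing returns a Series, because several rows could match
selection = df.loc[df["Fruit"] == "blueberry", "Code"]

# .values gives a numpy array; [0] extracts the scalar without
# referencing the index label
code = selection.values[0]

print(code)  # 5
```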
2018/03/14
361
1,089
<issue_start>username_0: My theme uses the following syntax. I will need to target the `span` after the icon class `fa-lock` ``` My item ``` I tried: ``` .shiftnav .fa-lock span:first-of-type {color: #c1c1c1;} ``` But that doesn't work for some reason. Any ideas please.<issue_comment>username_1: You can use the `.shiftnav-icon.fa-lock + span` selector. `+` is an [`Adjacent sibling combinator`](https://developer.mozilla.org/en-US/docs/Web/CSS/Adjacent_sibling_selectors) and it will select the `span` element that is right after the `.shiftnav-icon.fa-lock` element ```css .shiftnav-icon.fa-lock + span {color: red;} ``` ```html My item ``` Upvotes: 3 [selected_answer]<issue_comment>username_2: Try this: ``` .shiftnav-icon.fa-lock + span:first-of-type {color: #c1c1c1;} ``` Your 'shiftnav-icon' and 'fa-lock' are in the same tag - so there should be no gap between them when you use them. The 'span' is the next element - so use the '+' symbol to target it. Check out the Adjacent sibling combinator here <https://developer.mozilla.org/en-US/docs/Web/CSS/Adjacent_sibling_selectors> Upvotes: 0
2018/03/14
659
2,270
<issue_start>username_0: I installed Qt 5.10 SDK on Windows 10. I thought that the HiDPI issues were fixed in Qt 5.6, but Qt Creator still seems to be "too big": [![enter image description here](https://i.stack.imgur.com/qVIZ9.png)](https://i.stack.imgur.com/qVIZ9.png) Am I missing something? My resolution is 3840x2160 with the "recommended" 150% scaling. Visual Studio in the background is of the correct size.<issue_comment>username_1: It probably has its own HiDPI functionality, unlike the legacy Windows stuff that is just a direct upscale, so it appears bigger on your display, which is amplified by the scaling you have applied. From the information [here](http://doc.qt.io/qt-5/highdpi.html) it seems that you can either set a custom scale factor or a custom DPI awareness scheme. You can set those as system environment variables or use some basic cmd scripting to set them at a per-application level: ``` @echo off set QT_SCALE_FACTOR=1 qtcreator.exe ``` Upvotes: 4 [selected_answer]<issue_comment>username_2: I guess the default `HighDpiScaleFactorRoundingPolicy` of Qt Creator is `Round`, so it can only scale to 1 or 2, not 1.5. The correct solution is to set the environment variable: ```sh export QT_SCALE_FACTOR_ROUNDING_POLICY=PassThrough ./qtcreator.exe ``` Upvotes: 4 <issue_comment>username_3: I had the same problem with the nextcloud-client on Windows 10 and a scaling of 150%, solved by (a) setting a global environment variable in Windows 10 called *QT\_SCALE\_FACTOR\_ROUNDING\_POLICY* with the value *PassThrough*: [Windows-dialogue for setting variables](https://i.stack.imgur.com/k0dEW.png) You can do so by hitting the Windows key and searching for *variables*... or (b) a batch-file containing: ``` @echo off set QT_SCALE_FACTOR_ROUNDING_POLICY=PassThrough start C:\path\to\nextcloud.exe exit ``` and then starting that batch-file.
*(Of course you have to adapt the path to your actual nextcloud.exe location.)* [If you want this to run on startup, hit the keys Windows and R, enter shell:startup and leave a link to your batch-file in the now-opened folder. (Deactivate "run at startup" inside the Nextcloud-App to be sure the batch-file really runs.)] (Option a seems more convenient to me.) Upvotes: 2
2018/03/14
604
2,150
<issue_start>username_0: I am trying to change the value of input type submit onclick. Please let me know where I'm going wrong. ```js var elem = document.getElementsByClassName("main-add-to-cart"); $('.main-add-to-cart').on('click', function() { if (elem.value == "Add to cart") { elem.value = "New Changed text"; } }); ``` ```html ``` Thank you.<issue_comment>username_1: The main problem with your code is that [`getElementsByClassName`](https://developer.mozilla.org/en-US/docs/Web/API/Document/getElementsByClassName) returns an array-like structure, and not a single value. Assuming that there's only one element, you can correct your code like the following to make it work: ```js var elem = document.getElementsByClassName("main-add-to-cart")[0]; $('.main-add-to-cart').on('click', function() { if (elem.value == "Add to cart") { elem.value = "New Changed text"; } }); ``` ```html ``` Notice that I've used `[0]` to get the first element from the selector's result. If you've more than one elements, you need to loop through the result and check your condition(s). Upvotes: 0 <issue_comment>username_2: Your DOM selection using `getElementsByClassName` returns a collection, so it doesn't have a `.value` property. You could instead select the first element with that class using `.querySelector()`. ``` var elem = document.querySelector(".main-add-to-cart"); ``` --- However, the bound element is available in the handler via `this`, so you don't really need the variable. ``` if (this.value == "Add to cart") { this.value = "New Changed text"; } ``` --- And in general, here's a non-jQuery solution. ```js var elem = document.querySelector(".main-add-to-cart"); elem.addEventListener('click', function() { if (this.value == "Add to cart") { this.value = "New Changed text"; } }); ``` Upvotes: 2 [selected_answer]<issue_comment>username_3: You should try this it will work. 
``` $(document).ready(function(){ $('#edit-submit--3346').click(function(){ if($(this).val() == 'Add to cart') $(this).val('New Changed text'); $('#form').submit(); }); }); ``` Upvotes: 0
2018/03/14
832
1,891
<issue_start>username_0: I am trying to concat two dataframes, horizontally. `df2` contains 2 result variables for every observation in `df1`. ``` df1.shape (242583, 172) df2.shape (242583, 2) ``` My code is: ``` Fin = pd.concat([df1, df2], axis= 1) ``` But somehow the result is stacked in 2 dimensions: ``` Fin.shape (485166, 174) ``` What am I missing here?<issue_comment>username_1: There are different index values, so indexes are not aligned and get `NaN`s: ``` df1 = pd.DataFrame({ 'A': ['a','a','a'], 'B': range(3) }) print (df1) A B 0 a 0 1 a 1 2 a 2 df2 = pd.DataFrame({ 'C': ['b','b','b'], 'D': range(4,7) }, index=[5,7,8]) print (df2) C D 5 b 4 7 b 5 8 b 6 ``` --- ``` Fin = pd.concat([df1, df2], axis= 1) print (Fin) A B C D 0 a 0.0 NaN NaN 1 a 1.0 NaN NaN 2 a 2.0 NaN NaN 5 NaN NaN b 4.0 7 NaN NaN b 5.0 8 NaN NaN b 6.0 ``` One possible solution is create default indexes: ``` Fin = pd.concat([df1.reset_index(drop=True), df2.reset_index(drop=True)], axis= 1) print (Fin) A B C D 0 a 0 b 4 1 a 1 b 5 2 a 2 b 6 ``` Or assign: ``` df2.index = df1.index Fin = pd.concat([df1, df2], axis= 1) print (Fin) A B C D 0 a 0 b 4 1 a 1 b 5 2 a 2 b 6 df1.index = df2.index Fin = pd.concat([df1, df2], axis= 1) print (Fin) A B C D 5 a 0 b 4 7 a 1 b 5 8 a 2 b 6 ``` Upvotes: 5 [selected_answer]<issue_comment>username_2: If you are looking for the one-liner, there is the `set_index` method: ``` import pandas as pd x = pd.DataFrame({'A': ["a"] * 3, 'B': range(3)}) y = pd.DataFrame({'C': ["b"] * 3, 'D': range(4,7)}) pd.concat([x, y.set_index(x.index)], axis = 1) ``` Note that `pd.concat([x, y], axis = 1)` will instead create new lines and produce NA values, due to non-matching indexes, as shown by @username_1 Upvotes: 0
2018/03/14
1,018
2,561
<issue_start>username_0: This is my controller method: ``` public function store(Request $request, $course_id){ $price['value'] = $request->price; $price['desc'] = $request->desc; If ($request->promo === "on") { $price['promo'] = 1; } else { $price['promo'] = 0; } $course = Course::find($course_id); Price::create($price)->courses()->save($course); return redirect('/course/'.$course_id.'/edit'); } ``` For example if my request var have: ``` $request->price = 100 $request->desc = 'blablabla' $request->promo = 'on' ``` After I submit the form I got two new rows with the same data: Table prices: ``` id | price | desc | promo 1 | 100 | blablabla | on 2 | 100 | blablabla | on ``` Some info on models, I've: ``` public function courses () { return $this->belongsToMany('App\Course', 'course_price'); } ``` in prices model and: ``` public function prices () { return $this->belongsToMany('App\Price', 'course_price'); } ``` in courses model. Whats wrong?
2018/03/14
908
2,009
<issue_start>username_0: I have a text file(can be loaded into Excel) with three columns. I want to remove duplicate rows(by duplicate values in row-2) and keep the only one row where row-3 is higher value. For example the date is ``` Col1 Col2 Col3 abcd 1111 1000 efgh 1111 1001 ijkl 2222 1002 mnop 1111 1003 qrst 3333 1004 uvwx 1111 1005 xwvu 2222 1006 ``` I want following output ``` Col1 Col2 Col3 uvwx 1111 1005 xwvu 2222 1006 qrst 3333 1004 ``` thanks a lot in advance.
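One way to express the reduction described in the question, keep, for each Col2 value, the row with the largest Col3, sketched in pandas (the choice of pandas is an assumption; the question only says the file can be loaded into Excel):

```python
import pandas as pd

df = pd.DataFrame(
    [["abcd", 1111, 1000], ["efgh", 1111, 1001], ["ijkl", 2222, 1002],
     ["mnop", 1111, 1003], ["qrst", 3333, 1004], ["uvwx", 1111, 1005],
     ["xwvu", 2222, 1006]],
    columns=["Col1", "Col2", "Col3"],
)

# idxmax per group gives the row label of the maximum Col3 for each Col2;
# .loc then selects exactly those rows
best = df.loc[df.groupby("Col2")["Col3"].idxmax()]
print(best)
```

The result contains one row per distinct Col2 value: `uvwx 1111 1005`, `xwvu 2222 1006`, `qrst 3333 1004`, matching the requested output.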
2018/03/14
2,738
9,500
<issue_start>username_0: When I type in `MyFloatInput` TextInput then text start from `right` side of TextInput,It's working perfect but I set value of `MyFloatInput` TextInput from `.py` then it start from left side.It doesn't show in right side. Can someone tell me what is wrong with code? test.py ======= ``` import kivy from kivy.uix.screenmanager import Screen from kivy.app import App from kivy.lang import Builder from kivy.core.window import Window from kivy.clock import Clock from kivy.uix.textinput import TextInput Window.clearcolor = (0.5, 0.5, 0.5, 1) Window.size = (400, 100) class MyFloatInput(TextInput): def __init__(self, **kwargs): super(MyFloatInput, self).__init__(**kwargs) self.multiline = False def right_adjust(self, text): max_width = self.width - self.padding[0] - self.padding[2] new_text = text text_width = self._get_text_width(new_text, self.tab_width, self._label_cached) while text_width < max_width: new_text = ' ' + new_text text_width = self._get_text_width(new_text, self.tab_width, self._label_cached) while text_width >= max_width: if new_text[0] != ' ': break else: new_text = new_text[1:] text_width = self._get_text_width(new_text, self.tab_width, self._label_cached) return new_text def delete_selection(self, from_undo=False): if not self._selection: return cr = self.cursor[1] initial_len = len(self._lines[cr]) a, b = self._selection_from, self._selection_to if a > b: a, b = b, a super(MyFloatInput, self).delete_selection(from_undo=from_undo) cur_text = self._lines[cr] super(MyFloatInput, self)._refresh_text(self.right_adjust(cur_text)) final_len = len(self._lines[cr]) self.cursor = self.get_cursor_from_index(final_len - (initial_len - b)) def do_backspace(self, from_undo=False, mode='bkspc'): cc, cr = self.cursor initial_len = len(self._lines[cr]) super(MyFloatInput, self).do_backspace(from_undo=from_undo, mode=mode) cc, cr = self.cursor cur_text = self._lines[cr] super(MyFloatInput, self)._refresh_text(self.right_adjust(cur_text)) 
final_len = len(self._lines[cr]) self.cursor = self.get_cursor_from_index(final_len - (initial_len-cc) + 1) def insert_text(self, the_text, from_undo=False): cc, cr = self.cursor cur_text = self._lines[cr] initial_len = len(cur_text) new_text = self.right_adjust(cur_text[:cc] + the_text + cur_text[cc:]) try: num = float(new_text) # throw exception if new_text is invalid float except ValueError: return self._lines[cr] = '' super(MyFloatInput, self).insert_text(new_text, from_undo=from_undo) final_len = len(self._lines[cr]) self.cursor = self.get_cursor_from_index(final_len - (initial_len-cc)) def set_right_adj_text(self, text): num = float(text) # throws exception if text is invalid float self._refresh_text(self.right_adjust(text)) def on_text(self, instance, text): #num = float(text) # throws exception if text is invalid float self._refresh_text(self.right_adjust(text)) class Testing(Screen): def __init__(self, **kwargs): super(Testing, self).__init__(**kwargs) Clock.schedule_once(lambda dt: setattr(self.test, 'text', str(100))) class Test(App): def build(self): self.root = Builder.load_file('test.kv') return self.root if __name__ == '__main__': Test().run() ``` test.kv ======= ``` Testing: test:test BoxLayout: orientation: "vertical" padding : 20, 20 BoxLayout: orientation: "horizontal" padding: 10, 10 spacing: 10, 10 size_hint_x: .6 Label: text: "No." text_size: self.size valign: 'middle' size_hint_x: .2 MyFloatInput: size_hint_x: .6 id : test ```<issue_comment>username_1: I don't know of any way to do it using the options available for `TextInput`. But it can be done by extending `TextInput`. Here is a `MyFloatInput` class: ``` import string class MyFloatInput(TextInput): def __init__(self, **kwargs): super(MyFloatInput, self).__init__(**kwargs) self.multiline = False def insert_text(self, theText, from_undo=False): if theText not in string.digits and theText != '.': return if '.' 
in self.text and theText == '.': return maxWidth = self.width - self.padding[0] - self.padding[2] cc, cr = self.cursor curText = self._lines[cr] new_text = curText[:cc] + theText + curText[cc:] textWidth = self._get_text_width(new_text, self.tab_width, self._label_cached) while textWidth < maxWidth: new_text = ' ' + new_text textWidth = self._get_text_width(new_text, self.tab_width, self._label_cached) while textWidth >= maxWidth: if new_text[0] != ' ': break else: new_text = new_text[1:] textWidth = self._get_text_width(new_text, self.tab_width, self._label_cached) self._lines[cr] = '' self.cursor = (0,cr) super(MyFloatInput, self).insert_text(new_text, from_undo=from_undo) ``` This should do what you want and does the `float` filtering as well. To use it, include it in your .py file and replace the `TextInput` section of your .kv file with: ``` MyFloatInput: size_hint_x: .2 ``` Note that this only works for single line input, so the `__init__` method sets `multiline` to `False`. This code uses methods and variables that start with `_`, so the code may break if the `TextInput` class gets updated. Upvotes: 2 <issue_comment>username_1: I made significant changes to the `MyFloatInput` class. It now no longer needs the `string` import. It now handles deleting a selection, and you can now set the text from a .py file using the `set_right_adj_text("some text")` method. 
Here is the improved class: ``` class MyFloatInput(TextInput): def __init__(self, **kwargs): super(MyFloatInput, self).__init__(**kwargs) self.multiline = False def right_adjust(self, text): max_width = self.width - self.padding[0] - self.padding[2] new_text = text text_width = self._get_text_width(new_text, self.tab_width, self._label_cached) while text_width < max_width: new_text = ' ' + new_text text_width = self._get_text_width(new_text, self.tab_width, self._label_cached) while text_width >= max_width: if new_text[0] != ' ': break else: new_text = new_text[1:] text_width = self._get_text_width(new_text, self.tab_width, self._label_cached) return new_text def on_size(self, instance, value): super(MyFloatInput, self).on_size(instance, value) if len(self._lines) == 0: return True cc, cr = self.cursor cur_text = self._lines[cr] initial_len = len(cur_text) super(MyFloatInput, self)._refresh_text(self.right_adjust(cur_text)) final_len = len(self._lines[cr]) self.cursor = self.get_cursor_from_index(final_len - (initial_len - cc)) return True def delete_selection(self, from_undo=False): if not self._selection: return cr = self.cursor[1] initial_len = len(self._lines[cr]) a, b = self._selection_from, self._selection_to if a > b: a, b = b, a super(MyFloatInput, self).delete_selection(from_undo=from_undo) cur_text = self._lines[cr] super(MyFloatInput, self)._refresh_text(self.right_adjust(cur_text)) final_len = len(self._lines[cr]) self.cursor = self.get_cursor_from_index(final_len - (initial_len - b)) def do_backspace(self, from_undo=False, mode='bkspc'): cc, cr = self.cursor initial_len = len(self._lines[cr]) super(MyFloatInput, self).do_backspace(from_undo=from_undo, mode=mode) cc, cr = self.cursor cur_text = self._lines[cr] super(MyFloatInput, self)._refresh_text(self.right_adjust(cur_text)) final_len = len(self._lines[cr]) self.cursor = self.get_cursor_from_index(final_len - (initial_len-cc) + 1) def insert_text(self, the_text, from_undo=False): cc, cr = self.cursor 
cur_text = self._lines[cr] initial_len = len(cur_text) new_text = self.right_adjust(cur_text[:cc] + the_text + cur_text[cc:]) try: num = float(new_text) # throw exception if new_text is invalid float except ValueError: return self._lines[cr] = '' super(MyFloatInput, self).insert_text(new_text, from_undo=from_undo) final_len = len(self._lines[cr]) self.cursor = self.get_cursor_from_index(final_len - (initial_len-cc)) def set_right_adj_text(self, text): num = float(text) # throws exception if text is invalid float self._refresh_text(self.right_adjust(text)) ``` Upvotes: 3 [selected_answer]
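The core of `right_adjust` above is independent of Kivy: pad with leading spaces while the rendered text is narrower than the field, then trim leading spaces back while it would overflow. A rough sketch under the simplifying assumption that every character is one unit wide (the real code measures pixel widths via `_get_text_width`):

```python
def right_adjust(text, max_width):
    """Left-pad `text` with spaces until it fills `max_width` columns,
    then trim leading spaces while it would overflow; mirrors the two
    loops of MyFloatInput.right_adjust with width == len(text)."""
    new_text = text
    while len(new_text) < max_width:
        new_text = " " + new_text
    while len(new_text) >= max_width and new_text.startswith(" "):
        new_text = new_text[1:]
    return new_text

print(repr(right_adjust("100", 10)))            # padded to just under the width
print(repr(right_adjust("1234567890123", 10)))  # too wide already: unchanged
```

Note the second loop stops either when the text fits or when there are no leading spaces left to remove, so user-typed text is never truncated.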
2018/03/14
804
3,226
<issue_start>username_0: I have seen the examples but I'm hoping to run this by other programmers. For encryption within my Windows Forms app, I am generating two random numbers and saving them in an SQL Server table like this: ``` OPEN SYMMETRIC KEY SymmetricKeyName DECRYPTION BY CERTIFICATE CertificateName; insert into keyfile(encrypted_key1, encrypted_key2) values (EncryptByKey(Key_GUID('SymmetricKeyName'), **Key1**), EncryptByKey(Key_GUID('SymmetricKeyName'), **Key2**)) ``` Then I am using the keys to encrypt a file using AES-256 as follows: ``` var key = new Rfc2898DeriveBytes(**Key1, Key2**, 1000); RijndaelManaged AES = new RijndaelManaged(); AES.KeySize = 256; AES.BlockSize = 128; AES.Key = key.GetBytes(AES.KeySize / 8); AES.IV = key.GetBytes(AES.BlockSize / 8); AES.Padding = PaddingMode.Zeros; AES.Mode = CipherMode.CBC; using (var output = File.Create(outputFile)) { using (var crypto = new CryptoStream(output, AES.CreateEncryptor(), CryptoStreamMode.Write)) { using (var input = File.OpenRead(inputFile)) { input.CopyTo(crypto); } } } etc. ``` In order to perform decryption both keys that were used to encrypt the file are required. Decryption is possible through software requiring two authenticated users. The keys change every day. The data and the database are sufficiently physically secure. The key table is in a separate database from the certificate. The question is: Does this secure the data enough to not be readily decrypted and, if not, why not and what changes might you suggest?<issue_comment>username_1: The problem here is that there is a high probability that anyone that is able to obtain the file is also able to obtain the data in the database (which includes the key). Once the data is compromised, it doesn't matter how often you change the key since the attacker would have a copy of the file encrypted with the key that matches it.
Common solutions to this problem are to use an external [Hardware Security Module](https://en.wikipedia.org/wiki/Hardware_security_module) or something like a [TPM](https://en.wikipedia.org/wiki/Trusted_Platform_Module). [Here is a very useful and related post](https://security.stackexchange.com/questions/12332/where-to-store-a-server-side-encryption-key/12334#12334) that enumerates several options. Upvotes: 4 [selected_answer]<issue_comment>username_2: As suggested by others, you can store the key on a USB drive or, alternatively, a network share. However, if it is on a network share, you might need to change the Service Logon to an account with access to that share. Upvotes: 2 <issue_comment>username_3: "SYMMETRIC KEY" might be an issue here. Symmetric keys are similar to an XOR operation: the same key works to both encrypt and decrypt. For a 'cooler' method, use ASYMMETRIC keys instead; then the database can keep the 'how to encrypt' half, while your application can have the 'how to decrypt' half. It's a lot of effort, but "not even the DBAs can see the secret data" is a cool feature. Upvotes: 1
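For context on the C# snippet in the question: `Rfc2898DeriveBytes(pw, salt, 1000)` is PBKDF2-HMAC-SHA1, and consecutive `GetBytes` calls consume successive bytes of one derived stream. The same key/IV split can be reproduced with Python's standard library (an illustration only; the key-storage concern raised above is unaffected, and the byte strings below are made up):

```python
import hashlib

def derive_key_iv(secret: bytes, salt: bytes, iterations: int = 1000):
    """PBKDF2-HMAC-SHA1 stream split into a 32-byte AES-256 key and a
    16-byte IV, mirroring Rfc2898DeriveBytes.GetBytes(32) then GetBytes(16)."""
    stream = hashlib.pbkdf2_hmac("sha1", secret, salt, iterations, dklen=48)
    return stream[:32], stream[32:]

# "Key1"/"Key2" stand in for the two stored random values from the question
key, iv = derive_key_iv(b"Key1-material", b"Key2-used-as-salt")
print(len(key), len(iv))  # 32 16
```

Because the derivation is deterministic, anyone holding both stored values can rebuild key and IV, which is exactly why the answers focus on where those values live rather than on the cipher itself.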
2018/03/14
721
3,033
<issue_start>username_0: I am having some trouble passing a C++/CLI object pointer to a native object. The entire picture is the following: * I am new to C++ in general (doomed) * I am using a third party native C++ library to interface a blackmagic IO video card. In the API there is a very handy method to pass a pointer of the object that will handle the frame callback while frames are captured by the card: SetCallback(Pointer to an object that implements an interface). * In the above SetCallback(Pointer) I would like to pass the pointer to my C++/CLI object. When I do so I get: `cannot convert argument 4 from 'CLIInterop::Wrapper ^*' to 'IDeckLinkInputCallback *'` My final target is to handle the callback from C++ into C++/CLI and at this point pass the frame over to WPF (if I will ever get that far) The lines of code involved are: Call from the CLIInterop::Wrapper object ``` d_Controller->GetDevice()->StartCapture(0, nullptr, true, this); ``` Method header in the native C++ project: ``` __declspec(dllexport) bool DeckLinkDevice::StartCapture(unsigned int videoModeIndex, IDeckLinkScreenPreviewCallback* screenPreviewCallback, bool applyDetectedInputMode, IDeckLinkInputCallback* callbackHandler); ``` Help!<issue_comment>username_1: The error clearly indicates that your `this` pointer is not of type `IDeckLinkInputCallback`: ``` d_Controller->GetDevice()->StartCapture(0, nullptr, true, this); ^ this pointer is not of type IDeckLinkInputCallback ``` You said you have already implemented the `IDeckLinkInputCallback` interface in the class of the `this` pointer; double-check that you actually did. Instead of calling `StartCapture` from a member function of the class, you could also call it from outside and pass the full address of an object that implements the interface instead of the `this` pointer. Upvotes: 0 <issue_comment>username_2: You cannot just pass a managed reference ("hat pointer" ^) when a native pointer is expected. The whole point of C++/CLI is the possibility to create "glue" code such as what you're missing.
Basically, you would have to create a native class that implements the native interface, and it can hold on to the managed reference that you call back into. Note that a native class cannot contain a managed handle (`^`) as a plain member; wrap it in `gcroot` (from `<msclr/gcroot.h>`, or the global-namespace version in `<vcclr.h>`) instead. I'm not familiar with the BlackMagic video card's interface (I used to have to work with DVS video cards, but their software interfaces are probably hardly comparable), but the general logic for such a wrapper would be similar to this:

```
#include <msclr/gcroot.h>

class MyDeckLinkInputCallback : public IDeckLinkInputCallback
{
public:
    MyDeckLinkInputCallback(CLIInterop::Wrapper^ wrapper)
    {
        _wrapper = wrapper;
        // initialize to your heart's content
    }

private:
    // gcroot lets a native class store a managed handle as a member
    msclr::gcroot<CLIInterop::Wrapper^> _wrapper;

public:
    // TODO implement IDeckLinkInputCallback properly; this is just a crude example
    void HandleFrame(void* frameData)
    {
        // TODO convert native arguments to managed equivalents
        _wrapper->HandleFrame(...); // call managed method with converted arguments
    }
};
```

Upvotes: -1
2018/03/14
711
2,221
<issue_start>username_0: I am looking to identify parts of a string that are hex. So if you consider the string CHICKENORBEEFPIE, the match would be BEEF. To do this I came up with this expression `/[A-F0-9]{2,}(?![^A-F0-9])/g` This works perfectly - except it only matches BEE, not BEEF. Unless BEEF happens to be at the end of the string.<issue_comment>username_1: Use > > /[A-F0-9]{2,}(?![^A-F0-9])\*/g > > > Upvotes: -1 <issue_comment>username_2: The negative lookahead `(?![^A-F0-9])` means: do not match anything that is followed by a character other than A-F, 0-9. Which translates to: match the pattern only when it is followed by A-F, 0-9. Your regex is matching 'BEE' because it is followed by F, which satisfies the condition. If you want to identify sequences of two or more characters that are hex code, just eliminate the negative lookahead altogether. `/[A-F0-9]{2,}/g` translates to: find every match of a pattern consisting of A-F or 0-9 that is 2 or more characters long. Upvotes: 1 <issue_comment>username_3: It is because of the last part of your regex: `(?![^A-F0-9])` Because of that, you are matching only strings that aren't followed by a non-hex character... which ultimately means finding strings where the next character **is** a hex character. You could either remove the `^` or remove that whole piece altogether as it isn't necessary. The following will retrieve what you are looking for: `/[A-F0-9]{2,}/g` Upvotes: 0 <issue_comment>username_4: * `[A-F0-9]{2,}(?![A-F0-9])` will match what is expected; however, the negative lookahead is superfluous because quantifiers are greedy by default. * `[A-F0-9]{2,}(?![^A-F0-9])` doesn't work because the assertion is that the following character must *not* be any character *except* A-F0-9 (a double negation). The reason the last character `F` in `BEEF` is not matched is that after matching `BEEF`, the negative lookahead fails (`P` is in `[^A-F0-9]`), which forces a backtrack to `BEE`, which then succeeds because `F` is not in `[^A-F0-9]`.
Upvotes: 0 <issue_comment>username_5: If you need the given result with pair-based values you can use `/([A-F0-9]{2})+/g`, if not (if it doesn't matter whether it's odd or not) you can use `/[A-F0-9]{2,}/g` instead. Hope it helps. Upvotes: 0
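The backtracking described in the answers is easy to verify. Here is a quick sketch using Python's `re` module (these particular patterns behave the same way in the JavaScript flavour used in the question):

```python
import re

s = "CHICKENORBEEFPIE"

# Original pattern: the trailing lookahead forces a backtrack from BEEF to BEE
print(re.findall(r"[A-F0-9]{2,}(?![^A-F0-9])", s))  # ['BEE']

# Dropping the lookahead (or using the correctly negated one) matches the full run
print(re.findall(r"[A-F0-9]{2,}", s))               # ['BEEF']
print(re.findall(r"[A-F0-9]{2,}(?![A-F0-9])", s))   # ['BEEF']
```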
2018/03/14
724
2,297
<issue_start>username_0: **WHAT I HAVE** Thanks to [this discussion](https://stackoverflow.com/a/13997498/1399706) I've solved the problem of the "always https" redirect using this in my `.htaccess`:

```
# Redirect to httpS by default
RewriteCond %{HTTPS} off
RewriteRule .* https://%{HTTP_HOST}%{REQUEST_URI} [L,R=301]

# Redirect to www if third level domain is not set
RewriteCond %{HTTP_HOST} ^example.com [NC]
RewriteRule ^(.*)$ https://www.example.com/$1 [R=301,L]
```

**WHAT I WANT** As per the current configuration, `https://example.com` is correctly redirected to `https://www.example.com`. BUT I need URLs containing `wp-admin` to not be redirected to `www`. So, for example, `https://example.com/wp-admin` must not be redirected, and neither must any subpage under the `wp-admin` path: `https://example.com/wp-admin/login.php` has to stay reachable on the second-level domain, without redirecting to `www`. **CONTEXT** (if you are curious about why I need this configuration) I have the domain `example.com`. This domain has some third-level domains and has a Wordpress admin area at `example.com` (second level): * `example.com` * `www.example.com` * `help.example.com` * `another.example.com` But * `www.example.com` is a Symfony app that runs on Heroku; * `example.com` is a Wordpress multisite installation that runs on DigitalOcean; * `help.example.com` and `another.example.com` are Wordpress sites handled with the multisite of `example.com` For these reasons I need to redirect the second-level domain to `www` only when the path does not contain `wp-admin`.<issue_comment>username_1: If you want to redirect "http to https", put this in your `.htaccess`:

```
RewriteEngine On
RewriteCond %{HTTPS} off
RewriteRule (.*) https://%{HTTP_HOST}%{REQUEST_URI}
```

Upvotes: 1 <issue_comment>username_2: Ok, it was as simple as adding a second `RewriteCond`.
The complete `.htaccess` is this:

```
# username_2 - Force SSL
RewriteCond %{HTTPS} off
# First rewrite to HTTPS:
RewriteRule .* https://%{HTTP_HOST}%{REQUEST_URI} [L,R=301]

# Then redirect to www
RewriteCond %{HTTP_HOST} ^example.com [NC]
RewriteCond %{REQUEST_URI} !wp-admin [NC]
RewriteRule ^(.*)$ https://www.example.com/$1 [R=301,L]
```

Upvotes: 0
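Not an Apache change as such, but the combined decision the two rule blocks make is easy to mis-read, so here is a small plain-Python sketch of that logic. The host names are the `example.com` placeholders from the question, and the `RewriteCond %{REQUEST_URI} !wp-admin` condition is a substring test, modelled here with `in`:

```python
def redirect_target(https, host, uri):
    """Return the redirect URL the rules would issue, or None to serve as-is."""
    # Block 1: force HTTPS first, regardless of host or path
    if not https:
        return "https://" + host + uri
    # Block 2: add www only for the bare domain, and never for wp-admin URIs
    if host == "example.com" and "wp-admin" not in uri:
        return "https://www.example.com" + uri
    return None

print(redirect_target(False, "example.com", "/blog"))         # forced to HTTPS first
print(redirect_target(True, "example.com", "/blog"))          # then redirected to www
print(redirect_target(True, "example.com", "/wp-admin/"))     # None: stays on the bare domain
print(redirect_target(True, "www.example.com", "/anything"))  # None: already on www
```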
2018/03/14
622
2,377
<issue_start>username_0: Is there a difference between those two bs4 objects? ``` from urllib2 import urlopen, Request from bs4 import BeautifulSoup req1 = Request("https://stackoverflow.com/") # HTTPS html1 = urlopen(req1).read() req2 = Request("http://stackoverflow.com/") # HTTP html2 = urlopen(req2).read() bsObj1 = BeautifulSoup(html1, "html.parser") bsObj2 = BeautifulSoup(html2, "html.parser") ``` Do you really need to specify an HTTP protocol?<issue_comment>username_1: Here's my limited understanding: There isn't a practical difference in this case. My understanding is that most websites that have https will redirect http URLs to https, as is the case here. It's possible for a site to have an http version and an https version up simultaneously, in which case they might not redirect. This would be bad practice, but nothing is stopping someone from doing it. I would still explicitly use https whenever possible, just as a best practice. Upvotes: 3 [selected_answer]<issue_comment>username_2: All communication over the HTTP protocol happens using HTTP verbs GET, POST, PUT, DELETE. Specifying the protocol has two purposes: 1) **It specifies the scheme for data communication**. A general URI is of the form: **scheme:[//[user[:password]@]host[:port]][/path][?query][#fragment]** and common schemes are **http(s), ftp, mailto, file, data, and irc**. 2) **It specifies if the scheme supports SSL encryption**: With http schemes, the added *'s'* in `https` ensures SSL encryption of data. According to `urllib3` Python [docs](https://urllib3.readthedocs.io/en/latest/user-guide.html#certificate-verification): > > It is highly recommended to always use SSL certificate verification.In order to enable verification you will need a set of root certificates. The easiest and most reliable method is to use the `certifi` package which provides Mozilla’s root certificate bundle: > > > ``` pip install certifi >>> import certifi >>> import urllib3 >>> http = urllib3.PoolManager( ... 
cert_reqs='CERT_REQUIRED', ... ca_certs=certifi.where()) ``` The **PoolManager** will automatically handle certificate verification and will raise SSLError if verification fails: ``` >>> http.request('GET', 'https://google.com') (No exception) >>> http.request('GET', 'https://expired.badssl.com') urllib3.exceptions.SSLError ... ``` Upvotes: 1
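As a side note on the general URI form quoted above, the scheme is just the first component of the URI, and the standard library can split it out. This uses Python 3's `urllib.parse`; the same functions lived in the `urlparse` module on the Python 2 used in the question:

```python
from urllib.parse import urlsplit

parts = urlsplit("https://stackoverflow.com/questions?tab=newest#top")
print(parts.scheme)    # 'https', the component that selects plain HTTP vs TLS
print(parts.netloc)    # 'stackoverflow.com'
print(parts.path)      # '/questions'
print(parts.query)     # 'tab=newest'
print(parts.fragment)  # 'top'
```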
2018/03/14
1,204
4,299
<issue_start>username_0: I am currently going through [the Rust Documentation](https://doc.rust-lang.org/beta/reference/items/implementations.html#inherent-implementations) to understand inherent implementations. What is the "nominal type" and what are they referring to when they say "associable items to the implementing type"? Is there a related analog to this in C or C++?<issue_comment>username_1: An inherent implementation is the equivalent of creating a class in an OOP language. The difference in Rust is that data is separated from implementation:

```
/* Data */
struct Foo {
    // all data there
    //...
}

/* Inherent implementation */
impl Foo {
    fn bar(&self) {
        //...
    }
}
```

* The *nominal type* is the data that you implement.
* The *associable items* are the methods that you add to the data. Those functions are special because you can call them with the syntax `foo.bar()`.

---

The *inherent implementation* is called that as opposed to a *trait implementation*:

```
/* Trait implementation */
impl Debug for Foo {
    fn fmt(&self, f: &mut Formatter) -> Result<(), Error> {
        //...
    }
}
```

* In the case of an inherent implementation, the method is bound to the data only. It makes sense only with *this* data.
* In the case of a trait implementation, the method can be implemented for any data that implements the trait.

---

The equivalent in C++ could be:

```cpp
struct Debug {
    virtual std::string fmt() = 0;
};

class Foo: public Debug {
    // all data there
    //...
public:
    /* Equivalent of inherent implementation */
    void bar() {
        //...
    }

    /* Equivalent of trait implementation: implementation of base class */
    std::string fmt() {
        //...
    }
};
```

In C++, you cannot separate "inherent implementation" from "trait implementation" (I put those between quotes, because those terms do not make sense in C++, of course). --- Note that unlike in C++, in Rust the methods are not really different from a free function.
You can call the `bar` method like this:

```
Foo::bar(&foo);
```

and if you define this function:

```
fn qux(f: &Foo) {
    //...
}
```

it will have the same signature as `Foo::bar`. Upvotes: 1 <issue_comment>username_2: Well, that's the language reference. Learning Rust with that is certainly possible, but a little bit like trying to learn English by reading a dictionary. Have you tried the [Rust Book](https://doc.rust-lang.org/book/)? Anyway, as the first paragraph states, the "nominal type" is, well:

```
impl /* --> */ Point /* <-- this is the "nominal type" */ {
    fn log(&self) { ... }
}
```

It's the type which is the subject of the inherent `impl`. An "associable item" is an item (like a `fn`, `const`, or `type`) which is associated with the nominal type. If you had the paragraph: > > Let's talk about Raymond. His hair is brown. He knows how to dance. > > > That would be roughly equivalent to:

```
struct Raymond; // introduce the Raymond type.

impl Raymond { // associate some items with Raymond.
    const HAIR: Colour = Colour::Brown;
    fn dance(&mut self) { ... }
}

fn main() {
    let mut ray = Raymond;
    println!("Raymond's hair is {}", Raymond::HAIR);
    ray.dance();
}
```

(As an aside: the pronouns in the paragraph (like "he" or "him") would become `self` or `Self` in the `impl`.) The `impl` is associating those items with the nominal type. Notice how `HAIR` is "inside" of the `Raymond` type. You could *also* write the above code as:

```
struct Raymond;

const RAYMOND_HAIR: Colour = Colour::Brown;

fn raymond_dance(ray: &mut Raymond) { ... }

fn main() {
    let mut ray = Raymond;
    println!("Raymond's hair is {}", RAYMOND_HAIR);
    raymond_dance(&mut ray);
}
```

Here, there're no inherent `impl`s, so the `RAYMOND_HAIR` and `raymond_dance` items aren't associated with the `Raymond` type directly. There's no fundamental difference between the two, other than convenience. As for tying this back to C++...
that's tricky since Rust distinguishes between inherent and non-inherent `impl`s and C++... *doesn't*. The closest analogue would be to say that they're like the parts of a `struct` body that aren't fields and aren't overriding methods in a base class. Upvotes: 4 [selected_answer]
2018/03/14
2,453
6,437
<issue_start>username_0: I'm trying to plot cuboids of different sizes using matplotlib, such that: after rotation the cuboids do not overlap visually in a non-physical way, the cubes have different colors and a box drawn around them. I've read several blog posts and stackoverflow pages referencing similar problems, but always with a slight difference; none which have worked for me. The easiest way to overcome the overlapping problem was to use voxels (as in <https://matplotlib.org/api/_as_gen/mpl_toolkits.mplot3d.axes3d.Axes3D.html?highlight=voxel#mpl_toolkits.mplot3d.axes3d.Axes3D.voxels>), but these do not allow me to draw boxes around them. What's the easiest way to do this in matplotlib? The image below shows what I have on the left, and what I want on the right. EDIT: I've looked into several approaches that can give the desired effect, of which the main ones are: * using voxels, but somehow scaling them such that a single voxel represents a single item. * using surface plots, but then adjusting the drawing order dynamically to avoid non-physical overlapping. The former seemed easier to execute, but I'm still stumped. [![Left: what I get. Right: what I want](https://i.stack.imgur.com/WZimK.png)](https://i.stack.imgur.com/WZimK.png)<issue_comment>username_1: A. Using `Poly3DCollection` =========================== An option is to create a `Poly3DCollection` of the faces of the cuboids. As the overlapping issue is not present for artists of the same collection, this might best serve the purpose here. 
```
from mpl_toolkits.mplot3d import Axes3D
from mpl_toolkits.mplot3d.art3d import Poly3DCollection
import numpy as np
import matplotlib.pyplot as plt

def cuboid_data2(o, size=(1,1,1)):
    X = [[[0, 1, 0], [0, 0, 0], [1, 0, 0], [1, 1, 0]],
         [[0, 0, 0], [0, 0, 1], [1, 0, 1], [1, 0, 0]],
         [[1, 0, 1], [1, 0, 0], [1, 1, 0], [1, 1, 1]],
         [[0, 0, 1], [0, 0, 0], [0, 1, 0], [0, 1, 1]],
         [[0, 1, 0], [0, 1, 1], [1, 1, 1], [1, 1, 0]],
         [[0, 1, 1], [0, 0, 1], [1, 0, 1], [1, 1, 1]]]
    X = np.array(X).astype(float)
    for i in range(3):
        X[:,:,i] *= size[i]
    X += np.array(o)
    return X

def plotCubeAt2(positions, sizes=None, colors=None, **kwargs):
    if not isinstance(colors, (list, np.ndarray)):
        colors = ["C0"] * len(positions)
    if not isinstance(sizes, (list, np.ndarray)):
        sizes = [(1,1,1)] * len(positions)
    g = []
    for p, s, c in zip(positions, sizes, colors):
        g.append(cuboid_data2(p, size=s))
    return Poly3DCollection(np.concatenate(g),
                            facecolors=np.repeat(colors, 6), **kwargs)

positions = [(-3,5,-2), (1,7,1)]
sizes = [(4,5,3), (3,3,7)]
colors = ["crimson", "limegreen"]

fig = plt.figure()
ax = fig.gca(projection='3d')
ax.set_aspect('equal')

pc = plotCubeAt2(positions, sizes, colors=colors, edgecolor="k")
ax.add_collection3d(pc)

ax.set_xlim([-4,6])
ax.set_ylim([4,13])
ax.set_zlim([-3,9])

plt.show()
```

[![enter image description here](https://i.stack.imgur.com/e3VBn.png)](https://i.stack.imgur.com/e3VBn.png)

B.
Using `plot_surface`
=======================

Adapting the solution from [this question](https://stackoverflow.com/questions/42611342/representing-voxels-with-matplotlib), which uses `plot_surface`, and allowing for different sizes as desired here, seems to work just fine for most cases:

```
from mpl_toolkits.mplot3d import Axes3D
import numpy as np
import matplotlib.pyplot as plt

def cuboid_data(o, size=(1,1,1)):
    # code taken from
    # https://stackoverflow.com/a/35978146/4124317
    # suppose axis direction: x: to left; y: to inside; z: to upper
    # get the length, width, and height
    l, w, h = size
    x = [[o[0], o[0] + l, o[0] + l, o[0], o[0]],
         [o[0], o[0] + l, o[0] + l, o[0], o[0]],
         [o[0], o[0] + l, o[0] + l, o[0], o[0]],
         [o[0], o[0] + l, o[0] + l, o[0], o[0]]]
    y = [[o[1], o[1], o[1] + w, o[1] + w, o[1]],
         [o[1], o[1], o[1] + w, o[1] + w, o[1]],
         [o[1], o[1], o[1], o[1], o[1]],
         [o[1] + w, o[1] + w, o[1] + w, o[1] + w, o[1] + w]]
    z = [[o[2], o[2], o[2], o[2], o[2]],
         [o[2] + h, o[2] + h, o[2] + h, o[2] + h, o[2] + h],
         [o[2], o[2], o[2] + h, o[2] + h, o[2]],
         [o[2], o[2], o[2] + h, o[2] + h, o[2]]]
    return np.array(x), np.array(y), np.array(z)

def plotCubeAt(pos=(0,0,0), size=(1,1,1), ax=None, **kwargs):
    # Plotting a cube element at position pos
    if ax != None:
        X, Y, Z = cuboid_data(pos, size)
        ax.plot_surface(X, Y, Z, rstride=1, cstride=1, **kwargs)

positions = [(-3,5,-2), (1,7,1)]
sizes = [(4,5,3), (3,3,7)]
colors = ["crimson", "limegreen"]

fig = plt.figure()
ax = fig.gca(projection='3d')
ax.set_aspect('equal')

for p, s, c in zip(positions, sizes, colors):
    plotCubeAt(pos=p, size=s, ax=ax, color=c)

plt.show()
```

[![enter image description here](https://i.stack.imgur.com/qItlA.png)](https://i.stack.imgur.com/qItlA.png) Upvotes: 4 <issue_comment>username_2: The following code will work not only for cuboids but for any polygon. **Type your coordinates for x, y and z respectively**

```
import numpy as np
import matplotlib.pyplot as plt
from mpl_toolkits.mplot3d import axes3d
from mpl_toolkits.mplot3d.art3d import Poly3DCollection

# input values
x = [1, 10, 50, 100, 150]
y = [1, 300, 350, 250, 50]
z = [0, 1]

def edgecoord(pointx, pointy, pointz):
    edgex = [pointx[0], pointx[1], pointx[1], pointx[0]]
    edgey = [pointy[0], pointy[1], pointy[1], pointy[0]]
    edgez = [pointz[0], pointz[0], pointz[1], pointz[1]]
    return list(zip(edgex, edgey, edgez))

def coordConvert(x, y, lheight, uheight):
    if len(x) != len(y) and len(x) > 2:
        return
    vertices = []
    # Top layer
    vertices.append(list(zip(x, y, list(np.full(len(x), uheight)))))
    # Side layers
    for it in np.arange(len(x)):
        it1 = it + 1
        if it1 >= len(x):
            it1 = 0
        vertices.append(edgecoord([x[it], x[it1]], [y[it], y[it1]], [lheight, uheight]))
    # Bottom layer
    vertices.append(list(zip(x, y, list(np.full(len(x), lheight)))))
    print(np.array(vertices))
    return vertices

vec = coordConvert(x, y, z[0], z[1])
plt.figure()
plt.subplot(111, projection='3d')
plt.gca().add_collection3d(Poly3DCollection(vec, alpha=.75, edgecolor='k', facecolor='teal'))
plt.xlim([0, 200])
plt.ylim([0, 400])
plt.show()
```

[Polygon Prism](https://i.stack.imgur.com/02P5b.jpg) Upvotes: 2
2018/03/14
1,291
5,049
<issue_start>username_0: In my app there are three languages, i.e. **ar**, **fr** & **en**, and the app changes its language and semantics (layout direction) based on the selected language. The app language is changed properly as required, but its semantics are not changing. Here is the code I run when the user changes the language:

```
let cur_lang = Localize.currentLanguage()
print("current lang; \(cur_lang)")
if cur_lang == "ar" {
    UIView.appearance().semanticContentAttribute = .forceRightToLeft
}
else {
    UIView.appearance().semanticContentAttribute = .forceLeftToRight
}
imgView.image = UIImage(named: "flag".localized())
lblTitle.text = "Title".localized()
lblDescription.text = "Description".localized()
lblName.text = "Name".localized()
tfName.text = "textName".localized()
```

Here is a gif of the screen. [![enter image description here](https://i.stack.imgur.com/QI0v4.gif)](https://i.stack.imgur.com/QI0v4.gif) **Required Output:** [![enter image description here](https://i.stack.imgur.com/DSokp.png)](https://i.stack.imgur.com/DSokp.png) > > Edit > > > If I put all the contents in a view, then it works as expected, but [natural text alignment](https://useyourloaf.com/blog/natural-text-alignment-for-rtl-languages/) and the navigation bar are still unaffected. It is not possible to flip each and every view, as that may reduce app performance if there is a complex layout. So I tried `self.view.semanticContentAttribute = .forceRightToLeft` and it does not make any difference. Every one of my UI components has leading/trailing constraints, not left/right, and **respect language direction** is also selected.
[![enter image description here](https://i.stack.imgur.com/WiWgh.png)](https://i.stack.imgur.com/WiWgh.png)<issue_comment>username_1: Semantic content attribute must be applied to the direct parent of the view you want to flip when change the language , so do this for the parent view of flag imageV and the 2 labels ``` self.directParentView.semanticContentAttribute = .forceRightToLeft ``` Upvotes: 1 <issue_comment>username_2: for textFields - use can use extension like this: ``` extension UITextField { open override func awakeFromNib() { super.awakeFromNib() if UserDefaults.languageCode == "ar" { if textAlignment == .natural { self.textAlignment = .right } } } } ``` Upvotes: 0 <issue_comment>username_3: 1. First step, you need to apply the semantic content attribute as mentioned in the other answers. ``` // get current language using your 'Localize' class let locale = Localize.currentLanguage() UIView.appearance().semanticContentAttribute = locale == "ar" ? .forceRightToLeft : .forceLeftToRight ``` However, this alone would not work because based on [Apple's Doc](https://developer.apple.com/documentation/uikit/uiappearance): > > iOS applies appearance changes when a view enters a window, it doesn’t change the appearance of a view that’s already in a window. To change the appearance of a view that’s currently in a window, remove the view from the view hierarchy and then put it back. > > > 2. The second step is in the doc itself. 
You only need to *remove the view from the view hierarchy and then put it back.*

```
let window = self.view.superview
self.view.removeFromSuperview()
window?.addSubview(self.view)
```

*Note: Since `self.view` is already instantiated and you're only removing it from the view hierarchy and putting it back, you don't need to worry about recreating the view from scratch.* :) [Voila!](https://i.stack.imgur.com/cHrUj.gif) Upvotes: 4 [selected_answer]<issue_comment>username_4: **Swift 4.2, 5.0**

```
if isArabic {
    UIView.appearance().semanticContentAttribute = .forceRightToLeft
    UIButton.appearance().semanticContentAttribute = .forceRightToLeft
    UITextView.appearance().semanticContentAttribute = .forceRightToLeft
    UITextField.appearance().semanticContentAttribute = .forceRightToLeft
} else {
    UIView.appearance().semanticContentAttribute = .forceLeftToRight
    UIButton.appearance().semanticContentAttribute = .forceLeftToRight
    UITextView.appearance().semanticContentAttribute = .forceLeftToRight
    UITextField.appearance().semanticContentAttribute = .forceLeftToRight
}
```

Upvotes: 3 <issue_comment>username_5: Another way to solve this problem is to put your condition code in:

```
override func viewWillLayoutSubviews() {
    if Language.language == .arabic {
        DispatchQueue.main.async {
            self.inputTextField.semanticContentAttribute = .forceRightToLeft
            self.inputTextField.textAlignment = .right
        }
    } else {
        DispatchQueue.main.async {
            self.inputTextField.semanticContentAttribute = .forceLeftToRight
            self.inputTextField.textAlignment = .left
        }
    }
}
```

It simply works ;) Upvotes: 2
2018/03/14
998
3,290
<issue_start>username_0: Here is my build.sbt:

```
name := "tutu"
organization := "com.tutu"
version := "1.0"

// Enables publishing to maven repo
publishMavenStyle := true

// Do not append Scala versions to the generated artifacts
crossPaths := false

// This forbids including Scala related libraries into the dependency
autoScalaLibrary := false

assemblyJarName in assembly := "tutu.jar"
mainClass in assembly := Some(" com.tutu.tutuApplication")

assemblyMergeStrategy in assembly := {
  case x if Assembly.isConfigFile(x) => MergeStrategy.concat
  case PathList(ps @ _*) if Assembly.isReadme(ps.last) || Assembly.isLicenseFile(ps.last) => MergeStrategy.rename
  case PathList("META-INF", xs @ _*) =>
    (xs map {_.toLowerCase}) match {
      case ("io.netty.versions.properties" :: Nil) => MergeStrategy.discard
      case ("manifest.mf" :: Nil) | ("index.list" :: Nil) | ("dependencies" :: Nil) => MergeStrategy.discard
      case ps @ (x :: xs) if ps.last.endsWith(".sf") || ps.last.endsWith(".dsa") => MergeStrategy.discard
      case "plexus" :: xs => MergeStrategy.discard
      case "services" :: xs => MergeStrategy.filterDistinctLines
      case ("spring.schemas" :: Nil) | ("spring.handlers" :: Nil) => MergeStrategy.filterDistinctLines
      case _ => MergeStrategy.deduplicate
    }
  case PathList("javax", "inject", xs @ _*) => MergeStrategy.last
  case _ => MergeStrategy.deduplicate
}

libraryDependencies += "io.dropwizard" % "dropwizard-core" % "1.2.2"
libraryDependencies += "com.squareup.okhttp3" % "okhttp" % "3.0.0-RC1"
libraryDependencies += "junit" % "junit" % "4.11" % Test
libraryDependencies += "io.dropwizard" % "dropwizard-testing" % "1.2.2" % Test

testOptions += Tests.Argument(TestFrameworks.JUnit, "-q")
```

My project structure is:

* src
  + main
  + ...
  + test
  + java
    - com/tutu
      - api
      - client
      - core
      - db
      - resources
        * PingTest.java

If I run

```
$ sbt test
```

it doesn't execute my PingTest, even though the @Test annotation is correctly defined.
How can I make sbt detect my Java test classes?<issue_comment>username_1: You need to read [this](https://www.scala-sbt.org/1.x/docs/Testing.html#JUnit) about junit-interface. In short, you need to add the dependency:

```
"com.novocode" % "junit-interface" % "0.11" % Test
```

Also, when I resolved a similar problem in my project I also needed to add

```
crossPaths := false
```

This comes from the comments on [this](https://github.com/sbt/junit-interface/issues/35) issue. Upvotes: 2 [selected_answer]<issue_comment>username_2: [This same question seems to have been asked multiple times with slightly different framings](https://stackoverflow.com/questions/28174243/run-junit-tests-with-sbt). Multiple hits made it hard for me to find the simplest and most current answer. [This answer by @david.perez seems clear and works with current (2018) SBT 1.1.4.](https://stackoverflow.com/a/28051194/1148030) (That particular question was about conflicting JUnit versions. The `exclude("junit", "junit-dep")` may not be necessary.) I'll also copy-paste the code here for quick access:

```
libraryDependencies ++= Seq(
  "junit" % "junit" % "4.12" % Test,
  "com.novocode" % "junit-interface" % "0.11" % Test exclude("junit", "junit-dep")
)
```

Upvotes: 0
2018/03/14
693
2,605
<issue_start>username_0: I'm trying to connect to a Google Cloud SQL second generation in Python from AppEngine standard (Python 2.7). Until now, I was using MySQLDB driver directly and it was fine. I've tried to switch to SQLAlchemy, but now I'm always having this error when the code is deployed (it seems to work fine in local) resulting in a error 500 (It's not just some connections which are lost, it constantly fails) : ``` OperationalError: (_mysql_exceptions.OperationalError) (2013, "Lost connection to MySQL server at 'reading initial communication packet', system error: 38") (Background on this error at: http://sqlalche.me/e/e3q8) ``` I don't understand because the setup doesn't differ from before, so it must be related to the way I use SQLAlchemy. I use something like this : ``` create_engine("mysql+mysqldb://appuser:password@x.x.x.x/db_name?unix_socket=/cloudsql/gcpProject:europe-west1:instanceName") ``` I've tried different values (with, without the ip, ...). But it is still the same. Is is a version compatibility problem ? I use MySQL-python in the app.yaml and SQLAlchemy 1.2.4 : app.yaml : ``` - name: MySQLdb version: "latest" ``` requirements.txt : ``` SQLAlchemy==1.2.4 ```<issue_comment>username_1: There are a number of causes for connection loss to Google CloudSQL server but quite rightly, you have to ensure that your setup is appropriate first. I don't think this issue is about version compatibility. According to the [documentation](https://cloud.google.com/appengine/docs/flexible/python/using-cloud-sql#setting_connection_strings_and_adding_a_library), for your application to be able to connect to your Cloud SQL instance when the app is deployed, you require to add the user, password, database, and instance connection name variables from Cloud SQL to the related environment variables in the app.yaml file(Your displayed app.yaml does not seem to contain these environment variables). 
I recommend you review the details in the [link](https://cloud.google.com/appengine/docs/flexible/python/using-cloud-sql#configure-csql-instance) for details on how to set up your CloudSQL instance and how to connect to it. Upvotes: 0 <issue_comment>username_2: It was a problem in the URL. In a specific part of the code I was appending "/dbname" to the end of the connection string, resulting in something like this: `mysql+mysqldb://appuser:password@/db_name?unix_socket=/cloudsql/gcpProject:europe-west1:instanceName/dbname` So in the end, **the meaning of this error can also be that the unix socket is wrong.** Upvotes: 2 [selected_answer]
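The mistake is easier to see once the broken URL is split into its parts: the stray "/dbname" ends up inside the `unix_socket` query parameter, so the driver tries to open a socket path that does not exist. A small sketch of this, standard library only, no SQLAlchemy needed:

```python
from urllib.parse import urlsplit, parse_qs

broken = ("mysql+mysqldb://appuser:password@/db_name"
          "?unix_socket=/cloudsql/gcpProject:europe-west1:instanceName/dbname")

parts = urlsplit(broken)
socket_path = parse_qs(parts.query)["unix_socket"][0]
print(parts.path)   # '/db_name': the database is already named here
print(socket_path)  # the extra '/dbname' corrupts the socket path
```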
2018/03/14
1,084
3,352
<issue_start>username_0: I'm trying to make using of String.Substring() to replace every string with its substring from a certain position. I'm having a hard time figuring out the right syntax for this. ``` $dirs = Get-ChildItem -Recurse $path | Format-Table -AutoSize -HideTableHeaders -Property @{n='Mode';e={$_.Mode};width=50}, @{n='LastWriteTime';e={$_.LastWriteTime};width=50}, @{n='Length';e={$_.Length};width=50}, @{n='Name';e={$_.FullName -replace "(.:.*)", "*($(str($($_.FullName)).Substring(4)))*"}} | Out-String -Width 40960 ``` I'm referring to the following expression ``` e={$_.FullName -replace "(.:.*)", "*($(str($($_.FullName)).Substring(4)))*"}} ``` The substring from the 4th character isn't replacing the Full Name of the path. The paths in question are longer than 4 characters. The output is just empty for the Full Name when I run the script. Can someone please help me out with the syntax **EDIT** The unaltered list of strings (as `Get-ChildItem` recurses) would be ``` D:\this\is\where\it\starts D:\this\is\where\it\starts\dir1\file1 D:\this\is\where\it\starts\dir1\file2 D:\this\is\where\it\starts\dir1\file3 D:\this\is\where\it\starts\dir1\dir2\file1 ``` The `$_.FullName` will therefore take on the value of each of the strings listed above. Given an input like *D:\this\is* or *D:\this\is\where*, then I'm computing the length of this input (including the delimiter `\`) and then replacing `$_.FullName` with a substring beginning from the nth position where *n* is the length of the input. If input is D:\this\is, then length is 10. 
Expected output is ``` \where\it\starts \where\it\starts\dir1\file1 \where\it\starts\dir1\file2 \where\it\starts\dir1\file3 \it\starts\dir1\dir2\file1 ```<issue_comment>username_1: When having trouble, simplify: This function will do what you are apparently trying to accomplish: ``` Function Remove-Parent { param( [string]$Path, [string]$Parent) $len = $Parent.length $Path.SubString($Len) } ``` The following is not the way you likely would use it but does demonstrate that the function returns the expected results: ``` @' D:\this\is\where\it\starts D:\this\is\where\it\starts\dir1\file1 D:\this\is\where\it\starts\dir1\file2 D:\this\is\where\it\starts\dir1\file3 D:\this\is\where\it\starts\dir1\dir2\file1 '@ -split "`n" | ForEach-Object { Remove-Parent $_ 'D:\This\Is' } # Outputs \where\it\starts \where\it\starts\dir1\file1 \where\it\starts\dir1\file2 \where\it\starts\dir1\file3 \where\it\starts\dir1\dir2\file1 ``` Just call the function with the current path ($\_.fullname) and the "prefix" you are expecting to remove. The function above is doing this strictly on 'length' but you could easily adapt it to match the actual string with either a string replace or a regex replace. ``` Function Remove-Parent { param( [string]$Path, [string]$Parent ) $remove = [regex]::Escape($Parent) $Path -replace "^$remove" } ``` The output was the same as above. Upvotes: 0 <issue_comment>username_2: If you want to remove a particular prefix from a string you can do so like this: ``` $prefix = 'D:\this\is' ... $_.FullName -replace ('^' + [regex]::Escape($prefix)) ``` To remove a prefix of a given length you can do something like this: ``` $len = 4 ... $_.FullName -replace "^.{$len}" ``` Upvotes: 2 [selected_answer]
2018/03/14
551
1,966
<issue_start>username_0: I need to select rows with distinct values of column A and the minimal value of the Order column in an MSSQL DB. Each record has more columns on the right-hand side. I know I should use GROUP BY, but it throws warnings when I want to keep the right-hand side columns as well. Data set example:

```
A       | Order | Multiple columns ...    |
--------+-------+-------------------------+
A1      | 3     | ...                     |
A1      | 7     | ...                     |
A2      | 2     | ...                     |
A3      | 2     | ...                     |
A3      | 8     | ...                     |
```

So that I want to get these results:

```
A       | Order | Multiple columns ...    |
--------+-------+-------------------------+
A1      | 3     | ...                     |
A2      | 2     | ...                     |
A3      | 2     | ...                     |
```

The query I tried to use, and which throws the warning, is this one:

```
SELECT A, MIN([Order]), Column B, Column C, Column D...
FROM [Table]
GROUP BY A
ORDER BY A
```
<issue_comment>username_1: Probably the most efficient method (with the right indexes) is:

```
select t.*
from t
where t.[order] = (select min(t2.[order]) from t t2 where t2.a = t.a);
```

The more common approach is to use `row_number()`, which I also recommend. Upvotes: 2 <issue_comment>username_2: You could also use *top (1) with ties*:

```
select top (1) with ties a, *
from [Table] t
order by row_number() over (partition by a order by [Order])
```

Upvotes: 3 [selected_answer]<issue_comment>username_3: If your dataset isn't so large that a CTE is out of the question:

```
; WITH CTE1 AS
(
    SELECT A
         , [Order]
         , B
         , C
         , D
         , RowNumber = ROW_NUMBER() OVER (PARTITION BY A ORDER BY [Order] ASC)
    FROM [Table]
)
SELECT A
     , [Order]
     , B
     , C
     , D
FROM CTE1
WHERE (RowNumber = 1)
```

Upvotes: 0
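The correlated-subquery shape suggested above is easy to try out. Here is a small sketch using Python's built-in SQLite driver with the sample data from the question; `Order` is a reserved word, so it is quoted (double quotes in SQLite, where T-SQL would use brackets):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE t (A TEXT, "Order" INTEGER, B TEXT);
    INSERT INTO t VALUES
        ('A1', 3, 'x'), ('A1', 7, 'y'),
        ('A2', 2, 'x'),
        ('A3', 2, 'x'), ('A3', 8, 'y');
""")

# one row per distinct A, keeping the row with the minimal Order
rows = con.execute("""
    SELECT * FROM t
    WHERE "Order" = (SELECT MIN("Order") FROM t t2 WHERE t2.A = t.A)
    ORDER BY A
""").fetchall()
print(rows)  # [('A1', 3, 'x'), ('A2', 2, 'x'), ('A3', 2, 'x')]
```

Note that, unlike the ROW_NUMBER variants, this returns more than one row per group if two rows tie on the minimal Order value.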
2018/03/14
1,179
4,211
<issue_start>username_0: I'm using the Contact Form 7 plugin in Wordpress. I need to populate a Select field with "crops" that are in a MySQL DB, and depending on the chosen one, a second Select has to be populated with this crop's "varieties". I know that it is possible using CF7 conditional fields, but there are 53 crops and more than 400 varieties. So, it would be a lot of fields in the DB and in the contact form. I think it should be done using ajax, but I have no idea about this, and I can not find any example to learn from using ajax + CF7 + Wordpress. What I would like to do is make a query to MySQL (`Select variedad from cultivos where cultivo = 'id_cultivo'`) and populate the select with the result. Does anyone know how I can do this?<issue_comment>username_1: > > I need to populate a Select field with "crops" that are in a MySQL DB > > > Use a dynamic dropdown field from the [Smart Grid plugin extension for CF7](https://wordpress.org/plugins/cf7-grid-layout/). It allows you to populate your dropdown using either a set of existing posts or a taxonomy, or else it also has the ability to populate using a hook, so you can build a custom SQL query if your 'crops' are in a custom table. > > and depending on the chosen one, a second Select has to be populated with this crop's "varieties" > > > So this is trickier. The dynamic dropdown populated using a taxonomy allows you to build a jquery [select2](https://select2.org/) dropdown, which makes it much easier to search through long lists of options. Furthermore, if your taxonomy is hierarchical (2 levels deep), it will use the parent level terms as option groups in the dropdown and the actual options will be the child terms. Hence you could organise your 'crops' and 'varieties' as a taxonomy of parent->child terms. Crops that have no variety would simply have themselves as a variety, and your dropdown would allow users to select a variety within a specific category. 
This would make it simpler to set up on your form, without the requirement of hidden dropdowns. It is also a lot more scalable & maintainable, as whenever you update your crop/variety terms, it is dynamically reflected in the form without having to change your form. I hope all the above makes sense. If you wish to explore this further, you can post on the plugin support forum. Upvotes: 2 [selected_answer]<issue_comment>username_2: I used the following code to solve my problem: In the js file: ``` jQuery(document).ready(function($) { $("#idsubfamilia").change(function(e) { e.preventDefault(); jQuery.post(MyAjax.url, {action : 'buscar_posts' ,cadena : $('#idsubfamilia').val() }, function(response) { // $('#variedad_container').hide().html(response).fadeIn(); $('#variedad_container').hide().html(response).fadeIn(); }); }); }); ``` In functions.php (thanks to <http://masquewordpress.com/como-utilizar-ajax-correctamente-en-wordpress/>): ``` /* AJAX */ // First we include our javascript file defined above wp_enqueue_script( 'mi-script-ajax',get_bloginfo('stylesheet_directory') . '/js/ajax-search.js', array( 'jquery' ) ); // now we declare the MyAjax variable and pass it the url value (wp-admin/admin-ajax.php) used by ajax-search.js wp_localize_script( 'mi-script-ajax', 'MyAjax', array( 'url' => admin_url( 'admin-ajax.php' ) ) ); // To handle admin-ajax we have to add these two actions. // IMPORTANT!! For this to work, replace "buscar_posts" with the action you defined in ajax-search.js add_action('wp_ajax_buscar_posts', 'buscar_posts_callback'); add_action('wp_ajax_nopriv_buscar_posts', 'buscar_posts_callback'); function buscar_posts_callback() { global $wpdb; $subfamilia=$_POST['cadena']; //$datos = $wpdb->get_results('SELECT id, subfamilia_' . $idioma . ' FROM wp_custom_subfamilias ORDER BY subfamilia_' . $idioma); $datos = $wpdb->get_results("SELECT variedad FROM wp_custom_variedad where subfamilia_fk='".$subfamilia."' ORDER BY variedad"); echo '<select name="variedad">'; echo '<option value=""></option>'; foreach ($datos as $dato) { echo '<option value="'.$dato->variedad.'">'.$dato->variedad.'</option>'; } echo '</select>'; die(); // Always end with die } ``` Upvotes: 0
2018/03/14
362
1,458
<issue_start>username_0: I'm using the Android Data Binding library to bind an xml layout that has an `<include>` of another layout.xml ``` <?xml version="1.0" encoding="utf-8"?> <layout xmlns:android="http://schemas.android.com/apk/res/android"> ... <include android:id="@+id/includedLayout" layout="@layout/some_other_layout" /> ... </layout> ``` In the generated Databinding class for the xml I see this property: ``` @Nullable public final com.example.databinding.SomeOtherLayoutBinding includedLayout; ``` Why is it annotated as `@Nullable`? The `<include>` is in the layout and as I see it, it is obviously non-null. What am I missing? It forces me to use the non-null assertion operator `!!` in Kotlin code when accessing the fields of the included layout, and I'm wondering if it is safe or if there is something I'm not considering here ``` val binder = DataBindingUtil.bind(view) val someView = binder.includedLayout!!.someView ```<issue_comment>username_1: For the latest version of the databinding compiler (3.1.0), to resolve that issue with nullable bindings of included layouts you could set `android.databinding.enableV2=true` in the **gradle.properties** file inside your project. After that you need to invoke a rebuild. After that, all included layout bindings will be marked with the `@NonNull` annotation. Upvotes: 1 <issue_comment>username_2: According to the Documentation on View Binding, when you have multiple layouts for configuration changes, if the view is only present in some configurations, the binding class will be marked as nullable. [View Binding Docs](https://developer.android.com/topic/libraries/view-binding) Upvotes: 3
2018/03/14
351
1,172
<issue_start>username_0: I have some strings: ``` 1:2:3:4:5 2:3:4:5 5:3:2 6:7:8:9:0 ``` How do I find the strings that have an exact number of colons? For example, I need to find the strings with 4 colons. Result: `1:2:3:4:5` and `6:7:8:9:0` `Edit:` The text between the colons does not matter, so it may be: ``` qwe:::qwe: :998:qwe:3ee3:00 ``` I have to specify a `number` of colons, using `regexp_matches`. It is something like a filter to search for broken strings. Thanks.
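As an illustrative aside (not part of the thread), the "exactly N colons" requirement can be prototyped as a regular expression before touching the database; the pattern below is a hedged sketch whose shape (`^([^:]*:){N}[^:]*$`) PostgreSQL's POSIX regex engine also accepts:

```python
import re

lines = [
    "1:2:3:4:5",
    "2:3:4:5",
    "5:3:2",
    "6:7:8:9:0",
    "qwe:::qwe:",
    ":998:qwe:3ee3:00",
]

n = 4  # required number of colons
# "([^:]*:){n}" consumes exactly n colons with any non-colon text between
# them, and the trailing "[^:]*" forbids any further colons.
pattern = re.compile(r"^([^:]*:){%d}[^:]*$" % n)
matches = [s for s in lines if pattern.match(s)]
print(matches)
```

The four-colon strings from the question (including the `Edit:` examples) match; the others do not.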
2018/03/14
837
3,833
<issue_start>username_0: In my Android application I have 2 fields, one for dateOfBirth and another for dateOfMarriage. I want to add validation so that the difference between the 2 fields must be > 18 years. Can anyone help? Thanks in advance<issue_comment>username_1: ``` mycalendar=Calendar.getInstance(); final DatePickerDialog.OnDateSetListener date = new DatePickerDialog.OnDateSetListener() { @Override public void onDateSet(DatePicker view, int year, int monthOfYear, int dayOfMonth) { // TODO Auto-generated method stub SimpleDateFormat sdf = new SimpleDateFormat("yyyy/MM/dd",Locale.US); Date parse = null; Date current =null; Calendar calendar=Calendar.getInstance(); int current_year=calendar.get(Calendar.YEAR); int dates=calendar.get(Calendar.DAY_OF_MONTH); int hours=calendar.get(Calendar.HOUR_OF_DAY); int minute=calendar.get(Calendar.MINUTE); int months=calendar.get(Calendar.MONTH)+1; int seconds=calendar.get(Calendar.SECOND); TimeZone tz = TimeZone.getDefault(); String timezone = tz.getDisplayName(false, TimeZone.SHORT); String current_date=(""+current_year+"/"+months+"/"+dates+timezone); String month=mycalendar.getDisplayName(Calendar.MONTH, Calendar.SHORT, Locale.getDefault()); String day=mycalendar.getDisplayName(Calendar.DAY_OF_MONTH, Calendar.SHORT, Locale.getDefault()); mycalendar.set(Calendar.YEAR, year); mycalendar.set(Calendar.MONTH, monthOfYear); mycalendar.set(Calendar.DAY_OF_MONTH, dayOfMonth); String selected_date=(""+year+"/"+monthOfYear+"/"+dayOfMonth+timezone); try { current=sdf.parse(current_date); parse=sdf.parse(selected_date); } catch (ParseException e) { e.printStackTrace(); } long diff=current_year-year; if(diff==18){ if(dates>=dayOfMonth){ edit_login_date.setText(sdf.format(mycalendar.getTime())); date_of_birth = edit_login_date.getText().toString(); } else { Toast.makeText(AddCandidates.this, "Age should be at least 18 years ", Toast.LENGTH_SHORT).show(); } } else if(diff>18){ edit_login_date.setText(sdf.format(mycalendar.getTime())); 
date_of_birth = edit_login_date.getText().toString(); } else { Toast.makeText(AddCandidates.this, "Age should be at least 18 years ", Toast.LENGTH_SHORT).show(); } ``` Upvotes: -1 <issue_comment>username_2: ``` String dateStart = dob.getText().toString().trim(); String dateStop = dom.getText().toString().trim(); //Date format SimpleDateFormat format = new SimpleDateFormat("MM/dd/yyyy"); Date d1 = null; Date d2 = null; try { d1 = format.parse(dateStart); d2 = format.parse(dateStop); } catch (ParseException e) { e.printStackTrace(); } long diff = d2.getYear() - d1.getYear(); if (diff < 19){ System.err.println("Difference in number of years : " + diff); Toast.makeText(DIffdates.this, "DOB & marriage should have at least an 18 year gap ", Toast.LENGTH_SHORT).show(); } ``` Upvotes: 1
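The year subtraction used in the answers above can miscount when the month/day anniversary has not yet been reached. As a language-neutral sketch (written in Python rather than the thread's Java, with illustrative dates), an exact whole-years check compares the month and day as well:

```python
from datetime import date

def full_years_between(earlier: date, later: date) -> int:
    """Whole years elapsed from `earlier` to `later`, counting a year
    only once its anniversary (month/day) has been reached."""
    years = later.year - earlier.year
    if (later.month, later.day) < (earlier.month, earlier.day):
        years -= 1
    return years

birth = date(2000, 6, 15)
marriage = date(2018, 6, 14)  # one day short of the 18th anniversary
print(full_years_between(birth, marriage))
```

A pure year subtraction would report 18 here, while only 17 full years have actually elapsed.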
2018/03/14
2,220
7,305
<issue_start>username_0: I'm looking for exactly this operation: [How do I duplicate rows based on cell contents (cell contains semi-colon seperated data)](https://stackoverflow.com/questions/44732943/how-do-i-duplicate-rows-based-on-cell-contents-cell-contains-semi-colon-seperat) But with an added column: [Starting table vs End result](https://i.stack.imgur.com/abhnv.jpg) What I have: ``` | Name | Size | Photo | |--------|------------|---------| | Tshirt | 10, 12, 14 | 144.jpg | | Jeans | 30, 40, 42 | 209.jpg | | Dress | 8 | 584.jpg | | Shoe | 6 | 178.jpg | ``` What I would like: ``` | Name | Size | Photo | Primary | |--------|------|---------|---------| | Tshirt | 10 | 144.jpg | 1 | | Tshirt | 12 | 144.jpg | 0 | | Tshirt | 14 | 144.jpg | 0 | | Jeans | 30 | 209.jpg | 1 | | Jeans | 40 | 209.jpg | 0 | | Jeans | 42 | 209.jpg | 0 | | Dress | 8 | 584.jpg | 1 | | Shoe | 6 | 178.jpg | 1 | ``` Right now the code I found works perfectly but I don't know how to add the "Primary" column. ``` Sub SplitCell() Dim cArray As Variant Dim cValue As String Dim rowIndex As Integer, strIndex As Integer, destRow As Integer Dim targetColumn As Integer Dim lastRow As Long, lastCol As Long Dim srcSheet As Worksheet, destSheet As Worksheet targetColumn = 2 'column with semi-colon separated data Set srcSheet = ThisWorkbook.Worksheets("Sheet1") 'sheet with data Set destSheet = ThisWorkbook.Worksheets("Sheet2") 'sheet where result will be displayed destRow = 0 With srcSheet lastRow = .Cells(.Rows.Count, "A").End(xlUp).Row lastCol = .Cells(1, .Columns.Count).End(xlToLeft).Column For rowIndex = 1 To lastRow cValue = .Cells(rowIndex, targetColumn).Value 'getting the cell with semi-colon separated data cArray = Split(cValue, ";") 'splitting semi-colon separated data in an array For strIndex = 0 To UBound(cArray) destRow = destRow + 1 destSheet.Cells(destRow, 1) = .Cells(rowIndex, 1) destSheet.Cells(destRow, 2) = Trim(cArray(strIndex)) destSheet.Cells(destRow, 3) = .Cells(rowIndex, 3) Next 
strIndex Next rowIndex End With End Sub ``` Thanks for your help!<issue_comment>username_1: Here is a slightly different approach, which avoids the second loop. ``` Sub SplitCell() Dim cArray As Variant Dim rowIndex As Long, destRow As Long Dim targetColumn As Long Dim lastRow As Long, lastCol As Long Dim srcSheet As Worksheet, destSheet As Worksheet targetColumn = 2 'column with semi-colon separated data Set srcSheet = ThisWorkbook.Worksheets("Sheet1") 'sheet with data Set destSheet = ThisWorkbook.Worksheets("Sheet2") 'sheet where result will be displayed destRow = 1 With srcSheet lastRow = .Cells(.Rows.Count, "A").End(xlUp).Row lastCol = .Cells(1, .Columns.Count).End(xlToLeft).Column destSheet.Cells(1, 4).Value = "Primary" For rowIndex = 1 To lastRow cArray = Split(srcSheet.Cells(rowIndex, targetColumn), ";") 'splitting semi-colon separated data in an array destSheet.Cells(destRow, 1).Resize(UBound(cArray) + 1).Value = srcSheet.Cells(rowIndex, targetColumn - 1).Value destSheet.Cells(destRow, 2).Resize(UBound(cArray) + 1).Value = Application.Transpose(cArray) destSheet.Cells(destRow, 3).Resize(UBound(cArray) + 1).Value = srcSheet.Cells(rowIndex, targetColumn + 1).Value If rowIndex > 1 Then destSheet.Cells(destRow, 4).Value = 1 If UBound(cArray) > 0 Then destSheet.Cells(destRow + 1, 4).Resize(UBound(cArray)).Value = 0 End If destRow = destSheet.Cells(Rows.Count, 1).End(xlUp).Row + 1 Next rowIndex End With End Sub ``` Upvotes: 0 <issue_comment>username_2: Note: I am using this "," delimiter as your data shows that rather than your code which is using ";". Simply swop if necessary. 
``` Option Explicit Sub SplitCell() Dim cArray As Variant Dim cValue As String Dim rowIndex As Long, strIndex As Long, destRow As Long Dim targetColumn As Long Dim lastRow As Long, lastCol As Long Dim srcSheet As Worksheet, destSheet As Worksheet targetColumn = 2 'column with semi-colon separated data Set srcSheet = ThisWorkbook.Worksheets("Sheet1") 'sheet with data Set destSheet = ThisWorkbook.Worksheets("Sheet2") 'sheet where result will be displayed destRow = 0 With srcSheet lastRow = .Cells(.Rows.Count, "A").End(xlUp).Row lastCol = .Cells(1, .Columns.Count).End(xlToLeft).Column For rowIndex = 1 To lastRow cValue = .Cells(rowIndex, targetColumn).Value 'getting the cell with semi-colon separated data cArray = Split(cValue, ",") 'splitting semi-colon separated data in an array For strIndex = 0 To UBound(cArray) destRow = destRow + 1 destSheet.Cells(destRow, 1) = .Cells(rowIndex, 1) destSheet.Cells(destRow, 2) = Trim(cArray(strIndex)) destSheet.Cells(destRow, 3) = .Cells(rowIndex, 3) If rowIndex = 1 Then destSheet.Cells(destRow, 4) = "Primary" Else If strIndex = 0 Then destSheet.Cells(destRow, 4) = 1 Else destSheet.Cells(destRow, 4) = 0 End If End If Next strIndex Next rowIndex End With End Sub ``` Upvotes: 0 <issue_comment>username_3: Try this slight modification of your code, you'll have to declare additional variable `Dim priority As Boolean`: ``` For rowIndex = 1 To lastRow cValue = .Cells(rowIndex, targetColumn).Value 'getting the cell with semi-colon separated data cArray = Split(cValue, ";") 'splitting semi-colon separated data in an array priority = True For strIndex = 0 To UBound(cArray) destRow = destRow + 1 destSheet.Cells(destRow, 1) = .Cells(rowIndex, 1) destSheet.Cells(destRow, 2) = Trim(cArray(strIndex)) destSheet.Cells(destRow, 3) = .Cells(rowIndex, 3) destSheet.Cells(destRow, 4) = IIf(priority, 1, 0) priority = False Next strIndex Next rowIndex ``` Upvotes: 3 [selected_answer]<issue_comment>username_4: your whole sub can boil down to: ``` Sub 
SplitCell() Dim vals As Variant vals = ThisWorkbook.Worksheets("Sheet001").Range("A1").CurrentRegion.value Dim iVal As Long With ThisWorkbook.Worksheets("Sheet002") .Range("A1:C1").value = Application.index(vals, 1, 0) .Range("D1").value = "Primary" For iVal = 2 To UBound(vals) With .Cells(.Rows.Count, 1).End(xlUp).Offset(1).Resize(UBound(Split(vals(iVal, 2) & ",", ","))) .Offset(, 0).value = vals(iVal, 1) .Offset(, 1).value = Application.Transpose(Split(vals(iVal, 2) & ",", ",")) .Offset(, 2).value = vals(iVal, 3) .Offset(, 3).value = Application.Transpose(Split("1," & String(.Rows.Count - 1, ","), ",")) End With Next .Range("D1", .Cells(.Rows.Count, 1).End(xlUp)).SpecialCells(xlCellTypeBlanks).value = 0 End With End Sub ``` Upvotes: 0
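As a hedged aside outside VBA (not part of the macros above), the same duplicate-rows-and-flag task has a compact pandas form, where `explode` expands the split sizes into one row each and `cumcount` marks the first row of each group as Primary:

```python
import pandas as pd

df = pd.DataFrame({
    "Name": ["Tshirt", "Jeans", "Dress", "Shoe"],
    "Size": ["10, 12, 14", "30, 40, 42", "8", "6"],
    "Photo": ["144.jpg", "209.jpg", "584.jpg", "178.jpg"],
})

# Split the comma-separated sizes into lists, then explode one row per size.
out = df.assign(Size=df["Size"].str.split(", ")).explode("Size").reset_index(drop=True)
# The first row of each Name group is the "Primary" one.
out["Primary"] = (out.groupby("Name").cumcount() == 0).astype(int)
print(out)
```

This reproduces the question's "end result" table, including the 1/0 Primary column.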
2018/03/14
688
2,611
<issue_start>username_0: I'm using an AJAX module, `d_quickcheckout`, for a faster checkout page on `opencart 2.1` (not the default one). The problem is that one field in the `payment address` section is not selected by `default`; this is the `region/state` field. At the moment the field has the region/state where the store is located. Even if I remove the field, this `region/state` doesn't show at the checkout page but is shown on the `invoice`! I want this field to be like `--Select State--` or with a default `value="0"` and `$text_none`. These are the two code blocks that I think I must change: HTML ``` php foreach ($addresses as $address) { ? > php echo $address['firstname']; ? php echo $address['lastname']; ?, php echo $address['address\_1']; ?, php echo $address['city']; ?, php echo $address['zone']; ?, php echo $address['country']; ? php } ? ``` AJAX: ``` function refreshPaymentAddessZone(value) { $.ajax({ url: 'index.php?route=module/quickcheckout/country&country_id=' + value, dataType: 'json', beforeSend: function() { }, complete: function() { }, success: function(json) { if (json['postcode_required'] == '1') { $('#payment-postcode-required').show(); } else { $('#payment-postcode-required').hide(); } html = 'php echo $text\_select; ?'; if (json['zone'] != '') { for (i = 0; i < json['zone'].length; i++) { html += '') { html += ' selected="selected"'; } html += '>' + json['zone'][i]['name'] + ''; } } else { html += 'php echo $text\_none; ?'; } $('#payment_address_wrap select[name=\'payment_address[zone_id]\']').html(html); }, error: function(xhr, ajaxOptions, thrownError) { console.log(thrownError + "\r\n" + xhr.statusText + "\r\n" + xhr.responseText); } }); } ```<issue_comment>username_1: You can comment out the $.ajax call and the dropdown will be empty all the time. Upvotes: 1 [selected_answer]<issue_comment>username_2: You can try this code block, instead of your current select: ``` -- Select State -- php foreach ($addresses as $address) { ?
php echo $address['firstname']; ? php echo $address['lastname']; ?, php echo $address['address\_1']; ?, php echo $address['city']; ?, php echo $address['zone']; ?, php echo $address['country']; ? php } ? ``` If this isn't sufficient, remove the AJAX-call as well. Upvotes: 1
2018/03/14
689
2,738
<issue_start>username_0: I call delete in the destructor but it says: identifier "data" is undefined! Shouldn't delete work in the destructor? ``` struct Coada { Coada(int size_max=0) { int prim = -1; int ultim = -1; int *data = new int[size_max]; } ~Coada() { delete[] data; } }; ```<issue_comment>username_1: To delete a pointer, the value of the pointer has to be stored until the point of deletion. Since the pointer `data` only exists until the constructor returns, and no copies are made of the pointer value, the pointer cannot be deleted after the constructor returns. Since it wasn't deleted before that, the allocated memory will have leaked. Furthermore, a variable cannot be accessed outside of its scope. `data` is a local variable of the constructor, and cannot be accessed outside of that function. There is no variable `data` in the destructor; hence the error from your compiler. So, if you do allocate something in a function, and don't wish to deallocate it within that function, you must store the pointer somewhere. Since the function where you allocate is a constructor, it would be natural to store the pointer in a member variable. The destructor can access member variables and would therefore be able to delete the pointer. However, keep in mind that it is extremely rare for a C++ programmer to need to do manual memory management. It should be avoided as much as possible. For example, in this case, it would be smart to use `std::vector` to allocate a dynamically sized array. Upvotes: 1 <issue_comment>username_2: Everything the others say is correct and you should follow it. To answer your question: yes, delete should work in the destructor if you do it right. 
Here is an example how it will work: ``` struct Coada{ Coada(int size_max=0){ int prim = -1; int ultim = -1; data = new int[size_max]; } ~Coada(){ delete[] data; } private: int* data; }; ``` You can see I declare `data` as member variable of struct Coada so I can access it everywhere in this struct, also in destructor. But all of them you will learn in a good c++ book. Enjoy reading :) Upvotes: 0 <issue_comment>username_3: This should work using a class for your object : ``` class Coada { private: int prim; int ultim; int *data; public: Coada(int size_max=0) { this->prim = -1; this->ultim = -1; this->data = new int[size_max]; } ~Coada() { delete[] this->data; } }; int main(void) { Coada my_coada(4); return 0; } ``` Upvotes: 1 [selected_answer]
2018/03/14
834
2,897
<issue_start>username_0: So for input: ``` <NAME> ``` I should get output: ``` Arrondissement de Boulogne-sur-Mer Arrondissement Den Bosch ``` So it should give back both results. So in the code below I've capitalized every first character of each word, but this isn't correct because some words do not start with an upper case. ``` public ArrayList getAllCitiesThatStartWithLetters(String letters) { ArrayList filteredCities = new ArrayList<>(); if (mCities != null) { for (City city : mCities) { if (city.getName().startsWith(capitalize(letters))) { filteredCities.add(city); } } } return filteredCities; } public String capitalize(String capString){ StringBuffer capBuffer = new StringBuffer(); Matcher capMatcher = Pattern.compile("([a-z])([a-z]\*)", Pattern.CASE\_INSENSITIVE).matcher(capString); while (capMatcher.find()){ capMatcher.appendReplacement(capBuffer, capMatcher.group(1).toUpperCase() + capMatcher.group(2).toLowerCase()); } return capMatcher.appendTail(capBuffer).toString(); } ```<issue_comment>username_1: You can use [`String::matches`](https://docs.oracle.com/javase/8/docs/api/java/lang/String.html#matches-java.lang.String-) with this regex `(?i)searchWord.*`; note the `(?i)`, which means **case-insensitive**: ``` String searchWord = "<NAME>"; String regex = "(?i)" + Pattern.quote(searchWord) + ".*"; boolean check1 = "Arrondissement de Boulogne-sur-Mer".matches(regex); //true boolean check2 = "Arrondissement Den Bosch".matches(regex); //true ``` I used [`Pattern::quote`](https://docs.oracle.com/javase/8/docs/api/java/util/regex/Pattern.html#quote-java.lang.String-) to escape the special characters in your input. Upvotes: 0 <issue_comment>username_2: `String` has a very useful [`regionMatches`](https://docs.oracle.com/javase/7/docs/api/java/lang/String.html#regionMatches(boolean,%20int,%20java.lang.String,%20int,%20int)) method with an `ignoreCase` parameter, so you can check if a region of a string matches another string case insensitively. 
``` String alpha = "My String Has Some Capitals"; String beta = "my string"; if (alpha.regionMatches(true, 0, beta, 0, beta.length())) { System.out.println("It matches"); } ``` Upvotes: 3 [selected_answer]<issue_comment>username_3: Instead of using a regex for your comparison, use the `String.toLowerCase()` so that both strings will be in lowercase, removing the need to account for upper-case values. use `String.split(" ")` to turn your phrase into an array of strings, and you will easily be able to turn the first character of each index into an upper-case letter. Re-combine your array of strings to form your phrase, and there you have it. Upvotes: 0 <issue_comment>username_4: You can use apache String utils: ``` import org.apache.commons.lang.StringUtils; ... StringUtils.startsWithIgnoreCase(city.getName(), letters) ``` Upvotes: 1
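The idea behind `regionMatches(true, …)` — a case-insensitive prefix test — can be sketched outside Java as well; here is a hedged Python illustration (the city list and search prefix are made up for the example, not taken from the question's data):

```python
def cities_starting_with(cities, letters):
    """Case-insensitive prefix filter, the Python analogue of
    String.regionMatches(ignoreCase=true, ...)."""
    needle = letters.casefold()
    return [c for c in cities if c.casefold().startswith(needle)]

cities = [
    "Arrondissement de Boulogne-sur-Mer",
    "Arrondissement Den Bosch",
    "Amsterdam",
]
print(cities_starting_with(cities, "arrondissement d"))
```

Normalizing both sides with `casefold` avoids any need to re-capitalize the user's input, which is the pitfall the question ran into.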
2018/03/14
1,256
4,795
<issue_start>username_0: ``` .MODEL TINY Kod SEGMENT ORG 100h/256 ASSUME CS:Kod, DS:Tekst, SS:Stosik Start: jmp Petla Tekst DD napis, '$' Poczatek: mov bl, napis Petla: cmp ah, '$' mov al, [bx] jne Wyswietlenie inc bh mov [bx], ax cmp al, '$' mov [bx - 1], ax je Wyswietlenie mov [bx], bl dec bl jmp Petla Wyswietlenie: mov ah, 09h mov dx, OFFSET Tekst int 21h mov ax, 4C70h int 21h ENDPRG Poczatek KOD ENDS ``` The error in DOS is "**Fatal** program.asm(56) Unexpected end of file encountered". The program should change letters in the word. Any suggestions on what I can do? I don't know what to edit to make it even launch in DOS so I can check it step by step in the debugger.<issue_comment>username_1: Without knowing what the original code you were given to fix is, and what the program is supposed to do, I can tell you how to fix the issues that will allow you to at least compile and link this as a DOS COM program. I know that this assignment has a header (comments) that you have removed, so I don't know what the program is supposed to do. If you provided the original assignment (including the header) in an update to your question I might be able to help you further. As it stands, with a DOS COM program you don't create `SEGMENTS` like a DOS EXE. So you have to remove the `kod SEGMENT`, `ASSUME CS:Kod, DS:Tekst, SS:Stosik`, and `KOD ENDS`. You will have to place a `.code` directive after `.model TINY` and set the origin point with `org 100h`. A COM program needs an entry point. The entry point is `Start`. You need to end a COM program with an `END` statement that has the name of the entry point in it. So the end of your file needs `END Start`. The line `Tekst DD napis, '$'` needs to be `Tekst DB "napis", '$'`. Strings are created with the `DB` (byte) directive and the string needs to be enclosed in quotes. 
The line `mov bl, napis` needs to move the offset (address) of `Tekst` to BX, not `napis`, so it should be `mov bx, offset Tekst`. The code to get you started so that you can at least assemble and link can look like this: ``` .MODEL TINY .code ORG 100h Start: jmp Poczatek Tekst DB "napis", '$' Poczatek: mov bx, offset Tekst Petla: cmp ah, '$' mov al, [bx] jne Wyswietlenie inc bh mov [bx], ax cmp al, '$' mov [bx - 1], ax je Wyswietlenie mov [bx], bl dec bl jmp Petla Wyswietlenie: mov ah, 09h mov dx, OFFSET Tekst int 21h mov ax, 4C70h int 21h END Start ``` You should be able to use the turbo debugger to run and test the program and fix the logical errors that I can't help you with given the information provided. --- I suspect from the code that the intent is to swap each pair of characters until the end of the string is found. If that is the case then the main part of the code would probably be this: ``` Start: jmp Poczatek Tekst DB "napis", '$' Poczatek: mov bx, offset Tekst Petla: mov ah, [bx] cmp ah, '$' je Wyswietlenie inc bx mov al, [bx] cmp al, '$' je Wyswietlenie mov [bx - 1], al mov [bx], ah inc bx jmp Petla ``` Upvotes: 2 <issue_comment>username_2: First and foremost, if you have used `START:` then you have to end it as well; to end it, write `END START` at the end of the code. Further, the assembler NEEDS the END directive as the END OF FILE command. So, as we saw, **"END START"** has to be included in the code. But where? The answer is that the assembler always looks for the **END** directive as the END OF FILE command. So, include **END START** as the last line of the code. Note: the FATAL ERROR, UNEXPECTED END OF FILE error will be solved by this, but I have not debugged other possible errors, if there are any. Upvotes: 0
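username_1's guess about the intent — swap each adjacent pair of characters until the `'$'` terminator — is easy to model in a high-level language before stepping through the assembly; the following Python sketch is an assumption about the assignment's behavior, not a translation of the original code:

```python
def swap_pairs(dollar_terminated: str) -> str:
    """Model of the suspected logic: walk the string two bytes at a
    time, swapping each pair, and stop at the '$' sentinel."""
    chars = list(dollar_terminated)
    i = 0
    while i + 1 < len(chars) and chars[i] != "$" and chars[i + 1] != "$":
        chars[i], chars[i + 1] = chars[i + 1], chars[i]
        i += 2
    return "".join(chars)

print(swap_pairs("napis$"))  # "anips$"
```

Having the expected result for a small input like `"napis$"` makes it much easier to verify each iteration in the Turbo Debugger.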
2018/03/14
552
1,777
<issue_start>username_0: I'm trying to move the title up inside the picture. I created a div but it didn't appear correctly, so I removed it and changed the code of the existing div instead. [![This is what I want.](https://i.stack.imgur.com/kbq9U.png)](https://i.stack.imgur.com/kbq9U.png) but it shows [![this](https://i.stack.imgur.com/UPYUg.png)](https://i.stack.imgur.com/UPYUg.png). I tried margin-top but it did not work. CSS ``` .lazy-enabled #main-content .first-news img { height: 310px; } .lazy-enabled #main-content .first-news .tie-appear.post-thumbnail a{ background-color: red !important; display: table-cell; font-size: 25px; color: white; } ``` Div PHP ``` [php the\_post\_thumbnail( 'tie-medium' ); ?](<?php the_permalink(); ?>) [php the\_title(); ?](<?php the_permalink(); ?>) ------------------------------------------------ php endif; ? .lazy-enabled #main-content .first-news .tie-appear.post-thumbnail a{ background-color: red !important; display: table-cell; } ```<issue_comment>username_1: Use `position: absolute` to position it at the top of a relatively positioned div Upvotes: 0 <issue_comment>username_2: You need the thumbnail wrapper to be `position: relative`, and the title to be `position: absolute`. Then you can put it on the bottom of the wrapper and add whatever margins you require. ```css .post-thumbnail { position: relative; } .post-thumbnail .post-box-title { position: absolute; bottom: 0; margin: 20px; } .post-thumbnail .post-box-title a { background-color: red !important; display: table-cell; font-size: 25px; color: white; } ``` ```html [![](https://www.w3schools.com/howto/img_fjords.jpg)](#) [Post Title](#) --------------- ``` Upvotes: 2 [selected_answer]
2018/03/14
578
1,727
<issue_start>username_0: An irregular time series `data` is stored in a `pandas.DataFrame`. A `DatetimeIndex` has been set. I need the time difference between consecutive entries in the index. I thought it would be as simple as ``` data.index.diff() ``` but got ``` AttributeError: 'DatetimeIndex' object has no attribute 'diff' ``` I tried ``` data.index - data.index.shift(1) ``` but got ``` ValueError: Cannot shift with no freq ``` I do not want to infer or enforce a frequency first before doing this operation. There are large gaps in the time series that would be expanded to large runs of `nan`. The point is to find these gaps first. So, what is a clean way to do this seemingly simple operation?<issue_comment>username_1: There is no implemented `diff` function yet for index. However, it is possible to convert the index to a `Series` first by using [`Index.to_series`](http://pandas.pydata.org/pandas-docs/stable/generated/pandas.Index.to_series.html), if you need to preserve the original index. Use the `Series` constructor with no index parameter if the default index is needed. Code example: ``` rng = pd.to_datetime(['2015-01-10','2015-01-12','2015-01-13']) data = pd.DataFrame({'a': range(3)}, index=rng) print(data) a 2015-01-10 0 2015-01-12 1 2015-01-13 2 a = data.index.to_series().diff() print(a) 2015-01-10 NaT 2015-01-12 2 days 2015-01-13 1 days dtype: timedelta64[ns] a = pd.Series(data.index).diff() print(a) 0 NaT 1 2 days 2 1 days dtype: timedelta64[ns] ``` Upvotes: 5 [selected_answer]<issue_comment>username_2: This question is a bit old but anyway... I use `numpy.diff(data.index)` to get the time deltas. Working fine. Upvotes: 1
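Since the stated goal is to find the large gaps, the `to_series().diff()` approach extends naturally; here is a hedged follow-on sketch (the 3-day threshold is arbitrary, chosen only for illustration):

```python
import pandas as pd

rng = pd.to_datetime(["2015-01-10", "2015-01-12", "2015-01-13", "2015-02-01"])
data = pd.DataFrame({"a": range(4)}, index=rng)

# Differences between consecutive index entries, keeping the original index.
deltas = data.index.to_series().diff()

# Flag "gaps": any step larger than the chosen threshold. The leading NaT
# compares as False, so the first row is excluded automatically.
gaps = deltas[deltas > pd.Timedelta(days=3)]
print(gaps)
```

Each surviving entry is timestamped at the point *after* the gap, which identifies where the series resumes.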
2018/03/14
761
2,340
<issue_start>username_0: While reading through the Crystal docs, I came across this line: ``` deq = Deque{2, 3} ``` So I think this calls the `Deque.new(array : Array(T))` constructor. However, I did not find any documentation about this syntax whatsoever. (**EDIT**: [The documentation can be found here](https://crystal-lang.org/docs/syntax_and_semantics/literals/array.html)) To test this way of calling constructors, I wrote the following test ``` class Foo(T) def initialize(@bar : Array(T)); end def to_s(io : IO) io << "Foo: " io << @bar end end puts Foo{1} # Line 10 ``` But, compiling it prints this error: ``` Error in line 10: wrong number of arguments for 'Foo(Int32).new' (given 0, expected 1) Overloads are: - Foo(T).new(bar : Array(T)) ``` Which I really don't understand at all. `Foo(Int32){1}` Raises the same error. Question is, what is this `Klass{1, 2, 3}` syntax? And how do you use it?<issue_comment>username_1: They are documented here: <https://crystal-lang.org/docs/syntax_and_semantics/literals/array.html> --- Array-like Type Literal ----------------------- Crystal supports an additional literal for arrays and array-like types. It consists of the name of the type followed by a list of elements enclosed in curly braces (`{}`) and individual elements separated by a comma (`,`). ``` Array{1, 2, 3} ``` This literal can be used with any type as long as it has an argless constructor and responds to `<<`. ``` IO::Memory{1, 2, 3} Set{1, 2, 3} ``` For a non-generic type like `IO::Memory`, this is equivalent to: ``` array_like = IO::Memory.new array_like << 1 array_like << 2 array_like << 3 ``` For a generic type like `Set`, the generic type `T` is inferred from the types of the elements in the same way as with the array literal. 
The above is equivalent to: ``` array_like = Set(typeof(1, 2, 3)).new array_like << 1 array_like << 2 array_like << 3 ``` The type arguments can be explicitly specified as part of the type name: ``` Set(Number) {1, 2, 3} ``` Upvotes: 4 [selected_answer]<issue_comment>username_2: To create such a type you need to define an argless constructor and `#<<` method: ``` class Foo(T) def initialize @store = Array(T).new end def <<(elem : T) @store << elem end end foo = Foo(Int32){1, 2, 3} p foo #=> # ``` Upvotes: 2
2018/03/14
699
2,635
<issue_start>username_0: Here, I am creating Angular 2 application with distributed database and micro-service architecture. In my application scenario, the application condition is that we are having normal user functionality as well as admin functionality like modify, delete organization attributes etc. Suppose, a normal user is trying to log in, he will get redirected to application functionality with its routing. But, if an admin user is trying to log into the application, he should have the choice of admin functionality or normal application functionality. For achieving this, I am thinking of two approaches: * Approach: Create two separate Projects (one for application functionality and other for admin functionality), so that admins can have a URL for both and he can access any one of them at will * Approach: Thinking to build Role based architecture using route guards in single application only, and activate admin functionality page whenever admin will be logged-in But, confusion regarding security of application. Can my second approach gives security that hackers could not hack my admin rights through that page as it will be part of the same application? Which one will be the more suitable approach?
2018/03/14
693
2,079
<issue_start>username_0: My table: ``` id | request | subject | date 1 | 5 | 1 | 576677 2 | 2 | 3 | 576698 3 | 5 | 1 | 576999 4 | 2 | 3 | 586999 5 | 2 | 7 | 596999 ``` Need to select unique records by two columns(request,subject). But if we have different pairs of request-subject(2-3, 2-7), this records should be excluded from resulted query. My query now is: ``` SELECT MAX(id), id, request, subject, date FROM `tbl` GROUP BY request, subject having count(request) > 1 order by MAX(id) desc ``` How to exclude record with id=4, id=5 from this query? Thanks!
2018/03/14
1,065
3,936
<issue_start>username_0: I am building a simple sign up form using ajax. When I create a data object and pass it to a PHP file, it shows the variables but doesn't show the values of those PHP variables. The code of the HTML form is ``` Name \* Institute Name \* Register » ``` The code of the js is ``` $(document).ready(function(){ var form=$("#myForm").serialize(); $("#button").click(function(){ $.ajax({ type:"POST", url: "mainlogic.php", data:form, success: function(result){ alert(result); } }); }); }) ``` The code of the PHP is (mainlogic.php) ``` if(isset($_POST)) { print_r($_POST);//////variables having null values if it is set $name=trim($_POST['name']); echo $name; } ```<issue_comment>username_1: Alpadev got the right answer, but here are a few leads that can help you in the future: **ajax** You should add the below `error` handling in your Ajax call, to display information if the request runs into a problem: ``` $.ajax({ […] error: function(jqXHR, textStatus, errorThrown){ // Error handling console.log(form); // where “form” is your variable console.log(jqXHR); console.log(textStatus); console.log(errorThrown); } }); ``` **$\_POST** $\_POST refers to all the variables that are passed by the page to the server. You need to use a variable name to access it in your php. See here for details about $\_POST: <http://php.net/manual/en/reserved.variables.post.php> `print_r($_POST);` should output the array of all the posted variables on your page. Make sure that: ⋅ The Ajax request ended correctly, ⋅ The `print_r` instruction is not conditioned by something else that evaluates to false, ⋅ The array is displayed in the page, not hidden by other elements. (You could take a look at the html source code instead of the output page to be sure about it.) Upvotes: 0 <issue_comment>username_2: You are serializing your form on document load. At this stage, the form isn't filled yet. You should serialize your form inside your button click event handler instead.
``` $(document).ready(function(){ $("#button").click(function(){ var form=$("#myForm").serialize(); $.ajax({ type:"POST", url: "mainlogic.php", data:form, success: function(result){ alert(result); } }); }); }) ``` Upvotes: 2 <issue_comment>username_3: > > > ``` > var form = $("#myForm").serialize(); > > ``` > > That is the line that collects the data from the form. You have it immediately after `$(document).ready(function() {` so you will collect the data **as soon as the DOM is ready**. This won't work because it is **before the user has had a chance to fill in the form**. You need to collect the data from the form **when the button is clicked**. Move that line inside the click event handler function. Upvotes: 0 <issue_comment>username_4: In this code you serialize blank form, just after document is ready: ``` $(document).ready(function(){ var form=$("#myForm").serialize(); $("#button").click(function(){ $.ajax({ type:"POST", url: "mainlogic.php", data:form, success: function(result){ alert(result); } }); }); }) ``` Valid click function should begins like: ``` $("#button").click(function(){ var form=$("#myForm").serialize(); $.ajax({... ``` It means - serialize form right after button clicked. Upvotes: 1 <issue_comment>username_5: The problem is that you calculate the form values at the beginning when loading the page when they have no value yet. You have to move the variable form calculation inside the button binding. ``` $(document).ready(function(){ $("#button").click(function(){ var form=$("#myForm").serialize(); $.ajax({ type:"POST", url: "mainlogic.php", data:form, success: function(result){ alert(result); } }); }); }) ``` Upvotes: 0
2018/03/14
387
1,363
<issue_start>username_0: Parsing data from a dataset where some images are not available, so I want to create a new column `exists` so I can loop through the image names (which are `.jpg`) and store False or True there. Getting a coercing-to-Unicode error ``` import pandas as pd from pandas import Series train = pd.read_csv('train.csv') In [16]: train['exists'] = Series(str(os.path.isfile('training_images/' + train['id'] + '.jpg'))) --------------------------------------------------------------------------- TypeError Traceback (most recent call last) in () ----> 1 train['exists'] = Series(str(os.path.isfile('training\_images/' + train['id'] + '.jpg'))) /usr/lib/python2.7/genericpath.pyc in isfile(path) 35 """Test whether a path is a regular file""" 36 try: ---> 37 st = os.stat(path) 38 except os.error: 39 return False TypeError: coercing to Unicode: need string or buffer, Series found ```<issue_comment>username_1: I recommend you use a vectorised solution, as below: ``` train['filename'] = 'training_images' + os.sep + train['id'] + '.jpg' train['exists'] = train['filename'].map(os.path.isfile) ``` The result will be a Boolean `pd.Series`. Upvotes: 2 <issue_comment>username_2: You can use apply to do this ``` train['exists'] = train['id'].apply(lambda x: os.path.isfile('training_images/' + x + '.jpg')) ``` Upvotes: 0
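To see the vectorised answer end to end, a self-contained sketch follows; the `cat`/`dog` ids and the temporary directory are invented stand-ins for the real `train.csv` and `training_images/` folder:

```python
import os
import tempfile

import pandas as pd

# Hypothetical stand-in for train.csv: two ids, only one with a backing image
train = pd.DataFrame({"id": ["cat", "dog"]})

with tempfile.TemporaryDirectory() as img_dir:
    # Create an image file for "cat" only
    open(os.path.join(img_dir, "cat.jpg"), "w").close()

    # Vectorised existence check, as in the accepted answer
    train["filename"] = img_dir + os.sep + train["id"] + ".jpg"
    train["exists"] = train["filename"].map(os.path.isfile)

print(train["exists"].tolist())  # [True, False]
```

The string-plus-Series concatenation broadcasts the directory prefix over the whole `id` column, so no Python-level loop is needed.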
2018/03/14
214
892
<issue_start>username_0: I would like to know whether we can receive a scheduled local notification from an iPhone device and receive that notification on an Apple Watch. I also want to update the status in my local database from that notification. Note: The app is local, it's not remote, and it has a local database in which we have to update the status of that notification. Thanks in advance.<issue_comment>username_1: Check the following table from Apple and [Link](https://developer.apple.com/library/content/documentation/General/Conceptual/WatchKitProgrammingGuide/BasicSupport.html) [![enter image description here](https://i.stack.imgur.com/BmhEg.png)](https://i.stack.imgur.com/BmhEg.png) Upvotes: 1 <issue_comment>username_2: It is not possible at the moment. Unfortunately, the only way to receive notifications on Apple Watch at the moment is to have the iPhone locked. Upvotes: 0
2018/03/14
359
1,394
<issue_start>username_0: While trying to implement gravity in my "Donkey Kong" game, I ran into a problem. The jump movement works perfectly fine when Mario is on a platform. However, when he's falling from one platform to another, the collision with the next platform doesn't get detected and so Mario goes through the platform. This is how my gravity logic works: The vertical velocity is checked every frame; if it's not equal to 0 then Mario is moved (+ mario-y vVelocity). As long as there is no collision with a platform, the vVelocity is changed to (- vVelocity gravity). And when there is a collision with a platform, the vVelocity gets reset to 0. The problem with this is that the mario-y changes too much every frame. For example, it can go from (100;100) to (100;90) when vVelocity = -10, and if there is a platform at (100;95), the collision is not detected. How can I fix this? Thanks
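The tunneling described in the question (the fall skips over the platform between frames) is usually fixed by testing whether the platform height was *crossed* during the move, rather than whether the new position exactly touches it. A language-agnostic sketch in Python, reduced to one dimension and a single platform for illustration (the function name and single-platform simplification are hypothetical, not from the question's code):

```python
def step_y(y, v_velocity, platform_y):
    """Advance the vertical position; snap to the platform if the move crosses it."""
    new_y = y + v_velocity
    # Falling: did we pass through the platform between the old and new position?
    if v_velocity < 0 and new_y <= platform_y <= y:
        return platform_y, 0  # land: clamp the position, reset the velocity
    return new_y, v_velocity

# The exact case from the question: y goes 100 -> 90, platform at 95
print(step_y(100, -10, 95))  # (95, 0): the collision is caught even though 95 is skipped
```

The same idea extends to 2D by also checking horizontal overlap with the platform, or by splitting a large move into several smaller sub-steps.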
2018/03/14
848
3,247
<issue_start>username_0: Running `docker-compose up -d` I got the following error: ``` Starting cr-redis ... Starting cr-rabbitmq ... Starting cr-rabbitmq ... error Starting cr-redis ... error Starting cr-mysql ... error ERROR: for cr-mysql Cannot start service mysql: container "ff36...1116": already exists ERROR: for rabbitmq Cannot start service rabbitmq: container "3b6c...0aba": already exists ERROR: for redis Cannot start service redis: container "e84f...df91": already exists ERROR: for mysql Cannot start service mysql: container "ff36...1116": already exists ERROR: Encountered errors while bringing up the project. docker-compose ps Name Command State Ports ---------------------------------------------------------------------------------------------------------------------------------- cr-mysql docker-entrypoint.sh mysqld Exit 255 cr-php-fpm /bin/sh -c /usr/sbin/php-f ... Exit 255 9000/tcp cr-rabbitmq docker-entrypoint.sh rabbi ... Exit 255 cr-redis docker-entrypoint.sh redis ... Exit 255 cr-webserver nginx -g daemon off; Exit 255 0.0.0.0:15672->15672/tcp, 0.0.0.0:80->80/tcp, 0.0.0.0:9003->9003/tcp ``` How can I start again the container without recreating it? I just don't want to lose the data in the DB. --------------- UPDATE -------------------- ``` $ docker-compose stop $ docker-compose start Starting redis ... error Starting rabbitmq ... error Starting mysql ... error Starting php-fpm ... error Starting webserver ... error ERROR: for rabbitmq Cannot start service rabbitmq: container "3b6c...0aba": already exists ERROR: for mysql Cannot start service mysql: container "ff36...1116": already exists ERROR: for redis Cannot start service redis: container "e84f...f91": already exists ERROR: No containers to start ```<issue_comment>username_1: You need to use `docker-compose start` instead: ``` $ docker-compose start --help Start existing containers. Usage: start [SERVICE...] 
``` Upvotes: 2 <issue_comment>username_2: Your case is probably related to a bug that will be fixed in the `18.03` release. Some workarounds are proposed here: * <https://github.com/docker/for-linux/issues/211> * <https://github.com/moby/moby/issues/36145> --- > > `docker-compose up` builds, (re)creates, starts, and attaches to containers for a service. > > > Since your `images` are built and the `containers` of your service have started, you can then use * `docker-compose stop` and * `docker-compose start` to start/stop your service. This is different from `docker-compose down` which: > > Stops containers and removes containers, networks, volumes, and images created by `up`. > > > --- Regarding the danger of losing your data if a container is removed, read about *persistent storage* and how to use *volumes*. Upvotes: 4 [selected_answer]
2018/03/14
709
2,141
<issue_start>username_0: I want to set up a cronjob for a PHP script in Ubuntu. I enter this command in the terminal ``` $ crontab -e ``` Then I choose the nano editor, which is recommended by Ubuntu. Then I enter the below line into that. Then I press control+C; it asks Y/N to save. I press Y and F2 to close. ``` * */2 * * * root php /var/www/html/script.php ``` Other things I've tried: ``` * */2 * * * /var/www/html/script.php * */2 * * * root /var/www/html/script.php ``` After that, I restart cron using the below command. ``` sudo /etc/init.d/cron restart ``` Then I check the crontab list using `crontab -l`; it says no cron job is set for the root user. I tried to directly create a crontab.txt file in the cron.hourly / cron.d directory with one of the above lines. I tried numerous forums and all say `crontab -e` then enter, or create a crontab file inside the cron directory. Nothing is helping me. I am scratching my head. What is the correct way to create a cronjob for a PHP script in Ubuntu 16.04 & PHP version 7.0?<issue_comment>username_1: Try like this to set the crontab using the root user, ``` sudo crontab -e ``` Do your changes via *nano* or *vim*. Finally save and quit ``` * */2 * * * /var/www/html/script.php * */2 * * * root /var/www/html/script.php ``` **No need** to restart again using this `sudo /etc/init.d/cron restart` Upvotes: 3 [selected_answer]<issue_comment>username_2: Try this one (as root user): 1. `sudo crontab -e` ``` * */2 * * * php -f /var/www/html/script.php > /dev/null 2>&1 ``` OR ``` * */2 * * * cd /var/www/html/; php -f script.php > /dev/null 2>&1 ``` For crontabs running as the www-data user, use the command `sudo crontab -u www-data -e` for editing; after saving, the cron tasks will be installed automatically. OR You can create **tmp\_crontask\_file** with content `* */2 * * * php -f /var/www/html/script.php > /dev/null 2>&1` AND then use `sudo crontab tmp_crontask_file` to install cron(s) from the file (as root) or `sudo crontab -u www-data tmp_crontask_file` (as the www-data user).
--- Edit 1: **WARNING! If you install cron from file (last option) content of file overwrite existing crontab.** Upvotes: 1
2018/03/14
250
511
<issue_start>username_0: How do you convert the following date Mar 7 2017 1:26:46:886AM to 2017-03-07 01:26:00 ``` select DATE_FORMAT(STR_TO_DATE('Mar 7 2017 1:26:46:886AM', '%b-%d-%Y'), '%Y-%m-%d'); ``` I keep getting null<issue_comment>username_1: This is the answer ``` SELECT STR_TO_DATE("Mar 7 2017 1:26:46:886AM", "%M %d %Y %H:%i:%S") ``` Upvotes: 2 [selected_answer]<issue_comment>username_2: you could use ``` select STR_TO_DATE('Mar 7 2017 1:26:46:886AM', '%b %d %Y %h:%i:%s%f' ) ``` Upvotes: 0
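For anyone cross-checking the format string outside MySQL, the same value parses with Python's `strptime`; this is a sketch of the equivalent conversion, where `%I` (12-hour clock) pairs with the trailing `AM` marker and `%f` absorbs the millisecond field:

```python
from datetime import datetime

raw = "Mar 7 2017 1:26:46:886AM"

# %b abbreviated month, %I 12-hour clock, %f fractional seconds, %p AM/PM
parsed = datetime.strptime(raw, "%b %d %Y %I:%M:%S:%f%p")

print(parsed.strftime("%Y-%m-%d %H:%M:%S"))  # 2017-03-07 01:26:46
```

Note that `strptime` accepts the non-zero-padded day and hour here, and `%f` right-pads `886` to 886000 microseconds.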
2018/03/14
2,650
9,539
<issue_start>username_0: I've got a simple project in Gradle 4.6 and would like to make an executable JAR of it. I've tried the `shadow`, `gradle-fatjar-plugin`, `gradle-one-jar`, and `spring-boot-gradle-plugin` plugins but none of them adds my dependencies declared as `implementation` (I don't have any `compile` ones). It works with `compile`, e.g. for the `gradle-one-jar` plugin, but I would like to have `implementation` dependencies.<issue_comment>username_1: You can use the following code. ``` jar { manifest { attributes( 'Main-Class': 'com.package.YourClass' ) } from { configurations.runtimeClasspath.collect { it.isDirectory() ? it : zipTree(it) } } } ``` Be sure to replace `com.package.YourClass` with the fully qualified class name containing `static void main( String args[] )`. This will pack the runtime dependencies. Check the [docs](https://docs.gradle.org/current/userguide/java_library_plugin.html#sec:java_library_configurations_graph) if you need more info. Upvotes: 8 [selected_answer]<issue_comment>username_2: The same task can be achieved using [Gradle Kotlin DSL](https://docs.gradle.org/current/userguide/kotlin_dsl.html) in a similar way: ```kotlin val jar by tasks.getting(Jar::class) { manifest { attributes["Main-Class"] = "com.package.YourClass" } from(configurations .runtime // .get() // uncomment this on Gradle 6+ // .files .map { if (it.isDirectory) it else zipTree(it) }) } ``` Upvotes: 4 <issue_comment>username_3: Based on the accepted answer, I needed to add one more line of code: ``` task fatJar(type: Jar) { manifest { attributes 'Main-Class': 'com.yourpackage.Main' } archiveClassifier = "all" from { configurations.compile.collect { it.isDirectory() ? it : zipTree(it) } configurations.runtimeClasspath.collect { it.isDirectory() ? it : zipTree(it) } } with jar } ``` Without this additional line, it omitted my source files and only added the dependencies: ``` configurations.compile.collect { it.isDirectory() ?
it : zipTree(it) } ``` For newer gradle (7+), you may see this error: ``` Execution failed for task ':fatJar'. > Entry [some entry here] is a duplicate but no duplicate handling strategy has been set. Please refer to https://docs.gradle.org/7.1/dsl/org.gradle.api.tasks.Copy.html#org.gradle.api.tasks.Copy:duplicatesStrategy for details. ``` If this happens add a `duplicatesStrategy` such as `duplicatesStrategy "exclude"` to the `fatJar` task. And likewise, for Gradle 7+, you have to just remove the `configuration.compile.collect` line because it is no longer a valid configuration in this version of gradle. Upvotes: 5 <issue_comment>username_4: `from { configurations.runtimeClasspath.collect { it.isDirectory() ? it : zipTree(it) } }` This line is essential to me. Upvotes: 2 <issue_comment>username_5: Kotlin 1.3.72 & JVM plugin, Gradle 6.5.1 Syntax is changing quickly in all these platforms ``` tasks { compileKotlin { kotlinOptions.jvmTarget = "1.8" } compileTestKotlin { kotlinOptions.jvmTarget = "1.8" } val main = sourceSets.main.get() //TODO register("buildFatJar") { group = "app-backend" dependsOn(build) // shouldRunAfter(parent!!.tasks["prepCopyJsBundleToKtor"]) -> This is for incorporating KotlinJS gradle subproject resulting js file. 
manifest { attributes["Main-Class"] = "com.app.app.BackendAppKt" } from(configurations.compileClasspath.get().files.map { if (it.isDirectory) it else zipTree(it) }) with(jar.get() as CopySpec) archiveBaseName.set("${project.name}-fat") } } ``` Upvotes: 1 <issue_comment>username_6: previous answers are a little outdated nowadays, see here for something working with gradle-7.4: [How to create a fat JAR with Gradle Kotlin script?](https://stackoverflow.com/questions/41794914/how-to-create-a-fat-jar-with-gradle-kotlin-script) ``` tasks.jar { manifest.attributes["Main-Class"] = "com.example.MyMainClass" val dependencies = configurations .runtimeClasspath .get() .map(::zipTree) // OR .map { zipTree(it) } from(dependencies) duplicatesStrategy = DuplicatesStrategy.EXCLUDE } ``` Upvotes: 3 <issue_comment>username_7: Here I provide solutions for [Kotlin DSL](https://docs.gradle.org/current/userguide/kotlin_dsl.html) (build.gradle.kts). Note that the first 3 methods modify the existing `Jar` task of Gradle. ### Method 1: Placing library files beside the result JAR This method does not need `application` or any other plugins. ```kotlin tasks.jar { manifest.attributes["Main-Class"] = "com.example.MyMainClass" manifest.attributes["Class-Path"] = configurations .runtimeClasspath .get() .joinToString(separator = " ") { file -> "libs/${file.name}" } } ``` Note that Java requires us to use relative URLs for the `Class-Path` attribute. So, we cannot use the absolute path of Gradle dependencies (which is also prone to being changed and not available on other systems). If you want to use absolute paths, maybe [this workaround](https://stackoverflow.com/q/47961942/8583692) will work. Create the JAR with the following command: ```sh ./gradlew jar ``` The result JAR will be created in *build/libs/* directory by default. After creating your JAR, copy your library JARs in *libs/* sub-directory of where you put your result JAR. 
Make sure your library JAR files do not contain space in their file name (their file name should match the one specified by `${file.name}` variable above in the task). ### Method 2: Embedding the libraries in the result JAR (fat or uber JAR) This method too does not need any Gradle plugin. ```kotlin tasks.jar { manifest.attributes["Main-Class"] = "com.example.MyMainClass" val dependencies = configurations .runtimeClasspath .get() .map(::zipTree) // OR .map { zipTree(it) } from(dependencies) duplicatesStrategy = DuplicatesStrategy.EXCLUDE } ``` Creating the JAR is exactly the same as the previous method. ### Method 3: Using the [Shadow plugin](https://github.com/johnrengelman/shadow) (to create a fat or uber JAR) ```kotlin plugins { id("com.github.johnrengelman.shadow") version "6.0.0" } // Shadow task depends on Jar task, so these configs are reflected for Shadow as well tasks.jar { manifest.attributes["Main-Class"] = "org.example.MainKt" } ``` Create the JAR with this command: ```sh ./gradlew shadowJar ``` See [Shadow documentations](https://imperceptiblethoughts.com/shadow/configuration/#configuring-the-jar-manifest) for more information about configuring the plugin. ### Method 4: Creating a new task (instead of modifying the `Jar` task) ```kotlin tasks.create("MyFatJar", Jar::class) { group = "my tasks" // OR, for example, "build" description = "Creates a self-contained fat JAR of the application that can be run." 
manifest.attributes["Main-Class"] = "com.example.MyMainClass" duplicatesStrategy = DuplicatesStrategy.EXCLUDE val dependencies = configurations .runtimeClasspath .get() .map(::zipTree) from(dependencies) with(tasks.jar.get()) } ``` Running the created JAR ======================= ```sh java -jar my-artifact.jar ``` The above solutions were tested with: * Java 17 * Gradle 7.1 (which uses Kotlin 1.4.31 for *.kts* build scripts) See the official [Gradle documentation for creating uber (fat) JARs](https://docs.gradle.org/current/userguide/working_with_files.html#sec:creating_uber_jar_example). For more information about manifests, see [Oracle Java Documentation: Working with Manifest files](https://docs.oracle.com/javase/tutorial/deployment/jar/manifestindex.html). For difference between `tasks.create()` and `tasks.register()` see [this post](https://stackoverflow.com/a/53654412/8583692). Note that your [resource files will be included in the JAR file automatically](https://stackoverflow.com/q/24724383/8583692) (assuming they were placed in */src/main/resources/* directory or any custom directory set as resources root in the build file). 
To access a resource file in your application, use this code (note the `/` at the start of names): * Kotlin ```kotlin val vegetables = MyClass::class.java.getResource("/vegetables.txt").readText() // Alternative ways: // val vegetables = object{}.javaClass.getResource("/vegetables.txt").readText() // val vegetables = MyClass::class.java.getResourceAsStream("/vegetables.txt").reader().readText() // val vegetables = object{}.javaClass.getResourceAsStream("/vegetables.txt").reader().readText() ``` * Java ```java var stream = MyClass.class.getResource("/vegetables.txt").openStream(); // OR var stream = MyClass.class.getResourceAsStream("/vegetables.txt"); var reader = new BufferedReader(new InputStreamReader(stream)); var vegetables = reader.lines().collect(Collectors.joining("\n")); ``` Upvotes: 3 <issue_comment>username_8: ``` mainClassName = 'Main' sourceSets { main { java { srcDirs 'src/main/java', 'src/main/resources' } } } jar{ manifest { attributes( "Main-Class": "$mainClassName", ) } from { configurations.runtimeClasspath.collect { it.isDirectory() ? it : zipTree(it) } } exclude 'META-INF/*.RSA', 'META-INF/*.SF','META-INF/*.DSA' duplicatesStrategy = DuplicatesStrategy.EXCLUDE dependsOn ('dependencies') } ``` Upvotes: 1
2018/03/14
1,045
3,900
<issue_start>username_0: While writing unit tests, I stumbled upon the following suggestion from ReSharper. > > Value assigned is not used by any execution path. > > > in the following code snippet. ``` [Test] [TestCase((int)OddsRoundingModes.Floor)] public void GetBaseOddsRoundingMode_WithCorrectRoundingMode_ShouldReturnCorrectRoundingMode(int oddsRoundingMode) { // Arrange var oddsRoundingModeStr = oddsRoundingMode.ToString(); // <-- suggestion here var mock = new Mock(); var oddsRoundingConfiguration = new OddsRoundingConfiguration(mock.Object); mock.Setup(h => h.TryGetConstant(It.IsAny(), It.IsAny(), out oddsRoundingModeStr)) .Returns(true); // Act var roundingMode = oddsRoundingConfiguration.GetBaseOddsRoundingMode(0); // Assert Assert.AreNotEqual(roundingMode, OddsRoundingModes.None); } ``` But when I change this to not be initialized at declaration, the mock is not properly set up and the test fails, because `oddsRoundingModeStr` is not initialized and the mock returns it as null. Why can't ReSharper see this? EDIT: ``` public bool TryGetConstant(string name, int siteId, out string value) { value = RetrieveConstant(_constantsModel, name, siteId); return value != null; } private string RetrieveConstant(IConstantsModel model, string constName, int siteId) where T : IConstant, new() { if (model.Constants.TryGetValue(constName, out List values)) { var constant = values.FirstOrDefault(v => v.Name == constName && v.SiteIds.Contains(siteId)); if (constant != null) { return constant.ConstantValue; } } return null; } ```<issue_comment>username_1: `Setup` accepts an expression tree - and Moq analyzes that expression tree to create a mock. In this case you are basically saying that Moq should create an implementation of `IConstantsModel` which accepts any string, any int, returns true and returns the value you provide in `oddsRoundingModeStr` as an `out` parameter.
So when analyzing this expression tree, Moq will extract the actual value of `oddsRoundingModeStr` (which is captured and stored in a field of a compiler-generated class) and indeed will use it. ReSharper is just unable to realize this, so it provides a warning as usual. Small example of how an `out` variable's value can be extracted from an expression tree: ``` class Program { static void Main(string[] args) { int result = 2; // gives warning from your question var back = ExtractOutValue(s => int.TryParse(s, out result)); Debug.Assert(back == result); } static int ExtractOutValue(Expression> exp) { var call = (MethodCallExpression)exp.Body; var arg = (MemberExpression) call.Arguments[1]; return (int) ((FieldInfo)arg.Member).GetValue(((ConstantExpression)arg.Expression).Value); } } ``` Upvotes: 2 <issue_comment>username_2: Following normal C# semantics, the value you initialize that variable to is irrelevant, since `out` can't read the data before having assigned a new value to it. Thus the ReSharper notice is appropriate. I see several ways non-standard semantics could be achieved using this code: 1. `out` is a decorated `ref` at the CLR level. So low-level code could treat it as equivalent to `ref`. ``` void Main() { Ref r = R; Out o = (Out)Delegate.CreateDelegate(typeof(Out), null, r.Method); int i = 2; o(out i); i.Dump(); } delegate void Out(out int x); delegate void Ref(ref int x); void R(ref int x) { x++; } ``` 2. `Setup` takes a delegate and then uses private reflection on the closure object. 3. `Setup` takes an `Expression`, i.e. a syntax tree of the lambda, and interprets the expression in non-standard ways. In this context, the lambda expression is not C# code intended to be executed, but essentially a [DSL](https://en.wikipedia.org/wiki/Domain-specific_language) describing how to set up the mock. Option 3 seems the most likely. Upvotes: 3 [selected_answer]
2018/03/14
602
2,042
<issue_start>username_0: I am using knockout js here. I have an HTML table with 4 columns. I have a button to add a row to the table and a remove button against each row to delete it. The HTML table is as below. ``` | Input | First Name | Last Name | Address | | --- | --- | --- | --- | | One Two Three | | | | | ``` My knockout code is: ``` (function () { var ViewModel = function () { var self = this; //Empty Row self.items = ko.observableArray([]); self.addRow = function () { self.items.push(new Item()); }; self.removeRow = function (data) { self.items.remove(data); }; } var Item = function (fname, lname, address) { var self = this; self.firstName = ko.observable(fname); self.lastName = ko.observable(lname); self.address = ko.observable(address); }; vm = new ViewModel() ko.applyBindings(vm); })(); ``` When I click add row, it adds the first row but gives me a console error: > > knockout.js:73 Uncaught ReferenceError: Unable to process binding "click: >function (){return removeRow }" > Message: removeRow is not defined > > > When I click add row again it gives me another console error: > > Uncaught Error: You cannot apply bindings multiple times to the same element. > > > And when I click removeRow nothing happens. When I comment out the code for removeRow, I am able to add a new row. Not sure where I am going wrong. Here is my jsfiddle: ``` https://jsfiddle.net/aman1981/nz2dtud6/2/ ```<issue_comment>username_1: When using the `foreach` data binding, the context changes to the context of its children. To access the context of the parent, you need to add `$parent` to access `removeRow` ``` | ``` Upvotes: 3 [selected_answer]<issue_comment>username_2: Since your markup defines a new *scope* by using a `foreach: items` binding, you need to use `$parent.removeRow` to refer to the method. ``` ``` See [BindingContext](http://knockoutjs.com/documentation/binding-context.html) Upvotes: 1
2018/03/14
634
2,361
<issue_start>username_0: I'm trying to create a local user in an Azure AD B2C directory which can be used for authentication immediately after creation. ``` Connect-AzureAD -TenantId $targetB2cTenant $passwordProfile = New-Object -TypeName Microsoft.Open.AzureAD.Model.PasswordProfile $passwordProfile.Password = "<PASSWORD>" $userName = "<EMAIL>" $signInNames = @( (New-Object ` Microsoft.Open.AzureAD.Model.SignInName ` -Property @{Type = "userName"; Value = $userName}) ) $newUser = New-AzureADUser -AccountEnabled $True -DisplayName "testpowershellusercreation" -PasswordProfile $passwordProfile -SignInNames $signInNames -CreationType "LocalAccount" Disconnect-AzureAD ``` From reading the documentation I need to specify the CreationType parameter as "LocalAccount": <https://learn.microsoft.com/en-us/powershell/module/azuread/new-azureaduser?view=azureadps-2.0> [Creating a B2C user with MFA that can immediately login](https://stackoverflow.com/q/49034682) However when I run the powershell code I receive the following error: ``` New-AzureADUser : Error occurred while executing NewUser Code: Request_BadRequest Message: One or more properties contains invalid values. ``` This error message is not present when I remove the -CreationType parameter. What is the correct way to create a local account in a B2C directory using Powershell?<issue_comment>username_1: A sign-in name of **type** "userName" can't contain the '@' character in the **value** property. i.e. You can't set it to an email address. You might want to also set the following parameters for the new user: ``` $passwordProfile = New-Object -TypeName Microsoft.Open.AzureAD.Model.PasswordProfile $passwordProfile.ForceChangePasswordNextLogin = $False $passwordProfile.Password = "" $newUser = New-AzureADUser ... 
-PasswordPolicies "DisablePasswordExpiration" ``` Upvotes: 3 [selected_answer]<issue_comment>username_2: I think you could also change the type of sign-in name from "userName" to "email", to work around this issue and allow users to continue using their foreign domain email addresses as login, if required. ``` $signInNames = ( (New-Object ` Microsoft.Open.AzureAD.Model.SignInName ` -Property @{Type = "email"; Value = "<EMAIL>"}) ) ``` Upvotes: 0
2018/03/14
588
2,547
<issue_start>username_0: This is my array whose name is **ClassSectionMapObjArray** which contains **StudentSectionObjectArray** and this contains **StudentSectionObject**. In **StudentSectionObject** there is a **studentObj** Array from which i have to fetch **studentName** alphabetically. ``` Array ( [0] => GetClassSectionMap Object ( [studentSectionObject] => Array ( [0] => StudentSection Object ( [studentId] => 1 [studentObj] => Array ( [0] => Student Object ( [studentName] => <NAME> ) ) ) [1] => StudentSection Object ( [studentId] => 2 [studentObj] => Array ( [0] => Student Object ( [studentName] => <NAME> ) ) ) ) ) ) ``` I have to store data in Alphabetical order by **studentName.** I'm a learner, new to php.. Please help. I have also use **usort()** too but it doesn't work. ``` usort($class_section_map_object_array[0]->studentSectionObject,"cmp"); function cmp($a,$b) { return strcmp($a->studentObject->studentName,$b->studentObject->studentName); } ``` But This gives me result in **descending order** according to **studentId**
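For comparison, the sort the question is after — reach into the first element of the nested student array and order by name, case-insensitively — can be sketched in Python. This only illustrates the comparison logic using the field names from the question's dump (`studentObj`, `studentName`); it is not the PHP fix itself:

```python
students = [
    {"studentId": 2, "studentObj": [{"studentName": "Zara"}]},
    {"studentId": 1, "studentObj": [{"studentName": "alice"}]},
]

# Key: the name inside the first entry of studentObj, lowered so that the
# ordering is alphabetical rather than case-sensitive.
students.sort(key=lambda s: s["studentObj"][0]["studentName"].lower())
names = [s["studentObj"][0]["studentName"] for s in students]
```

The analogous PHP `cmp` would index into `$a->studentObj[0]->studentName` rather than a `studentObject` property.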
2018/03/14
1,398
5,142
<issue_start>username_0: Floating point expressions can sometimes be contracted on the processing hardware, e.g. using fused multiply-and-add as a single hardware operation. Apparently, using these isn't merely an implementation detail but governed by programming language specification. Specifically, the C89 standard does not allow such contractions, while in C99 they are allowed provided that some macro is defined. See details in [this SO answer](https://stackoverflow.com/a/43357638/1593077). But what about C++? Are floating-point contractions not allowed? Allowed in some standards? Allowed universally?<issue_comment>username_1: Yes, it is allowed. For example, in the Visual Studio compiler, `fp_contract` is on by default. This tells the compiler to use floating-point contraction instructions where possible. Set `fp_contract` to `off` to preserve individual floating-point instructions. ``` // pragma_directive_fp_contract.cpp // on x86 and x64 compile with: /O2 /fp:fast /arch:AVX2 // other platforms compile with: /O2 #include <stdio.h> // remove the following line to enable FP contractions #pragma fp_contract (off) int main() { double z, b, t; for (int i = 0; i < 10; i++) { b = i * 5.5; t = i * 56.025; z = t * i + b; printf("out = %.15e\n", z); } } ``` Detailed information about [Specify Floating-Point Behavior](https://learn.microsoft.com/en-us/cpp/build/reference/fp-specify-floating-point-behavior). [Using the GNU Compiler Collection (GCC):](https://gcc.gnu.org/onlinedocs/gcc/Floating-point-implementation.html) The default state for the `FP_CONTRACT` pragma (C99 and C11 7.12.2). This pragma is not implemented. Expressions are currently only contracted if `-ffp-contract=fast`, `-funsafe-math-optimizations` or `-ffast-math` are used. Upvotes: 1 <issue_comment>username_2: Summary ======= Contractions are permitted, but a facility is provided for the user to disable them.
Unclear language in the standard clouds the issue of whether disabling them will provide desired results. I investigated this in the official C++ 2003 standard and the 2017 n4659 draft. C++ citations are from 2003 unless otherwise indicated. Extra Precision and Range ========================= The text “contract” does not appear in either document. However, clause 5 Expressions [expr] paragraph 10 (same text in 2017’s 8 [expr] 13) says: > > The values of the floating operands and the results of floating expressions may be represented in greater precision and range than that required by the type; the types are not changed thereby. > > > I would prefer that this statement explicitly stated whether this extra precision and range could be used freely (the implementation may use it in some expressions, including subexpressions, while not using it in others) or had to be used uniformly (if the implementation uses extra precision, it must use it in every floating-point expression) or according to some other rules (such as it may use one precision for `float`, another for `double`). If we interpret it permissively, it means that, in `a*b+c`, `a*b` could be evaluated with infinite precision and range, and then the addition could be evaluated with whatever precision and range is normal for the implementation. This is mathematically equivalent to contraction, as it has the same result as evaluating `a*b+c` with a fused multiply-add instruction. Hence, with this interpretation, implementations may contract expressions. Contractions Inherited From C ============================= 17.4.1.2 [lib.headers] 3 (similar text in 2017’s [headers] clause, paragraph 3) says: > > The facilities of the Standard C Library are provided in 18 additional headers, as shown in Table 12… > > > Table 12 includes `<cmath>`, and paragraph 4 indicates this corresponds to `math.h`.
Technically, the C++ 2003 standard refers to the C 1990 standard, but I do not have it in electronic form and do not know where my paper copy is, so I will use the C 2011 standard (but unofficial draft N1570), which the C++ 2017 draft refers to. The C standard defines, in `<math.h>`, a pragma `FP_CONTRACT`: ``` #pragma STDC FP_CONTRACT on-off-switch ``` where *on-off-switch* is `on` to allow contraction of expressions or `off` to disallow them. It also says the default state for the pragma is implementation-defined. The C++ standard does not define “facility” or “facilities.” A dictionary definition of “facility” is “a place, amenity, or piece of equipment provided for a particular purpose” (*New Oxford American Dictionary*, Apple Dictionary application version 2.2.2 (203)). An amenity is “a desirable or useful feature or facility of a building or place.” A pragma is a useful feature provided for a particular purpose, so it seems to be a facility, so it is included in `<cmath>`. Hence, using this pragma should permit or disallow contractions. Conclusions =========== * Contractions are permitted when `FP_CONTRACT` is on, and it may be on by default. * The text of 8 [expr] 13 can be interpreted to effectively allow contractions even if `FP_CONTRACT` is off but is insufficiently clear for definitive interpretation. Upvotes: 4 [selected_answer]
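The numerical effect of contraction discussed in this thread can be demonstrated outside C++ as well, since IEEE-754 double arithmetic behaves the same everywhere. In the Python sketch below, the fused result is emulated with exact rational arithmetic (`fractions.Fraction`), which applies a single rounding to the whole `a*b+c`, just as a hardware FMA would; the separately rounded product loses the low-order term:

```python
from fractions import Fraction

a = 1.0 + 2.0**-27
b = 1.0 + 2.0**-27
c = -(1.0 + 2.0**-26)

# Two roundings: a*b = 1 + 2**-26 + 2**-54 exactly, but the 2**-54 term is
# rounded away in the product, so the subsequent addition yields exactly 0.
separate = a * b + c

# One rounding over the exact value, as a fused multiply-add would perform;
# the low-order 2**-54 term survives.
fused = float(Fraction(a) * Fraction(b) + Fraction(c))
```

This is why contracted and non-contracted builds of the same expression can produce different (both standard-conforming) results.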
2018/03/14
1,321
4,838
<issue_start>username_0: A very nice tool to check for dead links (e.g. links pointing to 404 errors) is [`wget --spider`](https://www.digitalocean.com/community/tutorials/how-to-find-broken-links-on-your-website-using-wget-on-debian-7). However, I have a slightly different use-case where I generate a static website, and want to check for broken links before uploading. More precisely, I want to check both: * Relative links like `[file.pdf](some/file.pdf)` * Absolute links, most likely to external sites like `[example](http://example.com)`. I tried `wget --spider --force-html -i file-to-check.html`, which reads the local file, considers it as HTML and follows each link. Unfortunately, it can't deal with relative links within the local HTML file (errors out with `Cannot resolve incomplete link some/file.pdf`). I tried using `file://` but `wget` does not support it. Currently, I have a hack based on running a local webserver through `python3 -m http.server` and checking the local files through HTTP: ``` python3 -m http.server & pid=$! sleep .5 error=0 wget --spider -nd -nv -H -r -l 1 http://localhost:8000/index.html || error=$? kill $pid wait $pid exit $error ``` I'm not really happy with this for several reasons: * I need this `sleep .5` to wait for the webserver to be ready. Without it, the script fails, but I can't guarantee that 0.5 seconds will be enough. I'd prefer having a way to start the `wget` command when the server is ready. * Conversely, this `kill $pid` feels ugly. Ideally, `python3 -m http.server` would have an option to run a command when the server is ready and would shut itself down after the command is completed. That sounds doable by writing a bit of Python, but I was wondering whether a cleaner solution exists. Did I miss anything? Is there a better solution? I'm mentioning `wget` in my question because it does almost what I want, but using `wget` is not a requirement for me (nor is `python -m http.server`).
I just need to have something easy to run and automate on Linux.<issue_comment>username_1: So I think you are heading in the right direction. I would use `wget` and `python` as they are two readily available options on many systems. And the good part is that it gets the job done for you. Now what you want is to listen for `Serving HTTP on 0.0.0.0` from the `stdout` of that process. So I would start the process using something like below ``` python3 -u -m http.server > ./myserver.log & ``` Note the `-u` I have used here for unbuffered output; this is really important. Now next is waiting for this text to appear in `myserver.log` ``` timeout 10 awk '/Serving HTTP on 0.0.0.0/{print; exit}' <(tail -f ./myserver.log) ``` So `10` seconds is your maximum wait time here. And the rest is self-explanatory. Next, about your `kill $pid`. I don't think it is a problem, but if you want it to be more like the way a user does it, then I would change it to ``` kill -s SIGINT $pid ``` This will be equivalent to you pressing `CTRL+C` after launching the program. Also, I would handle `SIGINT` in my bash script as well, using something like below <https://unix.stackexchange.com/questions/313644/execute-command-or-function-when-sigint-or-sigterm-is-send-to-the-parent-script/313648> The above basically adds the below to the top of the bash script to handle you killing the script using `CTRL+C` or an external kill signal ``` #!/bin/bash exit_script() { echo "Printing something special!" echo "Maybe executing other commands!" trap - SIGINT SIGTERM # clear the trap kill -- -$$ # Sends SIGTERM to child/sub processes } trap exit_script SIGINT SIGTERM ``` Upvotes: 5 [selected_answer]<issue_comment>username_2: username_1's answer is correct, and following the advice given there one can write a clean and short shell script (relying on Python and awk). Another solution is to write the script completely in Python, giving a slightly more verbose but arguably cleaner script.
The server can be launched in a thread, then the command to check the website is executed, and finally the server is shut down. We don't need to parse the textual output nor to send a signal to an external process anymore. The key parts of the script are therefore: ``` def start_server(port, server_class=HTTPServer, handler_class=SimpleHTTPRequestHandler): server_address = ('', port) httpd = server_class(server_address, handler_class) thread = threading.Thread(target=httpd.serve_forever) thread.start() return httpd def main(cmd, port): httpd = start_server(port) status = subprocess.call(cmd) httpd.shutdown() sys.exit(status) ``` I wrote a slightly more advanced script (with a bit of command-line option parsing on top of this) and published it as: <https://gitlab.com/moy/check-links> Upvotes: 0
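Putting the pieces of the script above together, a self-contained sketch of the server-wrapping part looks like this. The link-checking command itself is whatever you pass in (`wget --spider ...` in the question's setup); binding to port 0 is an addition of mine that lets the OS pick a free port:

```python
import subprocess
import sys
import threading
from http.server import HTTPServer, SimpleHTTPRequestHandler

def start_server(port=0):
    # Port 0 asks the OS for a free port, avoiding clashes with a fixed 8000.
    httpd = HTTPServer(("", port), SimpleHTTPRequestHandler)
    threading.Thread(target=httpd.serve_forever, daemon=True).start()
    return httpd

def run_with_server(cmd):
    # serve_forever runs in a background thread, so the server is ready as
    # soon as the constructor has bound the socket -- no sleep needed.
    httpd = start_server()
    try:
        return subprocess.call(cmd), httpd.server_address[1]
    finally:
        httpd.shutdown()
        httpd.server_close()

# Stand-in for the real link checker; the exit status is propagated back.
status, port = run_with_server([sys.executable, "-c", "print('checking...')"])
```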
2018/03/14
526
1,599
<issue_start>username_0: The List type is created with ``` data [] a = [] | a : [a] ``` But I can't create my own type with the same structure: ``` data %% a = %% | a : %a% ``` > > error: parse error on input `%%' > > ><issue_comment>username_1: > > The List type is created with > > > > ``` > data [] a = [] | a : [a] > > ``` > > No, it isn't. If you look at [the source](https://hackage.haskell.org/package/base-4.10.1.0/docs/src/GHC.Base.html) (for GHC; other compilers may do it differently), it says ``` data [] a = MkNil ``` but ~~this is just a marker for the compiler~~ (not even this, see chepner's comment). This is because ``` data [] a = [] | a : [a] ``` isn't legal syntax in Haskell. What is true is that list works *as if* it were defined this way: it's entirely equivalent to ``` data List a = Nil | Cons a (List a) ``` except for the names. Upvotes: 3 [selected_answer]<issue_comment>username_2: Type and constructor names must either be alphanumeric names, starting with uppercase ``` data MyType a b = K a | L b a ``` or be symbolic infix operators, starting with `:` ``` data a :** b = K a | b :+-& a ``` Both types above are perfectly isomorphic: we only replaced `MyType` with the infix `:**` and `L` with the infix `:+-&`. Also note that infixes must be binary, i.e. take two arguments. Alphanumeric names do not have such constraint (e.g. `K` above only takes one argument). List syntax `[]` is specially handled by the compiler, similarly to `(,),(,,),...` for tuples. Only `:` follows the general rule (perhaps incidentally). Upvotes: 1
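The equivalence noted above — `[]` and `:` being just special names for a two-constructor recursive type — can be mimicked in other languages too. A Python sketch, where an illustrative `Cons` class plays the role of `Cons` and `None` stands in for `Nil`:

```python
from dataclasses import dataclass
from typing import Any, Optional

@dataclass
class Cons:
    # head/tail mirror `Cons a (List a)`; None stands in for Nil.
    head: Any
    tail: Optional["Cons"]

def to_builtin_list(xs):
    # Walk the cons cells, exactly like pattern-matching on Nil/Cons.
    out = []
    while xs is not None:
        out.append(xs.head)
        xs = xs.tail
    return out

values = to_builtin_list(Cons(1, Cons(2, Cons(3, None))))
```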
2018/03/14
994
3,079
<issue_start>username_0: I'm looking for a solution to transform a list in my values.yaml into a comma-separated list. values.yaml ``` app: logfiletoexclude: - "/var/log/containers/kube*" - "/var/log/containers/tiller*" ``` \_helpers.tpl: ``` {{- define "pathtoexclude" -}} {{- join "," .Values.app.logfiletoexclude }} {{- end -}} ``` configmap: ``` @type tail path /var/log/containers/*.log exclude_path [{{ template "pathtoexclude" . }}] ... ... ``` The problem is there are missing quotes in my result ``` exclude_path [/var/log/containers/kube*,/var/log/containers/tiller*] ``` How can I fix it to be able to have: ``` exclude_path ["/var/log/containers/kube*","/var/log/containers/tiller*"] ``` I've tried with: ``` {{- join "," .Values.app.logfiletoexclude | quote}} ``` but this gives me: ``` exclude_path ["/var/log/containers/kube*,/var/log/containers/tiller*"] ``` Thanks
So if you can assume there will be no commas in your values, you could save yourself the headache of double quoting with `'"..."'` and just do: In your `values.yaml`: ``` app: logfiletoexclude: - /var/log/containers/kube* - /var/log/containers/tiller* ``` In your `_helpers.tpl`: ``` {{- define "pathtoexclude" -}} {{- join "," .Values.app.logfiletoexclude }} {{- end -}} ``` In your configmap: ``` @type tail path /var/log/containers/*.log exclude_path {{ template "pathtoexclude" . }} ... ``` Upvotes: 1 <issue_comment>username_4: Here's how I solved it: in `values.yaml` (quotes don't matter): ```yaml elements: - first element - "second element" - 'third element' ``` in `_helpers.tpl`: ```golang {{- define "mychart.commaJoinedQuotedList" -}} {{- $list := list }} {{- range .Values.elements }} {{- $list = append $list (printf "\"%s\"" .) }} {{- end }} {{- join ", " $list }} {{- end }} ``` in `templates/mytemplate.yaml`: ```golang elements: {{ include "mychart.commaJoinedQuotedList" . }} ``` Upvotes: 2
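The quoting logic the templates above implement — wrap each element in double quotes, join with commas, and guard the empty case that username_2 asks about — reduces to a few lines of string handling. Here is the same idea in Python for reference (the function name is made up for illustration):

```python
def quoted_join(items):
    # Empty input -> empty string, instead of one stray pair of quotes,
    # which is the guard the Helm one-liner above lacks.
    if not items:
        return ""
    # Equivalent of: "{{- join "\",\"" ... }}" wrapped in outer quotes.
    return '"' + '","'.join(items) + '"'

paths = quoted_join(["/var/log/containers/kube*", "/var/log/containers/tiller*"])
```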
2018/03/14
629
1,913
<issue_start>username_0: I am pretty new on CSS/Html/JS and want to create a series of Boxes (loaded from a json file) and display them horizontally. Something like This: [![enter image description here](https://i.stack.imgur.com/ZgxMR.png)](https://i.stack.imgur.com/ZgxMR.png) I tried to achieve this with the following code: ``` .wrapper { display: grid; grid-template-columns: auto auto; } .Product { display: grid; grid-template-columns: auto 1fr; background-color: rgb(2, 121, 61); padding: 10px; } Pos: test1 Artikel: test2 Bezeichnung: test3 Menge: test4 Einheit:test5 Lagerplatz:test6 Intern:test7 Pos: test1 Artikel: test2 Bezeichnung: test3 Menge: test4 Einheit:test5 Lagerplatz:test6 Intern:test7 ``` But the result looks like this: [![css_result](https://i.stack.imgur.com/iu16r.png)](https://i.stack.imgur.com/iu16r.png) As you can see the divs are not horizontal and the width fills the screen. I want the boxes to be horizontally aligned and not to stop at the screen end. If I could put the whole element into a horizontal scroll view I would be even happier. Thanks for your time.<issue_comment>username_1: In your product class use inline-grid... ```css .wrapper { display: grid; grid-template-columns: auto auto; } .Product { display: inline-grid; grid-template-columns: auto 1fr; background-color: rgb(2, 121, 61); padding: 10px; } ``` ```html Pos: test1 Artikel: test2 Bezeichnung: test3 Menge: test4 Einheit:test5 Lagerplatz:test6 Intern:test7 Pos: test1 Artikel: test2 Bezeichnung: test3 Menge: test4 Einheit:test5 Lagerplatz:test6 Intern:test7 ``` Upvotes: 2 [selected_answer]<issue_comment>username_2: You specify the `wrapper` class wrong. You should put `wrapper` not `Wrapper` in the main div `class`. If you want a space between columns you can use `grid-column-gap: 20px;` Upvotes: 0
2018/03/14
1,204
4,626
<issue_start>username_0: I have a pretty simple query here. I have a view page with a div and a form element. This is how they look. ``` ``` I need to access the value of candy inside the input's attribute (it can be any attribute). I tried the code as I have shown above but that didn't work. I researched on StackOverflow too but couldn't find anything satisfactory. Please help out. Edit: Thank you everyone. I found the answer to that, which I am gonna mark. Also, deleting this question so that it doesn't confuse someone else.<issue_comment>username_1: Do it in JavaScript outside of code but after the objects exist. Here's an example of how to achieve this: ```js var candy = document.getElementById('candy').getAttribute('data-value'); document.getElementById('input').value = candy; ``` ```html ``` As mentioned in the comments, please make sure your JavaScript code is loaded after your markup. There are various ways to do this, including waiting for the dom to load. See [$(document).ready equivalent without jQuery](https://stackoverflow.com/questions/799981/document-ready-equivalent-without-jquery) and [How does the location of a script tag in a page affect a JavaScript function that is defined in it?](https://stackoverflow.com/questions/496646/how-does-the-location-of-a-script-tag-in-a-page-affect-a-javascript-function-tha) for more information.
Upvotes: 2 <issue_comment>username_2: If I assume you want to do this at page load, do it like this *Note 1, custom attributes should have a `data-` prefix and use `.dataset` to access its value.* *Note 2, for older browsers like IE10 and below, you need `getAttribute` (as in 2nd sample below).* Stack snippet 1 ```html document.getElementById('candy2').value = document.getElementById('candy').dataset.value ``` --- Stack snippet 2 ```html document.getElementById('candy2').value = document.getElementById('candy').getAttribute('data-value') ``` Upvotes: 2 <issue_comment>username_3: Try this ```js document.getElementById('input').value = document.getElementById('candy').dataset.value ``` ```html ``` Upvotes: 1 <issue_comment>username_4: **This is not how you should be doing this.** JavaScript should be separated out of the HTML completely to avoid a whole host of issues. Including JavaScript in the HTML as you are attempting is a 20+ year old technique that was used before we had standards. Next, a `div` element can't have a `value` attribute. `value` is only for form fields. But, you can create a **[`data-*`](https://developer.mozilla.org/en-US/docs/Learn/HTML/Howto/Use_data_attributes)** attribute, which allows for you to create custom attributes. You can then extract that value using the **[`.dataset`](https://developer.mozilla.org/en-US/docs/Web/API/HTMLElement/dataset)** property. See below: ```js // This code would be placed inside of script tags and the whole // thing would be placed just before the closing body tag. document.getElementById("result").value = document.getElementById('candy').dataset.value; ``` ```html ``` Upvotes: 0 <issue_comment>username_5: Put that code either within a script tag or in a separate js file. Further, always bind the event `DOMContentLoaded` when you need to manipulate DOM elements.
[DOMContentLoaded](https://developer.mozilla.org/en-US/docs/Web/Events/DOMContentLoaded) ---------------------------------------------------------------------------------------- > > The [`DOMContentLoaded`](https://developer.mozilla.org/en-US/docs/Web/Events/DOMContentLoaded) event is fired when the initial HTML document has been completely loaded and parsed, without waiting for stylesheets, images, and subframes to finish loading. A very different event load should be used only to detect a fully-loaded page. It is an incredibly popular mistake to use load where DOMContentLoaded would be much more appropriate, so be cautious. > > > This way, your logic is totally consistent. ```js document.addEventListener("DOMContentLoaded", function(event) { console.log("DOM fully loaded and parsed"); document.getElementById('candy2').value = document.getElementById('candy').getAttribute('value') }); ``` ```html ``` A recommendation is to use [`data-attributes`](https://developer.mozilla.org/en-US/docs/Web/API/HTMLElement/dataset) because the value attribute is related to form fields: ```js document.addEventListener("DOMContentLoaded", function(event) { console.log("DOM fully loaded and parsed"); document.getElementById('candy2').value = document.getElementById('candy').dataset.value; }); ``` ```html ``` Upvotes: 0
2018/03/14
3,041
10,394
<issue_start>username_0: I receive the following exception when try to create a ListView.builder using data from local json. [ERROR:topaz/lib/tonic/logging/dart\_error.cc(16)] Unhandled exception: type 'List' is not a subtype of type 'Map' in type cast where List is from dart:core Map is from dart:core Please someone could help me? ``` Future> getProdutosFromAsset() async { return new Stream.fromFuture(rootBundle.loadString('assets/produtos.json')) .transform(json.decoder) .expand((jsonBody) => (jsonBody as Map)['results']) .map((jsonPlace) => new Produto.fromJson(jsonPlace)); } class Despensa extends StatefulWidget { @override DespensaState createState() => new DespensaState(); } class DespensaState extends State { var produtoList = []; dataJson() async { final stream = await getProdutosFromAsset(); stream.listen((place) => setState(() => produtoList.add(place))); produtoList.forEach((f) => print(f)); } @override initState() { super.initState(); dataJson(); } @override Widget build(BuildContext context) { return new Scaffold( appBar: new AppBar( title: new Row( mainAxisAlignment: MainAxisAlignment.start, mainAxisSize: MainAxisSize.max, children: [ new Row(children: [ new Icon(Icons.kitchen), new Text(' Despensa'), ]), new Expanded( child: new Icon(Icons.search), ) ], ), ), body: new ListView.builder( itemBuilder: (BuildContext context, int index) => new ProdutoItem(produtoList[index]), itemCount: produtoList.length, ), ); } } ``` **produtos.json** ``` [ { "prodcd": 1, "proditem": false, "prodtitle": "Bebidas", "prodcont": 0, "prodicon": "graphics/bebidas.png", "proddesde": "13/03/2018", "prodchildren": [ { "prodcd": 101, "proditem": false, "prodtitle": "Aguas", "prodcont": 0, "prodicon": "", "proddesde": "13/03/2018", "prodchildren": [ { "prodcd": 1001, "proditem": false, "prodtitle": "Agua Tonica", "prodcont": 0, "prodicon": "", "proddesde": "13/03/2018", "prodchildren": [] }, { "prodcd": 1002, "proditem": false, "prodtitle": "Agua Mineral", "prodcont": 0, 
"prodicon": "", "proddesde": "13/03/2018", "prodchildren": [] } ] }, { "prodcd": 102, "proditem": false, "prodtitle": "Energeticos", "prodcont": 0, "prodicon": "", "proddesde": "13/03/2018", "prodchildren": [] }, { "prodcd": 103, "proditem": false, "prodtitle": "Chas", "prodcont": 0, "prodicon": "", "proddesde": "13/03/2018", "prodchildren": [] }, { "prodcd": 104, "proditem": true, "prodtitle": "Sucos", "prodcont": 1, "prodicon": "", "proddesde": "13/03/2018", "prodchildren": [] }, { "prodcd": 105, "proditem": false, "prodtitle": "Refrescos", "prodcont": 0, "prodicon": "", "proddesde": "13/03/2018", "prodchildren": [] }, { "prodcd": 106, "proditem": true, "prodtitle": "Refrigerantes", "prodcont": 0, "prodicon": "", "proddesde": "13/03/2018", "prodchildren": [] }, { "prodcd": 107, "proditem": false, "prodtitle": "Cervejas", "prodcont": 0, "prodicon": "", "proddesde": "13/03/2018", "prodchildren": [] }, { "prodcd": 108, "proditem": false, "prodtitle": "Destilados", "prodcont": 0, "prodicon": "", "proddesde": "13/03/2018", "prodchildren": [] }, { "prodcd": 109, "proditem": false, "prodtitle": "Whisky", "prodcont": 0, "prodicon": "", "proddesde": "13/03/2018", "prodchildren": [] }, { "prodcd": 110, "proditem": false, "prodtitle": "Vinhos e Espumantes", "prodcont": 0, "prodicon": "", "proddesde": "13/03/2018", "prodchildren": [] } ] }, { "prodcd": 2, "proditem": false, "prodtitle": "Carnes e Aves", "prodcont": 0, "prodicon": "graphics/carnes_e_aves.png", "proddesde": "13/03/2018", "prodchildren": [] }, { "prodcd": 3, "proditem": false, "prodtitle": "Cereais e Farinhas", "prodcont": 0, "prodicon": "graphics/cereais.png", "proddesde": "13/03/2018", "prodchildren": [] }, { "prodcd": 4, "proditem": false, "prodtitle": "Congelados", "prodcont": 0, "prodicon": "graphics/congelados.png", "proddesde": "13/03/2018", "prodchildren": [] }, { "prodcd": 5, "proditem": false, "prodtitle": "Enlatados", "prodcont": 0, "prodicon": "graphics/enlatados.png", "proddesde": "13/03/2018", 
"prodchildren": [] }, { "prodcd": 6, "proditem": false, "prodtitle": "Frios e Laticínios", "prodcont": 0, "prodicon": "graphics/frios_laticinios.png", "proddesde": "13/03/2018", "prodchildren": [] }, { "prodcd": 7, "proditem": false, "prodtitle": "Higiene e Beleza", "prodcont": 0, "prodicon": "graphics/higiene.png", "proddesde": "13/03/2018", "prodchildren": [] }, { "prodcd": 8, "proditem": false, "prodtitle": "Hortifruti", "prodcont": 0, "prodicon": "graphics/hortifruti.png", "proddesde": "13/03/2018", "prodchildren": [] }, { "prodcd": 9, "proditem": false, "prodtitle": "Limpeza", "prodcont": 0, "prodicon": "graphics/limpeza.png", "proddesde": "13/03/2018", "prodchildren": [] }, { "prodcd": 10, "proditem": false, "prodtitle": "Massas e biscoitos", "prodcont": 0, "prodicon": "graphics/massas.png", "proddesde": "13/03/2018", "prodchildren": [] }, { "prodcd": 11, "proditem": false, "prodtitle": "Mercearia", "prodcont": 0, "prodicon": "graphics/mercearia.png", "proddesde": "13/03/2018", "prodchildren": [] }, { "prodcd": 12, "proditem": false, "prodtitle": "Padaria", "prodcont": 0, "prodicon": "graphics/padaria.png", "proddesde": "13/03/2018", "prodchildren": [] }, { "prodcd": 13, "proditem": false, "prodtitle": "Perfumaria", "prodcont": 0, "prodicon": "graphics/perfumaria.png", "proddesde": "13/03/2018", "prodchildren": [] }, { "prodcd": 14, "proditem": false, "prodtitle": "Pescados", "prodcont": 0, "prodicon": "graphics/pescados.png", "proddesde": "13/03/2018", "prodchildren": [] }, { "prodcd": 15, "proditem": false, "prodtitle": "Pet shop", "prodcont": 0, "prodicon": "graphics/petshop.png", "proddesde": "13/03/2018", "prodchildren": [] }, { "prodcd": 16, "proditem": false, "prodtitle": "Utilidades Domésticas", "prodcont": 0, "prodicon": "graphics/ud.png", "proddesde": "13/03/2018", "prodchildren": [] } ] ```<issue_comment>username_1: Your expansion thinks that you JSON is a giant object with a `results` key, but actually your JSON is already the list you are trying 
to stream to. So, you should fix your stream expansion to: `.expand((List<Map<String, dynamic>> list) => list)`. Like: *main.dart* ``` import 'dart:async'; import 'dart:io'; import 'dart:convert'; class Product { final String title; Product({this.title}); factory Product.fromJson(Map<String, dynamic> data) { return new Product(title: data['prodtitle']); } } void main() { final Stream<String> stream = new Stream.fromFuture(new File('products.json').readAsString()); stream .transform(JSON.decoder) .expand((List<Map<String, dynamic>> list) => list) .map((Map<String, dynamic> data) => new Product.fromJson(data)) .listen((Product p) => print(p.title)); } ``` *output* [![enter image description here](https://i.stack.imgur.com/KyOem.png)](https://i.stack.imgur.com/KyOem.png) Upvotes: 1 <issue_comment>username_2: Well, I tried a little more and found one possible solution from <NAME>. Now it works as expected. Thanks for all the support. ``` List produtoList = []; Future<String> loadAssetJson() async { return await rootBundle.loadString('assets/produtos.json'); } Future loadProdutos() async { String jsonProdutos = await loadAssetJson(); parseJsonProdutos(jsonProdutos).forEach((f) => produtoList.add(new Produto.fromJson(f))); } List parseJsonProdutos(String jsonString) { List jsonDecoded = JSON.decode(jsonString); return jsonDecoded; } class Despensa extends StatefulWidget { @override DespensaState createState() => new DespensaState(); } class DespensaState extends State<Despensa> { @override initState() { super.initState(); loadProdutos(); } ``` Upvotes: 0
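The root cause of the `'List' is not a subtype of 'Map'` error in this thread is easy to reproduce outside Dart: when the top-level JSON value is an array, the decoder hands back a list, and a dict-style `['results']` lookup on it fails. A small Python illustration (the key name is taken from the question's file):

```python
import json

doc = '[{"prodtitle": "Bebidas"}, {"prodtitle": "Carnes e Aves"}]'
data = json.loads(doc)

# The decoded root is a list, so iterate it directly -- there is no
# wrapping object with a 'results' key to index into.
titles = [item["prodtitle"] for item in data]
```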
2018/03/14
500
1,544
<issue_start>username_0: ``` #include <stdio.h> int main() { char *name[] = { "hello" , "world" , "helloworld" }; /* character array */ printf("%s", (*(name+2)+7)); return 0; } ``` The above code prints out "rld". I want to print only "r".<issue_comment>username_1: Use %c: ``` printf("%c", *(*(name+2)+7)); ``` Upvotes: 2 <issue_comment>username_2: You can use a simple trick as follows, ``` printf("%c", name[2][7]); ``` And as you want a character, you should use `%c`. Here is a working [demo](https://onlinegdb.com/SyW_sc8FG). Upvotes: 1 <issue_comment>username_3: For starters you do not have a character array. You have an array of pointers. Also it would be better to declare the type of the array elements like ``` const char * ``` because string literals are immutable in C. And instead of the `%s` specifier you need to use the specifier `%c` to output just a character. A simple and clear way to output the target character of the third element of the array is ``` printf("%c", name[2][7]); ``` Or using pointer arithmetic you can write ``` printf("%c", *(*( name + 2 )+7 ) ); ``` Here is a demonstrative program ``` #include <stdio.h> int main(void) { const char *name[] = { "hello" , "world" , "helloworld" }; printf( "%c\n", *( * ( name + 2 ) + 7 ) ); printf( "%c\n", name[2][7] ); return 0; } ``` Its output is ``` r r ``` Take into account that according to the C Standard the function `main` without parameters shall be declared like ``` int main( void ) ``` Upvotes: 2
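As a quick cross-check of the indexing in the answers above (not part of the original thread), the same two-step subscript can be run in Python, where it likewise selects the single character `r`:

```python
name = ["hello", "world", "helloworld"]

# name[2] is the third string; [7] then picks its eighth character,
# mirroring *(*(name + 2) + 7) and name[2][7] in the C answers.
print(name[2][7])  # r
```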
2018/03/14
499
1,649
<issue_start>username_0: Drupal 7 to Drupal 8 migration. I've migrated terms in the source language, but I'm unable to migrate term's translations (i18n) - name and description. I've created a custom source plugin, where I create new fields with translations for taxonomy name and description. So how to migrate term translations? D6 example doesn't work. Thank you.
2018/03/14
1,135
3,560
<issue_start>username_0: I use AngularJS and I have the below code : ``` <tr ng-repeat="a in table"> {{a.ClientID}} | {{a.SiteName}} | {{a.Group}} | ``` The result of this table is: ``` ClientID SiteName Group ========= ========== ======= 1 Ikaria Group 2 Ikaria Group 3 Limnos Null 4 Pythion Group ``` I want to create a filter so that when AlarmGroup = Group and the SiteName appears multiple times, it gives me the result below: ``` ClientID SiteName Group ========= ========== ======= 1 (+) Ikaria Group 3 Limnos Null 4 Pythion Group ``` When I click ClientID (+) I want to see a row with ClientID = 2. Do you have any idea? Thanks!!<issue_comment>username_1: You can easily achieve that using a custom unique filter. Here is the working code ```html (function() { var app = angular.module("testApp", ['ui.bootstrap']); app.controller('testCtrl', ['$scope', '$http', function($scope, $http) { $scope.showDupes = function(site){ if($scope.siteName == site){ $scope.siteName = undefined; } else{ $scope.siteName = site; } }; $scope.filter='SiteName'; $scope.getCount = function(i) { var iCount = iCount || 0; for (var j = 0; j < $scope.tableData.length; j++) { if ($scope.tableData[j].SiteName == i) { iCount++; } } return iCount; } $scope.tableData = [{"ClientID":1,"SiteName":"Ikaria","Group":"Group"},{"ClientID":2,"SiteName":"Ikaria","Group":"Group"},{"ClientID":3,"SiteName":"Limnos","Group":"Null"},{"ClientID":4,"SiteName":"Limnos","Group":"Null"},{"ClientID":5,"SiteName":"Limnos","Group":"Null"},{"ClientID":6,"SiteName":"Limnos","Group":"Null"},{"ClientID":7,"SiteName":"Limnos","Group":"Null"},{"ClientID":8,"SiteName":"Pythion","Group":"Group"}]; }]); app.filter('unique', function() { return function(items, filterOn, dupe) { if (filterOn === false) { return items; } if ((filterOn || angular.isUndefined(filterOn)) && angular.isArray(items)) { var hashCheck = {}, newItems = []; var extractValueToCompare = function(item) { if (angular.isObject(item) && angular.isString(filterOn)) { return item[filterOn]; } 
else { return item; } }; angular.forEach(items, function(item) { var valueToCheck, isDuplicate = false; for (var i = 0; i < newItems.length; i++) { if (newItems[i][filterOn] != dupe && angular.equals(extractValueToCompare(newItems[i]), extractValueToCompare(item))) { isDuplicate = true; break; } } item.isDuplicate = isDuplicate; newItems.push(item); }); items = newItems; } return items; }; }); }()); | | | | | --- | --- | --- | | {{a.ClientID}} | {{a.SiteName}} + {{getCount(a.SiteName)-1}} | {{a.Group}} | Reset ``` Upvotes: 3 [selected_answer]<issue_comment>username_2: ```js angular.module('app', []).controller('ctrl', function($scope){ $scope.data = [ {ClientID:1, SiteName: 'Ikaria', Group: 'Group'}, {ClientID:2, SiteName: 'Ikaria', Group: 'Group'}, {ClientID:3, SiteName: 'Ikaria', Group: 'Group'}, {ClientID:4, SiteName: 'Limnos', Group: 'Null'}, {ClientID:5, SiteName: 'Pythion', Group: 'Group'}, {ClientID:6, SiteName: 'Pythion', Group: 'Group'}, {ClientID:7, SiteName: 'Test', Group: 'Null'} ]; }) ``` ```css table, th, td { border: 1px solid black; border-collapse: collapse; } ``` ```html | ClientID | SiteName | Group | | {{item.ClientID}} [(+)](#) | {{item.SiteName}} | {{item.Group}} | ``` Upvotes: 1
2018/03/14
1,413
3,965
<issue_start>username_0: Im trying to plot a dataframe like this: ``` A = pd.DataFrame([[1, 5, 2, 8, 2], [2, 4, 4, 20, 2], [3, 3, 1, 20, 2], [4, 2, 2, 1, 0], [5, 1, 4, -5, -4], [1, 5, 2, 2, -20], [2, 4, 4, 3, 0], [3, 3, 1, -1, -1], [4, 2, 2, 0, 0], [5, 1, 4, 20, -2]], columns=['a', 'b', 'c', 'd', 'e'], index=[1, 2, 3, 4, 5, 6, 7, 8, 9, 10]) plt.plot(np.cumsum(A.transpose())) ``` It looks like this: [![enter image description here](https://i.stack.imgur.com/518TN.png)](https://i.stack.imgur.com/518TN.png) However, I would like the first print of the chart to start at 0 for all lines. I tried adding another column according to [this](https://stackoverflow.com/questions/25122099/move-column-by-name-to-front-of-table-in-pandas), but didn't work. For some reason the index didn't change and kept the newly created column at the end in the plot. ``` A['s'] = 0 cols = list(A) cols.insert(0, cols.pop(cols.index('s'))) A = A.loc[:, cols] plt.plot(np.cumsum(A.transpose())) ``` [![enter image description here](https://i.stack.imgur.com/JHGh1.png)](https://i.stack.imgur.com/JHGh1.png)
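One way to get every cumulative line to start at zero is to place the zero column at position 0 before transposing. A hedged sketch of that idea (the column name `'s'` follows the question; a tiny two-column frame stands in for the real data):

```python
import numpy as np
import pandas as pd

# Small stand-in for the question's DataFrame.
A = pd.DataFrame([[1, 5], [2, 4], [3, 3]], columns=["a", "b"], index=[1, 2, 3])

# DataFrame.insert places the zero column at position 0 in one step,
# avoiding the manual list-reordering attempted in the question.
A.insert(0, "s", 0)

# Equivalent to np.cumsum(A.transpose()) in the question; cumulating down
# the rows now starts every series at the all-zero row 's'.
cum = A.transpose().cumsum()
print(cum.iloc[0].tolist())  # [0, 0, 0]
```

Plotting `cum` with `plt.plot` would then draw each line from zero, since `'s'` is the first row after the transpose.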
2018/03/14
565
1,503
<issue_start>username_0: I want to turn the entire content of a numeric (incl. NA's) data frame into one column. What would be the smartest way of achieving the following? ``` >df <- data.frame(C1=c(1,NA,3),C2=c(4,5,NA),C3=c(NA,8,9)) >df C1 C2 C3 1 1 4 NA 2 NA 5 8 3 3 NA 9 >x <- mysterious_operation(df) >x [1] 1 NA 3 4 5 NA NA 8 9 ``` I want to calculate the mean of this vector, so ideally I'd want to remove the NA's within the mysterious\_operation - the data frame I'm working on is very large so it will probably be a good idea.<issue_comment>username_1: The mysterious operation you are looking for is called `unlist`: ``` > df <- data.frame(C1=c(1,NA,3),C2=c(4,5,NA),C3=c(NA,8,9)) > unlist(df, use.names = F) [1] 1 NA 3 4 5 NA NA 8 9 ``` Upvotes: 3 [selected_answer]<issue_comment>username_2: We can use `unlist` and create a single column `data.frame` ``` df1 <- data.frame(col =unlist(df)) ``` Upvotes: 1 <issue_comment>username_3: Just for fun. Of course `unlist` is the most appropriate function. 1. alternative `stack(df)[,1]` 2. alternative `do.call(c,df)` `do.call(c,c(df,use.names=F)) #unnamed version` Maybe they are more mysterious. Upvotes: 1 <issue_comment>username_4: Here's a couple ways with `purrr`: ``` # using invoke, a wrapper around do.call purrr::invoke(c, df, use.names = FALSE) # similar to unlist, reduce list of lists to a single vector purrr::flatten_dbl(df) ``` Both return: ``` [1] 1 NA 3 4 5 NA NA 8 9 ``` Upvotes: 3
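For readers coming from pandas, the same flatten-then-average operation (including the NA handling the question asks about) can be sketched in Python; the frame below mirrors the question's `df`:

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({"C1": [1, np.nan, 3], "C2": [4, 5, np.nan], "C3": [np.nan, 8, 9]})

# order="F" flattens column by column, matching R's unlist(df) ordering.
flat = df.to_numpy().ravel(order="F")

# nanmean skips the missing values, like mean(x, na.rm = TRUE) in R.
print(np.nanmean(flat))  # 5.0
```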
2018/03/14
463
1,690
<issue_start>username_0: I'm trying to use a select element with mode="multiple". I'd like the input to be disabled, meaning that a user can only choose between existing options, not enter text. How do I do this? My element: ``` import { Select } from 'antd'; import 'antd/dist/antd.css'; const { Option, OptGroup } = Select; <Select onChange={value => this.setState({ yield: value })} mode="multiple" maxTagCount={0} maxTagPlaceholder="Yield metrics"> <Option>Current Yield</Option> <Option>Grower Average</Option> <Option>Variety Potential</Option> <Option>All growers' average</Option> </Select> ```
2018/03/14
338
1,347
<issue_start>username_0: I am trying to use aws appsync api (StartSchemaCreation) to create schema of a new graphql api with the schema of an existing graphql api, that I dumped with GetIntrospectionSchema api of aws appsync. But the --definition param of StartSchemaCreation requires me to provide a blob of graphql schema to create in the new api. I have my graphql schema in .json and .graphql files, but I cannot use them directly, as it gives error "Failed to parse schema document - ensure it's a valid SDL-formatted document." I need help understanding how can I pass my graphql schema through --definition param of start-schema-creation. I am using aws-cli StartSchemaCreation.<issue_comment>username_1: You can use ``` aws appsync start-schema-creation \ --api-id \ --definition file:// ``` Upvotes: 4 [selected_answer]<issue_comment>username_2: It looks like the definition parameter is expected to be a base64 encoded string of the schema. This doesn't appear to be documented anywhere in the AWS CLI docs, but I found this tidbit [here](https://docs.aws.amazon.com/appsync/latest/APIReference/API_StartSchemaCreation.html#appsync-StartSchemaCreation-request-definition) Which led me to try this command which worked: ``` aws appsync start-schema-creation --api-id --definition $(base64 /path/to/schema.graphql) ``` Upvotes: 0
2018/03/14
1,070
4,189
<issue_start>username_0: I came across a piece of code where two methods have very similar functionalities, return the same type, but are different. ``` private Set<String> extractDeviceInfo(List<Device> devices){ Set<String> sets = new HashSet<>(); for(Device item : devices){ sets.add(item.getDeviceName()); } return sets; } private Set<String> extractDeviceInfoFromCustomer(List<Customer> customers){ Set<String> sets = new HashSet<>(); for (Customer c : customers) { sets.add(c.getDeviceName()); } return sets; } ``` As you can see from the code above, both methods return the same Set and retrieve the same data. I'm attempting to create a generic method out of it and did some research, but couldn't find anything that could solve this issue. If I understand this correctly, using generics, I can define generic parameters in the method and then pass parameters as well as the class type when calling the method. However, I am not sure what to do afterwards. For example, the method **getDeviceName()** - how can I call it on a generic class, as the compiler doesn't know whether the generic class has that method or not? I would really appreciate it if someone could tell me whether this is possible and how to achieve the desired result. Thanks UPDATE: Creating an interface and then having implementations looks like a good solution, but I feel like it's overkill when it comes to refactoring a couple of methods to avoid boilerplate. I've noticed that `Class` objects can be passed as a parameter and they have methods like **getMethod()** etc. I was wondering if it was possible to create a generic method where you pass the class as well as the method name and then the method resolves that at runtime, e.g. 
``` private Set genericMethod(Class clazz, String methodName ){ clazz.resolveMethod(methodName); } ``` So basically, I could do this when calling the method: ``` genericMethod(Customer.class, "getDeviceInfo"); ``` I believe there's one language where this is achievable, but I'm not sure if you can do it in Java, although a few years back I remember reading about resolving strings into Java code so they get compiled at runtime.<issue_comment>username_1: Both `Device` and `Customer` should implement the same interface where the method `getDeviceName` is defined: ``` interface Marker { String getDeviceName(); } class Device implements Marker { ... } class Customer implements Marker { ... } ``` I named it `Marker`, but it's up to you to name it reasonably. Then, the method might look like: ``` private Set<String> extractDeviceInfo(List<? extends Marker> markers) { return markers.stream().map(Marker::getDeviceName).collect(Collectors.toSet()); } ``` It allows the following type variations: ``` extractDeviceInfo(new ArrayList<Device>()); extractDeviceInfo(new ArrayList<Customer>()); extractDeviceInfo(new ArrayList<Marker>()); ``` Upvotes: 3 [selected_answer]<issue_comment>username_2: 99% of the time [Andrew's answer](https://stackoverflow.com/a/49278503/4391450) is the solution. But another approach is to pass the extracting function as a parameter. This can be useful for some reporting, or if you need to be able to extract values from an instance in multiple ways using the same method. 
``` public static <T, R> Set<R> extractInfo(List<T> data, Function<T, R> function){ return data.stream().map(function).collect(Collectors.toSet()); } ``` Example : ``` public class Dummy{ private String a; private long b; public Dummy(String a, long b){ this.a = a; this.b = b; } public String getA(){ return a; } public long getB(){ return b; } } List<Dummy> list = new ArrayList<>(); list.add(new Dummy("A1", 1)); list.add(new Dummy("A2", 2)); list.add(new Dummy("A3", 3)); Set<String> setA = extractInfo(list, Dummy::getA); // A1, A2, A3 Set<Long> setB = extractInfo(list, Dummy::getB); // 1, 2, 3 ``` Upvotes: 1 <issue_comment>username_3: Using reflection in Java will take a performance hit; in your case, it's probably not worth it. There is nothing wrong with your original code: if there are fewer than 3 places using it, DO NOT refactor. If there are more than 3 places and you expect more coming, you can refactor using @andrew's method. You should not refactor code just for the sake of refactoring, in my opinion. Upvotes: 0
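The function-as-parameter idea in the second answer translates directly to other languages; here is a rough Python sketch, where a plain callable plays the role of the `Function<T, R>` argument (the `Dummy` fields mirror the answer's example):

```python
from dataclasses import dataclass

@dataclass
class Dummy:
    a: str
    b: int

def extract_info(data, key):
    # 'key' is any callable that pulls one value out of an item,
    # just like the Function parameter in the Java answer.
    return {key(item) for item in data}

items = [Dummy("A1", 1), Dummy("A2", 2), Dummy("A3", 3)]
print(sorted(extract_info(items, lambda d: d.a)))  # ['A1', 'A2', 'A3']
print(sorted(extract_info(items, lambda d: d.b)))  # [1, 2, 3]
```

The same extractor works for any field, which is exactly the reuse the Java version gets from passing `Dummy::getA` or `Dummy::getB`.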
2018/03/14
3,483
14,352
<issue_start>username_0: I want to test a create method of my project, but this create method has 3 steps in my form and I want to test all of them. To test each step I need to send a create request with their respective params of the step. The problem is: I am repeating many params in each step, I want to know how can I put the common params in a method and then just call it. Here is my rspec file ``` require 'rails_helper' RSpec.describe Api::MenteeApplicationsController, type: :controller do describe "Api Mentee Application controller tests" do let(:edition) { create(:edition) } it 'should start create a Mentee Application, step 1' do edition post :create, application: { first_name: "Mentee", last_name: "Rspec", email: "<EMAIL>", gender: "female", country: "IN", program_country: "IN", time_zone: "5 - Mumbai", communicating_in_english: "true", send_to_mentor_confirmed: "true", time_availability: 3, previous_programming_experience: "false" }, step: "1", steps: "3" expect(response).to have_http_status(200) end it 'should continue to create a Mentee Application, step 2' do post :create, application: { first_name: "Mentee", last_name: "Rspec", email: "<EMAIL>", gender: "female", country: "IN", program_country: "IN", time_zone: "5 - Mumbai", communicating_in_english: "true", send_to_mentor_confirmed: "true", time_availability: 3, motivation: "Motivation", background: "Background", team_work_experience: "Team Work Experience", previous_programming_experience: "false" }, step: "2", steps: "3" expect(response).to have_http_status(200) end it 'should not create a Mentee Application in api format' do applications = MenteeApplication.count post :create, application: { first_name: "Mentee", last_name: "Rspec", email: "<EMAIL>", gender: "female", country: "IN", program_country: "IN", time_zone: "5 - Mumbai", communicating_in_english: "true", send_to_mentor_confirmed: "true", motivation: "Motivation", background: "Background", team_work_experience: "Team Work Experience", 
previous_programming_experience: "false", experience: "", operating_system: "mac_os", project_proposal: "Project Proposal", roadmap: "Roadmap", time_availability: 3, engagements: ["master_student", "part_time", "volunteer", "one_project"] }, step: "3", steps: "3" expect(response).to have_http_status(:unprocessable_entity) expect(MenteeApplication.count).to be(0) end it 'should create a Mentee Application in api format (step 3)' do applications = MenteeApplication.count post :create, application: { first_name: "Mentee", last_name: "Rspec", email: "<EMAIL>", gender: "female", country: "IN", program_country: "IN", time_zone: "5 - Mumbai", communicating_in_english: "true", send_to_mentor_confirmed: "true", motivation: "Motivation", background: "Background", programming_language: "ruby", team_work_experience: "Team Work Experience", previous_programming_experience: "false", experience: "", operating_system: "mac_os", project_proposal: "Project Proposal", roadmap: "Roadmap", time_availability: 3, engagements: ["master_student", "part_time", "volunteer", "one_project"] }, step: "3", steps: "3" expect(response).to have_http_status(200) expect(MenteeApplication.count).to be(applications+1) expect(flash[:notice]).to eq("Thank you for your application!") end end end ``` As you can see, the params in step 1 are used in steps 2 and 3, so I was thinking in something like this: ``` def some_params params.require(:application).permit(first_name: "Mentee", last_name: "Rspec", email: "<EMAIL>", gender: "female", country: "IN", program_country: "IN", time_zone: "5 - Mumbai", communicating_in_english: "true", send_to_mentor_confirmed: "true", time_availability: 3, previous_programming_experience: "false") end ``` But didn't work, how can I do that?<issue_comment>username_1: `let` blocks allow you to define variables for using within the tests cases (`it`s). 
Some key points to be aware of: * They are lazily evaluated: code within the block is not run until you call the variable (unless you use a bang -- `let!` -- which forces the evaluation) * They might be overridden within inner `context`s Head to [RSpec docs](https://relishapp.com/rspec/rspec-core/v/3-7/docs/helper-methods/let-and-let) to know more about them. --- The code you provided could make use of `let`s just like this: ``` require 'rails_helper' RSpec.describe Api::MenteeApplicationsController, type: :controller do describe "Api Mentee Application controller tests" do let(:edition) { create(:edition) } let(:first_step_params) do { first_name: 'Mentee', last_name: 'Rspec', #... previous_programming_experience: false, } end let(:second_step_params) do { motivation: "Motivation", background: "Background", team_work_experience: "Team Work Experience", }.merge(first_step_params) end let(:third_step_params) do { operating_system: "mac_os", project_proposal: "Project Proposal", roadmap: "Roadmap", time_availability: 3, engagements: ["master_student", "part_time", "volunteer", "one_project"], }.merge(third_step_params) end it 'should start create a Mentee Application, step 1' do edition post :create, application: first_step_params, step: "1", steps: "3" expect(response).to have_http_status(200) end it 'should continue to create a Mentee Application, step 2' do post :create, application: second_step_params, step: "2", steps: "3" expect(response).to have_http_status(200) end it 'should not create a Mentee Application in api format' do applications = MenteeApplication.count post :create, application: third_step_params, step: "3", steps: "3" expect(response).to have_http_status(:unprocessable_entity) expect(MenteeApplication.count).to be(0) end end end ``` Additional suggestions ---------------------- ### 1. Do not implement controller specs Controllers are meant to be a thin software layer between the user interface and background services. 
Their tests can hardly be acknowledged as integration (end-to-end) nor unit tests. I'd suggest you to implement feature specs instead. ([capybara](https://github.com/teamcapybara/capybara/#using-capybara-with-rspec) is a great match for Rails testing with RSpec) This [blog post](https://medium.com/table-xi/whats-up-with-rails-controller-tests-f0ece1fdd9f0) might provide more insights on this. ### 2. Do not use should in your test cases descriptions See [betterspecs.org](http://www.betterspecs.org/#should). ### 3. Mind the last trailing comma in ``` let(:application_params) do { first_name: 'Mentee', last_name: 'Rspec', #... previous_programming_experience: false, } end ``` It prevents [incidental changes](https://www.rubytapas.com/2012/11/16/episode-024-incidental-change/). ### 4. Use a .rspec file With contents such as ``` --require rails_helper ``` So you don't need `require 'rails_helper'` on top of each spec file. ### 5. Use `context`s This is also a guidance from betterspecs.org. You could do something like ``` RSpec.describe Api::MenteeApplicationsController, type: :controller do describe "Api Mentee Application controller tests" do let(:edition) { create(:edition) } let(:application_params) do { #... } end let(:step) { 1 } it 'should start create a Mentee Application' do edition post :create, application: application_params, step: step, steps: "3" expect(response).to have_http_status(200) end context 'in second step' do let(:step) { 2 } it 'should continue to create a Mentee Application' do post :create, application: application_params, step: step, steps: "3" expect(response).to have_http_status(200) end end end end ``` `context`s might also be handy for handling additional params: ``` RSpec.describe Api::MenteeApplicationsController, type: :controller do describe "Api Mentee Application controller tests" do let(:edition) { create(:edition) } let(:application_params) do common_params.merge(additional_params) end let(:commom_params) do { #... 
} end let(:additional_params) { {} } it 'creates an application' do post :create, application: application_params end context 'with API params' do let(:additional_params) do { #... } end it 'creates an application' do post :create, application: application_params end end end end ``` Note that the `post` method call became exactly the same in both contexts. This would allow for reusing it (in a `before` block or even another `let` block). Upvotes: 3 [selected_answer]<issue_comment>username_2: I think I would be tempted to do it something like below. Essentially: 1. Create a memoized variable called `@full_application` and wrap it in a method (I've done this at the bottom of the test). 2. Create constants stipulating the subsets of the values that you want for each test, such as `STEP_ONE_PARAMS`, `STEP_TWO_PARAMS`, etc. 3. In each `it` block, use `.slice` and the constants defined above to "grab" the values from `full_application` that you want to use. Something like this: ``` require 'rails_helper' RSpec.describe Api::MenteeApplicationsController, type: :controller do STEP_ONE_PARAMS = %w( first_name last_name email gender country communicating_in_english send_to_mentor_confirmed time_availability previous_programming_experience ).freeze STEP_TWO_PARAMS = STEP_ONE_PARAMS.dup.concat(%w( motivation background team_work_experience )).freeze STEP_THREE_PARAMS = STEP_TWO_PARAMS.dup.concat(%w( operating_system project_proposal roadmap engagements )).freeze describe "Api Mentee Application controller tests" do let(:edition) { create(:edition) } it 'should start create a Mentee Application, step 1' do edition post :create, application: full_application.slice(*STEP_ONE_PARAMS), step: "1", steps: "3" expect(response).to have_http_status(200) end it 'should continue to create a Mentee Application, step 2' do post :create, application: full_application.slice(*STEP_TWO_PARAMS), step: "2", steps: "3" expect(response).to have_http_status(200) end it 'should not create a Mentee 
Application in api format' do applications = MenteeApplication.count post :create, application: full_application.slice(*STEP_THREE_PARAMS), step: "3", steps: "3" expect(response).to have_http_status(:unprocessable_entity) expect(MenteeApplication.count).to be(0) end it 'should create a Mentee Application in api format (step 3)' do applications = MenteeApplication.count post :create, application: full_application, step: "3", steps: "3" expect(response).to have_http_status(200) expect(MenteeApplication.count).to be(applications+1) expect(flash[:notice]).to eq("Thank you for your application!") end end end def full_application @full_application ||= { first_name: "Mentee", last_name: "Rspec", email: "<EMAIL>", gender: "female", country: "IN", program_country: "IN", time_zone: "5 - Mumbai", communicating_in_english: "true", send_to_mentor_confirmed: "true", motivation: "Motivation", background: "Background", programming_language: "ruby", team_work_experience: "Team Work Experience", previous_programming_experience: "false", experience: "", operating_system: "mac_os", project_proposal: "Project Proposal", roadmap: "Roadmap", time_availability: 3, engagements: [ "master_student", "part_time", "volunteer", "one_project" ] } end ``` Upvotes: 0
2018/03/14
1,976
6,994
<issue_start>username_0: I'm using this method in a fragment to compress an image, if I'm not mistaken and then upload it to google firebase server: ``` Bitmap thumb_bitmap = new Compressor(this.getActivity()) .setMaxWidth(200) .setMaxHeight(200) .setQuality(75) .compressToBitmap(thumb_filePath); ``` I end up getting the following error from Android Studio: `java.lang.NullPointerException: Attempt to invoke virtual method 'boolean android.graphics.Bitmap.compress(android.graphics.Bitmap$CompressFormat, int, java.io.OutputStream)' on a null object reference` This is the rest of the code for the method: ``` private void uploadImage() { if (filePath[0] != null && filePath[1] != null) { final ProgressDialog progressDialog = new ProgressDialog(getActivity()); //progressDialog.setTitle("Uploading..."); //progressDialog.show(); for ( Uri path : filePath ) { id_or_proof += 1; File thumb_filePath = new File(path.getPath()); Log.d ( "THUMB FILE PATH", path.getPath() ); String current_user_id = mCurrentUser.getUid(); Bitmap thumb_bitmap = new Compressor(this.getActivity()) .setMaxWidth(200) .setMaxHeight(200) .setQuality(75) .compressToBitmap(thumb_filePath); Log.d ( "BITMAP", String.valueOf(thumb_bitmap)); ByteArrayOutputStream baos = new ByteArrayOutputStream(); thumb_bitmap.compress(Bitmap.CompressFormat.JPEG, 100, baos); final byte[] thumb_byte = baos.toByteArray(); StorageReference filepath = mImageStorage.child("profile_images").child(current_user_id + ".jpg"); final StorageReference thumb_filepath = mImageStorage.child("profile_images").child("thumbs").child(current_user_id + ".jpg"); filepath.putFile(path).addOnCompleteListener(new OnCompleteListener() { @Override public void onComplete(@NonNull Task task) { if(task.isSuccessful()){ final String download\_url = task.getResult().getDownloadUrl().toString(); UploadTask uploadTask = thumb\_filepath.putBytes(thumb\_byte); uploadTask.addOnCompleteListener(new OnCompleteListener() { @Override public void onComplete(@NonNull 
Task thumb\_task) { String thumb\_downloadUrl = thumb\_task.getResult().getDownloadUrl().toString(); if(thumb\_task.isSuccessful()){ if ( id\_or\_proof == 1 ) { Map update\_hashMap = new HashMap(); update\_hashMap.put("id\_image", download\_url); update\_hashMap.put("thumb\_id\_image", thumb\_downloadUrl); mUserDatabase.updateChildren(update\_hashMap).addOnCompleteListener(new OnCompleteListener() { @Override public void onComplete(@NonNull Task task) { if (task.isSuccessful()) { mProgressDialog.dismiss(); Toast.makeText(getActivity(), "Success Uploading.", Toast.LENGTH\_LONG).show(); } } }); } if ( id\_or\_proof == 2 ) { Map update\_hashMap = new HashMap(); update\_hashMap.put("proof\_image", download\_url); update\_hashMap.put("thumb\_proof\_image", thumb\_downloadUrl); mUserDatabase.updateChildren(update\_hashMap).addOnCompleteListener(new OnCompleteListener() { @Override public void onComplete(@NonNull Task task) { if (task.isSuccessful()) { mProgressDialog.dismiss(); Toast.makeText(getActivity(), "Success Uploading.", Toast.LENGTH\_LONG).show(); } } }); } } else { Toast.makeText(getActivity(), "Error in uploading thumbnail.", Toast.LENGTH\_LONG).show(); mProgressDialog.dismiss(); } } }); } else { Toast.makeText(getActivity(), "Error in uploading.", Toast.LENGTH\_LONG).show(); mProgressDialog.dismiss(); } } }); }// end of for }// end of if () } // end of uploadImage() ``` I need help trying to figure out why I'm getting the error that I'm getting and also how I can fix it. Much Appreciated. 
Here's the logcat: `03-14 15:36:57.383 2780-2780/in.tvac.akshaye.lapitchat E/BitmapFactory: Unable to decode stream: java.io.FileNotFoundException: /document/60 (No such file or directory) 03-14 15:36:57.383 2780-2780/in.tvac.akshaye.lapitchat E/BitmapFactory: Unable to decode stream: java.io.FileNotFoundException: /document/60 (No such file or directory) 03-14 15:36:57.383 2780-2780/in.tvac.akshaye.lapitchat D/BITMAP: null 03-14 15:36:57.383 2780-2780/in.tvac.akshaye.lapitchat D/AndroidRuntime: Shutting down VM 03-14 15:36:57.384 2780-2780/in.tvac.akshaye.lapitchat E/AndroidRuntime: FATAL EXCEPTION: main Process: in.tvac.akshaye.lapitchat, PID: 2780 java.lang.NullPointerException: Attempt to invoke virtual method 'boolean android.graphics.Bitmap.compress(android.graphics.Bitmap$CompressFormat, int, java.io.OutputStream)' on a null object reference at in.tvac.akshaye.lapitchat.PersonalInforamtionFragment.uploadImage(PersonalInforamtionFragment.java:649) at in.tvac.akshaye.lapitchat.PersonalInforamtionFragment.access$1900(PersonalInforamtionFragment.java:57) at in.tvac.akshaye.lapitchat.PersonalInforamtionFragment$8.onComplete(PersonalInforamtionFragment.java:767) at com.google.android.gms.tasks.zzc$1.run(Unknown Source:23) at android.os.Handler.handleCallback(Handler.java:790) at android.os.Handler.dispatchMessage(Handler.java:99) at android.os.Looper.loop(Looper.java:164) at android.app.ActivityThread.main(ActivityThread.java:6494) at java.lang.reflect.Method.invoke(Native Method) at com.android.internal.os.RuntimeInit$MethodAndArgsCaller.run(RuntimeInit.java:438) at com.android.internal.os.ZygoteInit.main(ZygoteInit.java:807)`<issue_comment>username_1: Use correct context ``` Bitmap thumb_bitmap = new Compressor(this.getActivity()) .setMaxWidth(200) .setMaxHeight(200) .setQuality(75) .compressToBitmap(thumb_filePath); ``` Replace it with this ``` Bitmap thumb_bitmap = new Compressor(getActivity()) .setMaxWidth(200) .setMaxHeight(200) .setQuality(75) 
.compressToBitmap(thumb_filePath); ``` **Also the path should be correct as well** Upvotes: 0 <issue_comment>username_2: It seems you are passing the wrong Uri to the new File, so it is not finding your file. **Get from External Storage.** If you are trying to get the file from the External Storage you have to add this permission: ``` ``` Then create the path like this: ``` String filePath = Environment.getExternalStorageDirectory() + "/document/60"; Uri path = Uri.parse(filePath); ``` **Get from app storage** If you are trying to get the file from App Storage, just build your Uri like this: ``` String filePath = getActivity().getFilesDir() + "/document/60"; Uri path = Uri.parse(filePath); ``` Upvotes: 2
2018/03/14
805
2,924
<issue_start>username_0: I am trying to write code for squaring a user-input number in Python. I've created the function my1(). What I want is for Python to take user input of a number and square it, but if the user enters no value, it should print a message and by default give the square of a default number, e.g. 2. Here is what I've tried so far ``` def my1(a=4): if my1() is None: print('You have not entered anything') else: b=a**2 print (b) my1(input("Enter a Number")) ```<issue_comment>username_1: This is a better solution: ``` def my1(a=4): if not a: return 'You have not entered anything' else: try: return int(a)**2 except ValueError: return 'Invalid input provided' my1(input("Enter a Number")) ``` **Explanation** * Have your function `return` values, instead of simply printing. This is good practice. * Use `if not a` to test if your string is empty. This is a Pythonic idiom. * Convert your input string to numeric data, e.g. via `int`. * Catch `ValueError` and return an appropriate message in case the user input is invalid. Upvotes: 2 <issue_comment>username_2: In your second line, it should be if a is None: I think what you want to do is something like the following: ``` def my1(user_input=None): if user_input is None or not str(user_input).isdigit(): print("Input error!") return 4 else: return int(user_input)**2 print(my1(input("Input a number"))) ``` Upvotes: 0 <issue_comment>username_3: You're getting an infinite loop by calling my1() within my1(). I would make the following edits: ``` def my1(a): if a == '': print('You have not entered anything') else: b=int(a)**2 print (b) my1(input("Enter a Number")) ``` Upvotes: 1 [selected_answer]<issue_comment>username_4: When I read your code, I can see that you are very confused about what you are writing. Try to organize your mind around the **tasks** you'll need to perform. Here, you want to: 1. Receive your user inputs. 2. Compute the data. 3. Print accordingly. First, take your input.
``` user_choice = input("Enter a number :") ``` Then, compute the data you received. ``` my1(user_choice) ``` You want your function, as of now, to print an error message if the input is not valid, else print the squared number. ``` def my1(user_choice): # Always give meaning to the name of your variables. if not user_choice: print('Error') else: print(int(user_choice) ** 2) ``` Here, you are basically saying "If my user_choice doesn't exist...". Meaning it equals `False` (it is a bit more complicated than this, but in short, you need to remember this). An empty string doesn't contain anything, for instance. The other choice, `else`, applies once you have handled your error case: the input must be right, so you compute your data accordingly. Upvotes: 0
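Putting the ideas from these answers together, here is a sketch that handles the empty-input default and invalid input in one function (the function name and messages are illustrative; the default of 4 mirrors the question's `my1(a=4)` signature):

```python
def square_input(raw, default=4):
    """Square the user's number; fall back to squaring `default` on empty input."""
    if not raw:  # empty string: the user pressed Enter without typing anything
        print('You have not entered anything, using default', default)
        return default ** 2
    try:
        return int(raw) ** 2
    except ValueError:  # input was not a number at all
        return 'Invalid input provided'

print(square_input('5'))  # 25
```

Called as `square_input(input("Enter a Number"))`, it never calls itself, so the infinite recursion username_3 points out cannot occur.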
2018/03/14
394
1,374
<issue_start>username_0: I want to set the condition which shows all vehicles where `title_recieved` is `null`. ``` ->andFilterWhere(['=', 'tr.title_recieved', null]) ->andFilterWhere(['is', 'tr.title_recieved', null]) ->andFilterWhere(['is', [ 'tr.title_recieved', null]]) ``` I've tried all the available options; the `is null` condition works in `andWhere`, but not in `andFilterWhere`.<issue_comment>username_1: Use andWhere on the query like this. ``` ->andWhere(['tr.title_recieved' => null]); ``` Upvotes: 3 <issue_comment>username_2: Try with this: ``` ->andFilterWhere('tr.title_recieved is NULL'); ``` Upvotes: 3 [selected_answer]<issue_comment>username_3: As per the Yii2 doc, andFilterWhere() adds an additional WHERE condition to the existing one but ignores empty operands. The new condition and the existing one will be joined using the 'AND' operator. This method is similar to andWhere(). The main difference is that this method will remove empty query operands. As a result, this method is best suited for building query conditions based on filter values entered by users, which is also why a literal `null` operand is treated as empty and silently dropped. From the doc: <http://www.yiiframework.com/doc-2.0/yii-db-querytrait.html#andFilterWhere()-detail> Upvotes: 1 <issue_comment>username_4: It can be done like this ``` $query->andFilterWhere(['IS', 'tr.title_recieved', new \yii\db\Expression('NULL')]); ``` Upvotes: 1
2018/03/14
789
2,386
<issue_start>username_0: I use PuTTY to connect to a remote server running Linux. When I run ``` abc@myName((/home/myName)$java -version ``` I get the following ``` java version "1.7.0_80" Java(TM) SE Runtime Environment (build 1.7.0_80-b15) Java HotSpot(TM) 64-Bit Server VM (build 24.80-b11, mixed mode) ``` Then I used `readlink -f $(which java)` to find the location of the java command and I got the location as `/opt/jdk1.7.0_80/bin/java`. Now I navigated to this location and listed the files ``` abc@myName(/opt/jdk1.7.0_80/bin)$ls appletviewer idlj javac javap jconsole jinfo jps jstat native2ascii rmic serialver wsgen apt jar javadoc java-rmi.cgi jcontrol jmap jrunscript jstatd orbd rmid servertool wsimport ControlPanel jarsigner javafxpackager javaws jdb jmc jsadebugd jvisualvm pack200 rmiregistry tnameserv xjc extcheck java javah jcmd jhat jmc.ini jstack keytool policytool schemagen unpack200 ``` Then I tried the following ``` abc@myName(/opt/jdk1.7.0_80/bin)$javac ``` And got ``` -bash: javac: command not found ``` Could someone help me with this?<issue_comment>username_1: The JDK folder you specified is not in your PATH. The current directory is not in your PATH either. Option 1. ``` cd /opt/jdk1.7.0_80/bin ./javac ``` That is using the local path. Option 2. ``` /opt/jdk1.7.0_80/bin/javac ``` That is using the full path. Option 3. ``` export PATH=$PATH:/opt/jdk1.7.0_80/bin javac ``` That is adding the folder to your PATH. Upvotes: 3 [selected_answer]<issue_comment>username_2: This is a $PATH issue. $PATH is an environment variable that contains a list of directories to search when looking for an executable. Try to execute this command: export PATH=/opt/jdk1.7.0_80/bin:$PATH Upvotes: 2 <issue_comment>username_3: In terminal, type the command `javac -version` Does it yield the following message?
> > > ``` > The command «javac» was not found, but it can be installed with: > > apt install default-jdk > apt install openjdk-11-jdk-headless > apt install ecj > apt install openjdk-8-jdk-headless > > ``` > > If so, use `apt install default-jdk` and javac will be working again. Upvotes: 2
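To build on option 3 above: the `export` only lasts for the current shell session. A sketch of wiring it up end to end (using a throwaway directory with a stub `javac` so the example is self-contained; in the question's case `JDK_BIN` would be `/opt/jdk1.7.0_80/bin`):

```shell
# Stand-in JDK bin directory with a stub javac (illustration only)
JDK_BIN="$(mktemp -d)"
printf '#!/bin/sh\necho javac-stub\n' > "$JDK_BIN/javac"
chmod +x "$JDK_BIN/javac"

# Prepend it to PATH so this javac is found first
export PATH="$JDK_BIN:$PATH"

# To persist across logins, the same export line would go into ~/.bashrc:
# echo "export PATH=\"$JDK_BIN:\$PATH\"" >> ~/.bashrc

command -v javac   # now resolves without ./ or a full path
javac
```

Prepending (rather than appending, as in the answer) makes this copy of `javac` win even if another one already exists on the PATH.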
2018/03/14
1,190
4,512
<issue_start>username_0: I'm currently developing a Serverless App with AWS. I want to subscribe to a topic using plain JavaScript (no Node.js, React, Angular etc.). The IoT and IoTData SDKs don't support a "subscribe to topic" function. To achieve this, I need to implement the `aws-iot-device` sdk, via `require('aws-iot-device')` (which I can't use in plain JS). Unfortunately this SDK only works with runtimes like Node.js or Browserify. So how can someone subscribe to a topic from the browser? Is there a way to implement the SDK into plain JS? Thanks in advance<issue_comment>username_1: You can use [paho js](https://www.eclipse.org/paho/clients/js/) or [mqttjs](https://github.com/mqttjs/MQTT.js#browser) in the browser. The aws-iot-device sdk for javascript is a wrapper around mqttjs. Upvotes: 1 <issue_comment>username_2: This is how it's done, works perfectly fine: ``` ``` Copy these libraries into your HTML. ``` function SigV4Utils(){} SigV4Utils.sign = function(key, msg) { var hash = CryptoJS.HmacSHA256(msg, key); return hash.toString(CryptoJS.enc.Hex); }; SigV4Utils.sha256 = function(msg) { var hash = CryptoJS.SHA256(msg); return hash.toString(CryptoJS.enc.Hex); }; SigV4Utils.getSignatureKey = function(key, dateStamp, regionName, serviceName) { var kDate = CryptoJS.HmacSHA256(dateStamp, 'AWS4' + key); var kRegion = CryptoJS.HmacSHA256(regionName, kDate); var kService = CryptoJS.HmacSHA256(serviceName, kRegion); var kSigning = CryptoJS.HmacSHA256('aws4_request', kService); return kSigning; }; function createEndpoint(regionName, awsIotEndpoint, accessKey, secretKey) { var time = moment.utc(); var dateStamp = time.format('YYYYMMDD'); var amzdate = dateStamp + 'T' + time.format('HHmmss') + 'Z'; var service = 'iotdevicegateway'; var region = regionName; var secretKey = secretKey; var accessKey = accessKey; var algorithm = 'AWS4-HMAC-SHA256'; var method = 'GET'; var canonicalUri = '/mqtt'; var host = awsIotEndpoint; var credentialScope = dateStamp + '/' + region + '/' +
service + '/' + 'aws4_request'; var canonicalQuerystring = 'X-Amz-Algorithm=AWS4-HMAC-SHA256'; canonicalQuerystring += '&X-Amz-Credential=' + encodeURIComponent(accessKey + '/' + credentialScope); canonicalQuerystring += '&X-Amz-Date=' + amzdate; canonicalQuerystring += '&X-Amz-SignedHeaders=host'; var canonicalHeaders = 'host:' + host + '\n'; var payloadHash = SigV4Utils.sha256(''); var canonicalRequest = method + '\n' + canonicalUri + '\n' + canonicalQuerystring + '\n' + canonicalHeaders + '\nhost\n' + payloadHash; var stringToSign = algorithm + '\n' + amzdate + '\n' + credentialScope + '\n' + SigV4Utils.sha256(canonicalRequest); var signingKey = SigV4Utils.getSignatureKey(secretKey, dateStamp, region, service); var signature = SigV4Utils.sign(signingKey, stringToSign); canonicalQuerystring += '&X-Amz-Signature=' + signature; canonicalQuerystring += '&X-Amz-Security-Token=' + encodeURIComponent(AWS.config.credentials.sessionToken); return 'wss://' + host + canonicalUri + '?' + canonicalQuerystring; } var endpoint = createEndpoint( 'eu-central-1', // YOUR REGION 'xxxxxx.iot.eu-central-1.amazonaws.com', // YOUR IoT ENDPOINT accesskey, // YOUR ACCESS KEY secretkey); // YOUR SECRET ACCESS KEY var clientId = Math.random().toString(36).substring(7); var client = new Paho.MQTT.Client(endpoint, clientId); var connectOptions = { useSSL: true, timeout: 3, mqttVersion: 4, onSuccess: subscribe }; client.connect(connectOptions); client.onMessageArrived = onMessage; client.onConnectionLost = function(e) { console.log(e) }; function subscribe() { client.subscribe("my/things/something"); console.log("subscribed"); } function onMessage(message) { var status = JSON.parse(message.payloadString); } ``` With this code, you can subscribe to IoT Topics in plain client side JavaScript. No Node.js, React.js or similar is needed! Upvotes: 4 [selected_answer]
2018/03/14
794
2,769
<issue_start>username_0: I used an `XMLHttpRequest` object to retrieve data from a PHP response. Then, I created an XML file: ``` <?xml version="1.0" encoding="UTF-8"?> <persons> <person> <name>Ce</name> <gender>male</gender> <age>24</age> </person> <person> <name>Lin</name> <gender>female</gender> <age>25</age> </person> </persons> ``` In the PHP file, I load the XML file and try to echo the tag values of "name." ``` $dom = new DOMDocument("1.0"); $dom -> load("test.xml"); $persons = $dom -> getElementsByTagName("person"); foreach($persons as $person){ echo $person -> childNodes -> item(0) -> nodeValue; } ``` But the `nodeValue` returned is `null`. However, when I change to `item(1)`, the name tag values can be displayed. Why?<issue_comment>username_1: Change code to ``` $dom = new DOMDocument("1.0"); $dom -> load("test.xml"); $persons = $dom -> getElementsByTagName("persons"); foreach($persons as $person){ echo $person->childNodes[1]->nodeValue; } ``` Upvotes: 0 <issue_comment>username_2: Using DOM you need to get the right element to pick up the name; child nodes include all sorts of things, including whitespace. The node 0 you're trying to use is null because of this. So for DOM... ``` $dom = new DOMDocument("1.0"); $dom -> load("test.xml"); $persons = $dom -> getElementsByTagName("person"); foreach($persons as $person){ $name = $person->getElementsByTagName("name"); echo $name->item(0)->nodeValue.PHP_EOL; } ``` If your requirements are as simple as this, you could alternatively use SimpleXML... ``` $sxml = simplexml_load_file("test.xml"); foreach ( $sxml->person as $person ) { echo $person->name.PHP_EOL; } ``` This allows you to access elements as though they are object properties, and as you can see `->person` equates to accessing the `person` element. Upvotes: -1 [selected_answer]<issue_comment>username_3: Anything in a DOM is a node, including text nodes and text with only whitespace. So the first child of the `person` element node is a text node that contains the linebreak and indent before the `name` element node.
Here is a property that removes any whitespace-only text nodes at parse time: ``` $document = new DOMDocument("1.0"); // do not preserve whitespace only text nodes $document->preserveWhiteSpace = FALSE; $document->load("test.xml"); $persons = $document->getElementsByTagName("person"); foreach ($persons as $person) { echo $person->firstChild->textContent; } ``` However, typically a better way is to use Xpath expressions. ``` $document = new DOMDocument("1.0"); $document->load("test.xml"); $xpath = new DOMXpath($document); $persons = $xpath->evaluate("/persons/person"); foreach ($persons as $person) { echo $xpath->evaluate("string(name)", $person); } ``` `string(name)` fetches the child element node `name` (position is not relevant) and casts it into a string. If there is no `name` element it will return an empty string. Upvotes: 0
2018/03/14
1,009
2,791
<issue_start>username_0: Hey so I have arduino uno and a sim808 with gps antenna and gsm antenna. So here's the sample code: ``` #include #include #define PIN_TX 3 #define PIN_RX 4 SoftwareSerial mySerial(PIN_TX,PIN_RX); //DFRobot_SIM808 sim808(&mySerial);//Connect RX,TX,PWR, DFRobot_SIM808 sim808(&mySerial); void setup() { //mySerial.begin(9600); Serial.begin(9600); //******** Initialize sim808 module ************* while(!sim808.init()) { delay(1000); Serial.print("Sim808 init error\r\n"); } //************* Turn on the GPS power************ if( sim808.attachGPS()) Serial.println("Open the GPS power success"); else Serial.println("Open the GPS power failure"); } void loop() { //************** Get GPS data ******************* if (sim808.getGPS()) { Serial.print(sim808.GPSdata.year); Serial.print("/"); Serial.print(sim808.GPSdata.month); Serial.print("/"); Serial.print(sim808.GPSdata.day); Serial.print(" "); Serial.print(sim808.GPSdata.hour); Serial.print(":"); Serial.print(sim808.GPSdata.minute); Serial.print(":"); Serial.print(sim808.GPSdata.second); Serial.print(":"); Serial.println(sim808.GPSdata.centisecond); Serial.print("latitude :"); Serial.println(sim808.GPSdata.lat); Serial.print("longitude :"); Serial.println(sim808.GPSdata.lon); Serial.print("speed_kph :"); Serial.println(sim808.GPSdata.speed_kph); Serial.print("heading :"); Serial.println(sim808.GPSdata.heading); Serial.println(); //************* Turn off the GPS power ************ sim808.detachGPS(); } } ``` So I'm always getting a result of "sim808 init error" [![here's the result](https://i.stack.imgur.com/Fq8Ow.png)](https://i.stack.imgur.com/Fq8Ow.png) I don't know what the problem is, but I do hope that the sim808 is not broken because it has a light in STA (status) and in NET (network) that is slowly blinking, but there's no light in PPS (gps). I don't know what the problem is. I'm really really
confused.<issue_comment>username_1: You must use pins 7 and 8 of the Arduino as Tx and Rx. With the 3 and 2 you have selected, it will not work for you. Upvotes: 0 <issue_comment>username_2: The line just below `void setup`, ``` //mySerial.begin(9600); ``` must be part of the code, not a comment, so delete the `//`: ``` mySerial.begin(9600); Serial.begin(9600); ``` Also, the 6th line ``` //DFRobot_SIM808 sim808(&mySerial);//Connect RX,TX,PWR, ``` must be part of the code, not a comment, so delete the `//` and try again as ``` DFRobot_SIM808 sim808(&mySerial);//Connect RX,TX,PWR, ``` It should work. Since it is a cold start, it might take time; if you still have the problem after turning those comment lines into code, just swap the pins. Upvotes: 1
2018/03/14
720
2,260
<issue_start>username_0: I have a graphical interface where a user can enter a data filter as a string, like: ``` >= 10 <= 100 ``` I'd like to create an if-condition from this string. The code I currently have splits the string into a list: ``` import re def filter_data(string_filter): # \s actually not necessary, just for optical reason s_filter = r'(\s|>=|>|<=|<|=|OR|AND)' splitted_filter = re.split(s_filter, string_filter) splitted_filter = list(filter(lambda x: not (x.isspace()) and x, splitted_filter)) print(splitted_filter) ``` With the given filter string above, the output would be: ``` ['>=', '10', '<=', '100'] ``` I now want to use this to create the if-condition from it. My current idea would be to create nested if-statements. Do you see a better solution? Thanks!<issue_comment>username_1: Yes, you can string conditions together with `and`. Instead of ``` if x >= 10: if x <= 100: [do stuff] ``` You can generate ``` if x >= 10 and x <= 100: [do stuff] ``` Upvotes: -1 <issue_comment>username_2: Handle the operations with functions instead of control flow syntax constructs. For example: ``` from operator import ge binary_operations = { ">=": ge, ... } splitted_filter = ... x = ... result = True while result and splitted_filter: op = splitted_filter.pop(0) func = binary_operations[op] rhs = splitted_filter.pop(0) result = func(x, float(rhs)) if result: # do stuff ``` Upvotes: 2 <issue_comment>username_3: I would probably create a dict of operator to function. For example: ``` operators = { '>=': lambda a, b: a >= b, '<=': lambda a, b: a <= b, } ``` Then you can start composing those functions together.
First, to iterate by pairs: ``` def pairs(l): assert len(l) % 2 == 0 for i in range(0, len(l), 2): yield (l[i], l[i + 1]) ``` Now, apply that to your filter list and build a list of functions (the default arguments bind op and value at definition time; otherwise every function would see only the last pair): ``` filters_to_apply = [] for op, value in pairs(splitted_filter): def filter_func(record, op=op, value=value): return operators[op](record, value) filters_to_apply.append(filter_func) ``` Finally, apply those filters to your data: ``` if all(f(record) for f in filters_to_apply): # Keep the current record ``` Upvotes: 0
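A compact end-to-end version of the operator-dictionary idea from these answers, applied to the question's tokenised filter (treating consecutive comparisons as AND-joined is an assumption; the question's sample filter has no explicit OR/AND):

```python
import operator
import re

# Map each filter token to the corresponding comparison function
OPS = {'>=': operator.ge, '>': operator.gt,
       '<=': operator.le, '<': operator.lt, '=': operator.eq}

def tokenize(string_filter):
    """Split the filter string, dropping empty and whitespace-only pieces."""
    parts = re.split(r'(\s|>=|>|<=|<|=|OR|AND)', string_filter)
    return [p for p in parts if p and not p.isspace()]

def matches(value, string_filter):
    """True when `value` satisfies every op/operand pair in the filter string."""
    tokens = tokenize(string_filter)
    pairs = zip(tokens[0::2], tokens[1::2])  # ('>=', '10'), ('<=', '100'), ...
    return all(OPS[op](value, float(operand)) for op, operand in pairs)

print(matches(50, '>= 10 <= 100'))  # True
print(matches(5, '>= 10 <= 100'))   # False
```

Because `all()` short-circuits, this behaves like the nested if-statements the question considered, without generating any code.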
2018/03/14
825
3,062
<issue_start>username_0: I am looking for a code snippet using which, I can enable/disable sidebar toggle button in shinydashboard header. ``` library(shiny) library(shinydashboard) library(shinyjs) ui <- shinyUI(dashboardPage( dashboardHeader(), dashboardSidebar(), dashboardBody( useShinyjs() ) )) server <- shinyServer(function(input, output, session) { addClass(selector = "body", class = "sidebar-collapse") # Hide Side Bar }) shinyApp(ui = ui, server = server) ``` Let me know if anybody can help???<issue_comment>username_1: I have found a solution to this...If someone is stuck with same problem, can refer to below solution: ``` library(shiny) library(shinydashboard) library(shinyjs) ui <- shinyUI(dashboardPage( dashboardHeader(), dashboardSidebar( tags$head( tags$script( HTML(#code for hiding sidebar tabs "Shiny.addCustomMessageHandler('manipulateMenuItem1', function(message) { var aNodeList = document.getElementsByTagName('a'); for (var i = 0; i < aNodeList.length; i++) { if(aNodeList[i].getAttribute('data-toggle') == message.toggle && aNodeList[i].getAttribute('role') == message.role) { if(message.action == 'hide') { aNodeList[i].setAttribute('style', 'display: none;'); } else { aNodeList[i].setAttribute('style', 'display: block;'); }; }; } });" ) ) ) ), dashboardBody( useShinyjs(), actionButton("h1","Hide toggle"), actionButton("h2","Show toggle") ) )) server <- shinyServer(function(input, output, session) { observeEvent(input$h1,{ session$sendCustomMessage(type = "manipulateMenuItem1", message = list(action = "hide",toggle = "offcanvas", role = "button")) }) observeEvent(input$h2,{ session$sendCustomMessage(type = "manipulateMenuItem1", message = list(action = "show",toggle = "offcanvas", role = "button")) }) }) shinyApp(ui = ui, server = server) ``` Upvotes: 3 [selected_answer]<issue_comment>username_2: If you use the `shinyjs` package, you can show or hide the sidebar toggle with a quick line of JavaScript. 
``` library(shiny) library(shinydashboard) library(shinyjs) ui <- shinyUI(dashboardPage( dashboardHeader(), dashboardSidebar(), dashboardBody( useShinyjs(), actionButton("hide","Hide toggle"), actionButton("show","Show toggle") ) )) server <- shinyServer(function(input, output, session) { observeEvent(input$hide,{ shinyjs::runjs("document.getElementsByClassName('sidebar-toggle')[0].style.visibility = 'hidden';") }) observeEvent(input$show,{ shinyjs::runjs("document.getElementsByClassName('sidebar-toggle')[0].style.visibility = 'visible';") }) }) shinyApp(ui = ui, server = server) ``` The JavaScript itself just refers to the first element with class `sidebar-toggle` (i.e. the menu button), and toggles the visibility depending on which button the user presses. Upvotes: 3
2018/03/14
5,903
20,849
<issue_start>username_0: While I was trying to build my app, a Gradle update was applied dynamically, and after that my build failed. I searched for my issue, but nothing matched my error. I then tried to create a new app and implement it the same as I had before. While implementing the new one, I needed to install the Ionic Native plugins [fileOpener](https://ionicframework.com/docs/native/file-opener/) and [geolocation](https://ionicframework.com/docs/native/geolocation/), and after installing those plugins, my build failed with the error below: ``` :app:processDebugResources C:\Users\midhun\.gradle\caches\transforms-1\files-1.1\appcompat-v7-26.1.0.aar\056084560d1bc05d95277a5df08184ea\res\values\values.xml:246:5-69: AAPT: error: resource android:attr/fontVariationSettings not found. C:\Users\midhun\.gradle\caches\transforms-1\files-1.1\appcompat-v7-26.1.0.aar\056084560d1bc05d95277a5df08184ea\res\values\values.xml:246:5-69: AAPT: error: resource android:attr/ttcIndex not found. D:\Tineri-v3\platforms\android\app\build\intermediates\incremental\mergeDebugResources\merged.dir\values\values.xml:223: error: resource android:attr/fontVariationSettings not found. D:\Tineri-v3\platforms\android\app\build\intermediates\incremental\mergeDebugResources\merged.dir\values\values.xml:223: error: resource android:attr/ttcIndex not found. error: failed linking references.
Failed to execute aapt com.android.ide.common.process.ProcessException: Failed to execute aapt at com.android.builder.core.AndroidBuilder.processResources(AndroidBuilder.java:796) at com.android.build.gradle.tasks.ProcessAndroidResources.invokeAaptForSplit(ProcessAndroidResources.java:551) at com.android.build.gradle.tasks.ProcessAndroidResources.doFullTaskAction(ProcessAndroidResources.java:285) at com.android.build.gradle.internal.tasks.IncrementalTask.taskAction(IncrementalTask.java:109) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.gradle.internal.reflect.JavaMethod.invoke(JavaMethod.java:73) at org.gradle.api.internal.project.taskfactory.DefaultTaskClassInfoStore$IncrementalTaskAction.doExecute(DefaultTaskClassInfoStore.java:173) at org.gradle.api.internal.project.taskfactory.DefaultTaskClassInfoStore$StandardTaskAction.execute(DefaultTaskClassInfoStore.java:134) at org.gradle.api.internal.project.taskfactory.DefaultTaskClassInfoStore$StandardTaskAction.execute(DefaultTaskClassInfoStore.java:121) at org.gradle.api.internal.tasks.execution.ExecuteActionsTaskExecuter$1.run(ExecuteActionsTaskExecuter.java:122) at org.gradle.internal.progress.DefaultBuildOperationExecutor$RunnableBuildOperationWorker.execute(DefaultBuildOperationExecutor.java:336) at org.gradle.internal.progress.DefaultBuildOperationExecutor$RunnableBuildOperationWorker.execute(DefaultBuildOperationExecutor.java:328) at org.gradle.internal.progress.DefaultBuildOperationExecutor.execute(DefaultBuildOperationExecutor.java:197) at org.gradle.internal.progress.DefaultBuildOperationExecutor.run(DefaultBuildOperationExecutor.java:107) at org.gradle.api.internal.tasks.execution.ExecuteActionsTaskExecuter.executeAction(ExecuteActionsTaskExecuter.java:111) at 
org.gradle.api.internal.tasks.execution.ExecuteActionsTaskExecuter.executeActions(ExecuteActionsTaskExecuter.java:92) at org.gradle.api.internal.tasks.execution.ExecuteActionsTaskExecuter.execute(ExecuteActionsTaskExecuter.java:70) at org.gradle.api.internal.tasks.execution.SkipUpToDateTaskExecuter.execute(SkipUpToDateTaskExecuter.java:63) at org.gradle.api.internal.tasks.execution.ResolveTaskOutputCachingStateExecuter.execute(ResolveTaskOutputCachingStateExecuter.java:54) at org.gradle.api.internal.tasks.execution.ValidatingTaskExecuter.execute(ValidatingTaskExecuter.java:58) at org.gradle.api.internal.tasks.execution.SkipEmptySourceFilesTaskExecuter.execute(SkipEmptySourceFilesTaskExecuter.java:88) at org.gradle.api.internal.tasks.execution.ResolveTaskArtifactStateTaskExecuter.execute(ResolveTaskArtifactStateTaskExecuter.java:52) at org.gradle.api.internal.tasks.execution.SkipTaskWithNoActionsExecuter.execute(SkipTaskWithNoActionsExecuter.java:52) at org.gradle.api.internal.tasks.execution.SkipOnlyIfTaskExecuter.execute(SkipOnlyIfTaskExecuter.java:54) at org.gradle.api.internal.tasks.execution.ExecuteAtMostOnceTaskExecuter.execute(ExecuteAtMostOnceTaskExecuter.java:43) at org.gradle.api.internal.tasks.execution.CatchExceptionTaskExecuter.execute(CatchExceptionTaskExecuter.java:34) at org.gradle.execution.taskgraph.DefaultTaskGraphExecuter$EventFiringTaskWorker$1.run(DefaultTaskGraphExecuter.java:248) at org.gradle.internal.progress.DefaultBuildOperationExecutor$RunnableBuildOperationWorker.execute(DefaultBuildOperationExecutor.java:336) at org.gradle.internal.progress.DefaultBuildOperationExecutor$RunnableBuildOperationWorker.execute(DefaultBuildOperationExecutor.java:328) at org.gradle.internal.progress.DefaultBuildOperationExecutor.execute(DefaultBuildOperationExecutor.java:197) at org.gradle.internal.progress.DefaultBuildOperationExecutor.run(DefaultBuildOperationExecutor.java:107) at 
org.gradle.execution.taskgraph.DefaultTaskGraphExecuter$EventFiringTaskWorker.execute(DefaultTaskGraphExecuter.java:241) at org.gradle.execution.taskgraph.DefaultTaskGraphExecuter$EventFiringTaskWorker.execute(DefaultTaskGraphExecuter.java:230) at org.gradle.execution.taskgraph.DefaultTaskPlanExecutor$TaskExecutorWorker.processTask(DefaultTaskPlanExecutor.java:124) at org.gradle.execution.taskgraph.DefaultTaskPlanExecutor$TaskExecutorWorker.access$200(DefaultTaskPlanExecutor.java:80) at org.gradle.execution.taskgraph.DefaultTaskPlanExecutor$TaskExecutorWorker$1.execute(DefaultTaskPlanExecutor.java:105) at org.gradle.execution.taskgraph.DefaultTaskPlanExecutor$TaskExecutorWorker$1.execute(DefaultTaskPlanExecutor.java:99) at org.gradle.execution.taskgraph.DefaultTaskExecutionPlan.execute(DefaultTaskExecutionPlan.java:625) at org.gradle.execution.taskgraph.DefaultTaskExecutionPlan.executeWithTask(DefaultTaskExecutionPlan.java:580) FAILED 37 actionable tasks: 37 executed at org.gradle.execution.taskgraph.DefaultTaskPlanExecutor$TaskExecutorWorker.run(DefaultTaskPlanExecutor.java:99) at org.gradle.internal.concurrent.ExecutorPolicy$CatchAndRecordFailures.onExecute(ExecutorPolicy.java:63) at org.gradle.internal.concurrent.ManagedExecutorImpl$1.run(ManagedExecutorImpl.java:46) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) at org.gradle.internal.concurrent.ThreadFactoryImpl$ManagedThreadRunnable.run(ThreadFactoryImpl.java:55) at java.lang.Thread.run(Thread.java:748) Caused by: java.util.concurrent.ExecutionException: java.util.concurrent.ExecutionException: com.android.tools.aapt2.Aapt2Exception: AAPT2 error: check logs for details at com.google.common.util.concurrent.AbstractFuture.getDoneValue(AbstractFuture.java:503) at com.google.common.util.concurrent.AbstractFuture.get(AbstractFuture.java:482) at 
com.google.common.util.concurrent.AbstractFuture$TrustedFuture.get(AbstractFuture.java:79) at com.android.builder.core.AndroidBuilder.processResources(AndroidBuilder.java:794) ... 48 more Caused by: java.util.concurrent.ExecutionException: com.android.tools.aapt2.Aapt2Exception: AAPT2 error: check logs for details at com.google.common.util.concurrent.AbstractFuture.getDoneValue(AbstractFuture.java:503) at com.google.common.util.concurrent.AbstractFuture.get(AbstractFuture.java:462) at com.google.common.util.concurrent.AbstractFuture$TrustedFuture.get(AbstractFuture.java:79) at com.android.builder.internal.aapt.v2.QueueableAapt2.lambda$makeValidatedPackage$1(QueueableAapt2.java:179) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) ... 1 more Caused by: com.android.tools.aapt2.Aapt2Exception: AAPT2 error: check logs for details at com.android.builder.png.AaptProcess$NotifierProcessOutput.handleOutput(AaptProcess.java:454) at com.android.builder.png.AaptProcess$NotifierProcessOutput.err(AaptProcess.java:411) at com.android.builder.png.AaptProcess$ProcessOutputFacade.err(AaptProcess.java:332) at com.android.utils.GrabProcessOutput$1.run(GrabProcessOutput.java:104) FAILURE: Build failed with an exception. * What went wrong: Execution failed for task ':app:processDebugResources'. > Failed to execute aapt * Try: Run with --stacktrace option to get the stack trace. Run with --info or --debug option to get more log output. * Get more help at https://help.gradle.org BUILD FAILED in 1m 41s (node:8656) UnhandledPromiseRejectionWarning: Unhandled promise rejection (rejection id: 1): Error: cmd: Command failed with exit code 1 Error output: Note: Some input files use or override a deprecated API. Note: Recompile with -Xlint:deprecation for details. 
C:\Users\midhun\.gradle\caches\transforms-1\files-1.1\appcompat-v7-26.1.0.aar\056084560d1bc05d95277a5df08184ea\res\values\values.xml:246:5-69: AAPT: error: resource android:attr/fontVariationSettings not found. C:\Users\midhun\.gradle\caches\transforms-1\files-1.1\appcompat-v7-26.1.0.aar\056084560d1bc05d95277a5df08184ea\res\values\values.xml:246:5-69: AAPT: error: resource android:attr/ttcIndex not found. D:\Tineri-v3\platforms\android\app\build\intermediates\incremental\mergeDebugResources\merged.dir\values\values.xml:223: error: resource android:attr/fontVariationSettings not found. D:\Tineri-v3\platforms\android\app\build\intermediates\incremental\mergeDebugResources\merged.dir\values\values.xml:223: error: resource android:attr/ttcIndex not found. error: failed linking references. Failed to execute aapt com.android.ide.common.process.ProcessException: Failed to execute aapt at com.android.builder.core.AndroidBuilder.processResources(AndroidBuilder.java:796) at com.android.build.gradle.tasks.ProcessAndroidResources.invokeAaptForSplit(ProcessAndroidResources.java:551) at com.android.build.gradle.tasks.ProcessAndroidResources.doFullTaskAction(ProcessAndroidResources.java:285) at com.android.build.gradle.internal.tasks.IncrementalTask.taskAction(IncrementalTask.java:109) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.gradle.internal.reflect.JavaMethod.invoke(JavaMethod.java:73) at org.gradle.api.internal.project.taskfactory.DefaultTaskClassInfoStore$IncrementalTaskAction.doExecute(DefaultTaskClassInfoStore.java:173) at org.gradle.api.internal.project.taskfactory.DefaultTaskClassInfoStore$StandardTaskAction.execute(DefaultTaskClassInfoStore.java:134) at 
org.gradle.api.internal.project.taskfactory.DefaultTaskClassInfoStore$StandardTaskAction.execute(DefaultTaskClassInfoStore.java:121) at org.gradle.api.internal.tasks.execution.ExecuteActionsTaskExecuter$1.run(ExecuteActionsTaskExecuter.java:122) at org.gradle.internal.progress.DefaultBuildOperationExecutor$RunnableBuildOperationWorker.execute(DefaultBuildOperationExecutor.java:336) at org.gradle.internal.progress.DefaultBuildOperationExecutor$RunnableBuildOperationWorker.execute(DefaultBuildOperationExecutor.java:328) at org.gradle.internal.progress.DefaultBuildOperationExecutor.execute(DefaultBuildOperationExecutor.java:197) at org.gradle.internal.progress.DefaultBuildOperationExecutor.run(DefaultBuildOperationExecutor.java:107) at org.gradle.api.internal.tasks.execution.ExecuteActionsTaskExecuter.executeAction(ExecuteActionsTaskExecuter.java:111) at org.gradle.api.internal.tasks.execution.ExecuteActionsTaskExecuter.executeActions(ExecuteActionsTaskExecuter.java:92) at org.gradle.api.internal.tasks.execution.ExecuteActionsTaskExecuter.execute(ExecuteActionsTaskExecuter.java:70) at org.gradle.api.internal.tasks.execution.SkipUpToDateTaskExecuter.execute(SkipUpToDateTaskExecuter.java:63) at org.gradle.api.internal.tasks.execution.ResolveTaskOutputCachingStateExecuter.execute(ResolveTaskOutputCachingStateExecuter.java:54) at org.gradle.api.internal.tasks.execution.ValidatingTaskExecuter.execute(ValidatingTaskExecuter.java:58) at org.gradle.api.internal.tasks.execution.SkipEmptySourceFilesTaskExecuter.execute(SkipEmptySourceFilesTaskExecuter.java:88) at org.gradle.api.internal.tasks.execution.ResolveTaskArtifactStateTaskExecuter.execute(ResolveTaskArtifactStateTaskExecuter.java:52) at org.gradle.api.internal.tasks.execution.SkipTaskWithNoActionsExecuter.execute(SkipTaskWithNoActionsExecuter.java:52) at org.gradle.api.internal.tasks.execution.SkipOnlyIfTaskExecuter.execute(SkipOnlyIfTaskExecuter.java:54) at 
org.gradle.api.internal.tasks.execution.ExecuteAtMostOnceTaskExecuter.execute(ExecuteAtMostOnceTaskExecuter.java:43) at org.gradle.api.internal.tasks.execution.CatchExceptionTaskExecuter.execute(CatchExceptionTaskExecuter.java:34) at org.gradle.execution.taskgraph.DefaultTaskGraphExecuter$EventFiringTaskWorker$1.run(DefaultTaskGraphExecuter.java:248) at org.gradle.internal.progress.DefaultBuildOperationExecutor$RunnableBuildOperationWorker.execute(DefaultBuildOperationExecutor.java:336) at org.gradle.internal.progress.DefaultBuildOperationExecutor$RunnableBuildOperationWorker.execute(DefaultBuildOperationExecutor.java:328) at org.gradle.internal.progress.DefaultBuildOperationExecutor.execute(DefaultBuildOperationExecutor.java:197) at org.gradle.internal.progress.DefaultBuildOperationExecutor.run(DefaultBuildOperationExecutor.java:107) at org.gradle.execution.taskgraph.DefaultTaskGraphExecuter$EventFiringTaskWorker.execute(DefaultTaskGraphExecuter.java:241) at org.gradle.execution.taskgraph.DefaultTaskGraphExecuter$EventFiringTaskWorker.execute(DefaultTaskGraphExecuter.java:230) at org.gradle.execution.taskgraph.DefaultTaskPlanExecutor$TaskExecutorWorker.processTask(DefaultTaskPlanExecutor.java:124) at org.gradle.execution.taskgraph.DefaultTaskPlanExecutor$TaskExecutorWorker.access$200(DefaultTaskPlanExecutor.java:80) at org.gradle.execution.taskgraph.DefaultTaskPlanExecutor$TaskExecutorWorker$1.execute(DefaultTaskPlanExecutor.java:105) at org.gradle.execution.taskgraph.DefaultTaskPlanExecutor$TaskExecutorWorker$1.execute(DefaultTaskPlanExecutor.java:99) at org.gradle.execution.taskgraph.DefaultTaskExecutionPlan.execute(DefaultTaskExecutionPlan.java:625) at org.gradle.execution.taskgraph.DefaultTaskExecutionPlan.executeWithTask(DefaultTaskExecutionPlan.java:580) at org.gradle.execution.taskgraph.DefaultTaskPlanExecutor$TaskExecutorWorker.run(DefaultTaskPlanExecutor.java:99) at 
org.gradle.internal.concurrent.ExecutorPolicy$CatchAndRecordFailures.onExecute(ExecutorPolicy.java:63) at org.gradle.internal.concurrent.ManagedExecutorImpl$1.run(ManagedExecutorImpl.java:46) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) at org.gradle.internal.concurrent.ThreadFactoryImpl$ManagedThreadRunnable.run(ThreadFactoryImpl.java:55) at java.lang.Thread.run(Thread.java:748) Caused by: java.util.concurrent.ExecutionException: java.util.concurrent.ExecutionException: com.android.tools.aapt2.Aapt2Exception: AAPT2 error: check logs for details at com.google.common.util.concurrent.AbstractFuture.getDoneValue(AbstractFuture.java:503) at com.google.common.util.concurrent.AbstractFuture.get(AbstractFuture.java:482) at com.google.common.util.concurrent.AbstractFuture$TrustedFuture.get(AbstractFuture.java:79) at com.android.builder.core.AndroidBuilder.processResources(AndroidBuilder.java:794) ... 48 more Caused by: java.util.concurrent.ExecutionException: com.android.tools.aapt2.Aapt2Exception: AAPT2 error: check logs for details at com.google.common.util.concurrent.AbstractFuture.getDoneValue(AbstractFuture.java:503) at com.google.common.util.concurrent.AbstractFuture.get(AbstractFuture.java:462) at com.google.common.util.concurrent.AbstractFuture$TrustedFuture.get(AbstractFuture.java:79) at com.android.builder.internal.aapt.v2.QueueableAapt2.lambda$makeValidatedPackage$1(QueueableAapt2.java:179) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) ... 
1 more Caused by: com.android.tools.aapt2.Aapt2Exception: AAPT2 error: check logs for details at com.android.builder.png.AaptProcess$NotifierProcessOutput.handleOutput(AaptProcess.java:454) at com.android.builder.png.AaptProcess$NotifierProcessOutput.err(AaptProcess.java:411) at com.android.builder.png.AaptProcess$ProcessOutputFacade.err(AaptProcess.java:332) at com.android.utils.GrabProcessOutput$1.run(GrabProcessOutput.java:104) FAILURE: Build failed with an exception. * What went wrong: Execution failed for task ':app:processDebugResources'. > Failed to execute aapt * Try: Run with --stacktrace option to get the stack trace. Run with --info or --debug option to get more log output. * Get more help at https://help.gradle.org BUILD FAILED in 1m 41s (node:8656) [DEP0018] DeprecationWarning: Unhandled promise rejections are deprecated. In the future, promise rejections that are not handled will terminate the Node.js process with a non-zero exit code. ``` While searching for this, I found a clue that the build is failing because a plugin version is incompatible with my app. How can I use those plugins in my Ionic 3 app? Please help me sort this out. **here is my ionic info** ``` cli packages: (C:\Users\midhun\AppData\Roaming\npm\node_modules) @ionic/cli-utils : 1.19.1 ionic (Ionic CLI) : 3.19.1 global packages: cordova (Cordova CLI) : 8.0.0 local packages: @ionic/app-scripts : 3.1.8 Cordova Platforms : android 7.0.0 Ionic Framework : ionic-angular 3.6.0 System: Node : v8.9.4 npm : 5.6.0 OS : Windows 7 Environment Variables: ANDROID_HOME : C:\Users\midhun\AppData\Local\Android\Sdk\platform-tools;C:\Users\midhun\AppData\Local\Android\Sdk\tools; Misc: backend : pro ``` **here is my cordova requirements** ``` Android Studio project detected Requirements check results for android: Java JDK: installed 1.8.0 Android SDK: installed true Android target: installed android-27,android-26,android-25,android-24,Google Inc.:Google APIs:24,android-23,Google Inc.:Google
APIs:23,android-22,Google Inc.:Google APIs:22,android-21,Google Inc.:Google APIs:21,android-20,android-19,Google Inc.:Google APIs:19,android-18,Google Inc.:Google APIs:18,android-17,Google Inc.:Google APIs:17,android-16,Google Inc.:Google APIs:16,android-15,Google Inc.:Google APIs:15,android-14,Google Inc.:Google APIs:14 Gradle: installed C:\Program Files\Android\Android Studio\gradle\gradle-4.1\bin\gradle ```<issue_comment>username_1: Try this command to build an APK for Android: > > ionic cordova build android > > > For more information, see: <https://ionicframework.com/docs/intro/deploying/> Upvotes: 0 <issue_comment>username_2: You will need to change the file platforms/android/project.properties. Instead of "cordova.system.library.1:....v4.0", you should use "cordova.system.library.1=com.android.support:support-v4:27.1.0" Upvotes: 4 [selected_answer]<issue_comment>username_3: Late answer, but this Cordova plugin will do the job for you. The accepted answer didn't help me, since I'm using a build server. ``` https://github.com/dpa99c/cordova-android-support-gradle-release ``` Upvotes: 0
2018/03/14
564
1,739
<issue_start>username_0: I would like to select the first occurrence of an `li` in a `ul` that does not have a particular class. Check the snippet below for an example. Here I want to add the class `red` to the first `li` that does not have the class `myClass`; I expect `Two` to get the class `red`. ```js $(document).ready(function() { $('ul li:not(.myClass):first-child').addClass('red'); }); ``` ```css .red { color: red; } ``` ```html * One * Two * Three * Four * Five ``` **Update:** In the same way, I want to select the second such `li` as well. Is that possible? Thanks in advance.<issue_comment>username_1: Use the [`:first`](https://api.jquery.com/first-selector/) selector instead of `:first-child` ```js $(document).ready(function() { $('ul li:not(.myClass):first').addClass('red'); }); ``` ```css .red { color: red; } ``` ```html * One * Two * Three * Four * Five ``` You can target an element based on its index using [`.eq(index)`](https://api.jquery.com/eq-selector/) ``` $('ul li:not(.myClass):eq(1)').addClass('red'); ``` Upvotes: 3 [selected_answer]<issue_comment>username_2: You just need to use `:first` instead of `:first-child`, so your selector should look like `'ul li:not(.myClass):first'`. ``` $(document).ready(function() { $('ul li:not(.myClass):first').addClass('red'); }); ``` **Edited** After your comment, I would recommend the jQuery [eq()](https://api.jquery.com/eq/) function. Using eq() you can select any `li` by passing its zero-based index. ``` $('ul li:not(.myClass):eq(1)').addClass('red'); ``` For the `eq()` function, the first `li` is at position `0`. Upvotes: 1 <issue_comment>username_3: try ``` $(document).ready(function() { $('ul li:first-child').addClass('red'); }); ``` Upvotes: 0
2018/03/14
986
3,645
<issue_start>username_0: Hello, I am currently building a table that will allow multiple column types in it. I want to be able to use it like: ```html ``` `text-column` and `icon-column` are separate directives. I currently have an abstract class called `column`, and let's say the `text-column` and the `icon-column` may look something like: ```ts export abstract class Column { @Input() someParameters:string; @Input() header:string; } export class TextColumnDirective extends Column { //I do cool stuff here } export class IconColumnDirective extends Column { //I do different cool stuff } ``` My table may look something like: ```ts @Component({ selector:'my-table', template: ` | {{column.header}} | | --- | ` }) export class MyTableComponent { @ContentChildren(Column) columns: QueryList<Column>; //I do cool stuff too } ``` So this approach works if I do not use the abstract class and instead query for a concrete directive like `@ContentChildren(TextColumnDirective) columns: QueryList<TextColumnDirective>;`, but that only gets the text columns, and the same goes for the icon column. How can I accomplish this so that I can add directives for different column types later?<issue_comment>username_1: Ok, so in the comments above I was told to look at a comment at a link they provided. They were correct, and I will post the code here for the next person. ```html ``` ``` @Directive({ selector: 'text-column', providers: [{ provide: Column, useExisting: forwardRef(() => TextColumnDirective) }] }) export class TextColumnDirective extends Column { // ... } @Directive({ selector: 'icon-column', providers: [{ provide: Column, useExisting: forwardRef(() => IconColumnDirective) }] }) export class IconColumnDirective extends Column { // ... } ``` Hope this helps the next person.
Upvotes: 2 [selected_answer]<issue_comment>username_2: The answer from @username_1 is correct, but the `forwardRef(() => {})` is not required unless the type being provided is defined after the decorator or the decorated class; see [this Angular In Depth post](https://indepth.dev/what-is-forwardref-in-angular-and-why-we-need-it/) Note this approach can be used for `ContentChildren` or `ViewChildren`; below I use `ViewChildren` **item.ts** ```js import { Directive } from '@angular/core'; export class Item { color = ''; } @Directive({ selector: '[blueItem]', providers: [{ provide: Item, useExisting: BlueItemDirective }], }) export class BlueItemDirective { // 'extends Item' is optional color = 'blue'; } @Directive({ selector: '[redItem]', providers: [{ provide: Item, useExisting: RedItemDirective }], }) export class RedItemDirective { // 'extends Item' is optional color = 'red'; } ``` **app.component.ts** ```js import { Component, ViewChildren, QueryList } from '@angular/core'; import { Item } from './item'; @Component({ selector: 'my-app', templateUrl: './app.component.html', styleUrls: [ './app.component.css' ] }) export class AppComponent { name = 'Multiple View Child Types'; // Note we query for 'Item' here and not 'RedItemDirective' // or 'BlueItemDirective', but this query selects both types @ViewChildren(Item) viewItems: QueryList<Item>; itemColors: string[] = []; ngAfterViewInit() { this.itemColors = this.viewItems.map(item => item.color); } } ``` **app.component.html** ```html Item Item Item Item Item Colors of the above directives ------------------------------ * {{color}} ``` Here is a [StackBlitz](https://stackblitz.com/edit/angular-multiple-view-child-types) showing this behavior in action. Upvotes: 4
2018/03/14
823
2,899
<issue_start>username_0: [The Algorithm](https://i.stack.imgur.com/gUwqZ.png) [This question (1-b)](https://i.stack.imgur.com/n3W4W.png) is asking for the number of comparisons made by the algorithm above in the average case, given the probability of successful search, which is p (0<=p<=1). All I understand is this: in the worst case scenario, the algorithm would make n+1 comparisons. I don't understand the solution below. [The solution](https://i.stack.imgur.com/muNv5.png)
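Since the algorithm and solution images linked in the question are not reproduced here, the following is only a hedged sketch of the usual textbook average-case analysis. It assumes the classic sequential search, where a successful search that stops at position i (1-based) costs i key comparisons, every position is equally likely (probability p/n each), and an unsuccessful search (probability 1 − p) scans all n elements. Under those assumptions the closed form is C_avg(n) = p(n+1)/2 + n(1−p); the snippet below (helper names are mine) cross-checks it against the expectation summed outcome by outcome:

```java
// Hedged sketch of the average-case comparison count for sequential search.
// Assumptions (mine, since the algorithm image is not shown here): success at
// position i costs i comparisons, positions are equally likely, an
// unsuccessful search costs n comparisons.
public class AvgCaseSearch {

    // Closed form: C_avg(n) = p * (n + 1) / 2 + n * (1 - p)
    static double closedForm(int n, double p) {
        return p * (n + 1) / 2.0 + n * (1 - p);
    }

    // Direct expectation: sum (cost * probability) over every outcome.
    static double directExpectation(int n, double p) {
        double expected = n * (1 - p);   // unsuccessful search: n comparisons
        for (int i = 1; i <= n; i++) {
            expected += i * (p / n);     // success at position i: i comparisons
        }
        return expected;
    }

    public static void main(String[] args) {
        int n = 10;
        for (double p : new double[] {0.0, 0.5, 1.0}) {
            System.out.printf("n=%d, p=%.1f: closed form = %.2f, direct = %.2f%n",
                    n, p, closedForm(n, p), directExpectation(n, p));
        }
    }
}
```

Sanity checks: p = 1 gives (n+1)/2 (on average you inspect half the list before finding the key), and p = 0 gives n. If the algorithm in the image uses a sentinel or counts an extra loop-index comparison (which would explain the n+1 worst case the question mentions), each term shifts by a constant, but the shape of the derivation stays the same.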
2018/03/14
638
2,229
<issue_start>username_0: Let's say I have a method like this: ``` public int toTest() { try { Thread.sleep(60 * 1_000); } catch (InterruptedException ignored) {} return 8; } ``` And I would like to test it, e.g. check that the returned value is correct, like this: ``` @Test public void test() { int actual = toTest(); assertThat(actual).isEqualTo(8); } ``` Is there any way to "simulate" the time lapse so that during test execution I will not be forced to wait for the whole minute? Edit: I probably described my question too concretely. I didn't want to focus on this exact one minute but on a way to bypass it. It could even be 100 days; my question is whether there is a way to simulate this time lapse, like the Project Reactor methods that use virtual time <https://projectreactor.io/docs/test/snapshot/api/reactor/test/StepVerifier.html#withVirtualTime-java.util.function.Supplier-><issue_comment>username_1: JUnit tests the method as-is (unless you add mocking). If you want, you can extract an internal method such as `toTestInternal`: ``` public int toTest() { try { Thread.sleep(60 * 1_000); } catch (InterruptedException ignored) {} return toTestInternal(); } public int toTestInternal() { return 8; } ``` and test the method you want (`toTestInternal`): ``` @Test public void test() { int actual = toTestInternal(); assertThat(actual).isEqualTo(8); } ``` Upvotes: 0 <issue_comment>username_2: You can achieve that using PowerMock. ``` // This will mock the sleep method PowerMock.mockStatic(Thread.class, methods(Thread.class, "sleep")); PowerMockito.doThrow(new InterruptedException()).when(Thread.class); Thread.sleep(Mockito.anyLong()); ``` At the start of the class, you will need to add this ``` @PrepareForTest(YourClassToWhich_ToTest_MethodBelong.class) ``` Upvotes: 2 [selected_answer]<issue_comment>username_3: I would suggest making the interval a dynamic parameter.
It will save you time: ``` public int toTest(int interval) { try { Thread.sleep(interval); } catch (InterruptedException ignored) {} return 8; } ``` and the test would look like this (passing a short interval so the test runs fast): ``` @Test public void test() { int actual = toTest(60); // sleeps only 60 ms here assertThat(actual).isEqualTo(8); } ``` Upvotes: 0
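Beyond mocking `Thread.sleep` or parameterizing the interval, another common way to "simulate" the time lapse is to hide the time-dependent call behind a small interface and inject a no-op fake in tests. This is only a sketch; the `Sleeper` and `UnitUnderTest` names are illustrative, not from any library:

```java
// Sketch of dependency-injecting the sleep so tests can skip it entirely.
// Names (Sleeper, UnitUnderTest, SleeperDemo) are illustrative.
interface Sleeper {
    void sleep(long millis) throws InterruptedException;
}

class UnitUnderTest {
    private final Sleeper sleeper;

    UnitUnderTest(Sleeper sleeper) {
        this.sleeper = sleeper;
    }

    int toTest() {
        try {
            sleeper.sleep(60 * 1_000); // production wiring really waits a minute
        } catch (InterruptedException ignored) {
        }
        return 8;
    }
}

public class SleeperDemo {
    public static void main(String[] args) {
        // Production code would pass Thread::sleep; a test passes a no-op
        // lambda, so the minute-long wait is skipped entirely.
        UnitUnderTest unit = new UnitUnderTest(millis -> { /* skip waiting */ });
        System.out.println("actual = " + unit.toTest());
    }
}
```

The same idea extends to injecting a `java.time.Clock` for code that reads the current time, which is conceptually how virtual-time helpers such as Reactor's `StepVerifier.withVirtualTime` replace real schedulers with controllable ones.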