source
stringclasses
1 value
task_type
stringclasses
1 value
in_source_id
stringlengths
1
8
prompt
stringlengths
209
40.4k
gold_standard_solution
stringlengths
0
56.7k
verification_info
stringclasses
1 value
metadata
stringlengths
138
225
problem_id
stringlengths
9
10
stackexchange
llm_judgeable_groundtruth_similarity
113188
Below is a question asked on the forum serverfault.com. Provide a good and informational response to it like a helpful human would. Question: I am tweaking my homepage for performance. Currently it handles about 200 requests/second on 3.14.by, which eats 6 SQL queries, and 20 req/second on 3.14.by/forum, which is a phpBB forum. Strangely enough, the numbers are about the same on some VPS and on a dedicated Atom 330 server. Server software is the following: Apache2+mod_php prefork with 4 children (tried different numbers here), php5, APC, nginx, memcached for PHP session storage. MySQL is configured to eat about 30% of available RAM (~150Mb on the VPS, 700Mb on the dedicated server). This looks like there is a bottleneck somewhere not allowing me to go higher; any suggestions? (i.e. I know that doing fewer than 6 SQLs would make it faster, but this does not look like the limiting factor, as mysqld eats no more than a few % in top due to cached queries.) Has anyone tested whether kicking out preforked apache2 and leaving just nginx+php is much faster? Some more benchmarks: Small 40-byte static file: 1484 r/s via nginx+apache2, 2452 if we talk to apache2 directly. Small "Hello world" php script: 458 r/s via nginx+apache2. Update: It appears the bottleneck is MySQL performance on cached data. A page with a single SQL query shows 354 req/sec; with 6 SQLs - 180 req/sec. What do you think I can tweak here?
(I can fork out 100-200Mb for MySQL)

[client]
port   = 3306
socket = /var/run/mysqld/mysqld.sock

[mysqld_safe]
socket = /var/run/mysqld/mysqld.sock
nice   = 0

[mysqld]
default-character-set=cp1251
collation-server=cp1251_general_cs
skip-character-set-client-handshake
user      = mysql
pid-file  = /var/run/mysqld/mysqld.pid
socket    = /var/run/mysqld/mysqld.sock
port      = 3306
basedir   = /usr
datadir   = /var/lib/mysql
tmpdir    = /tmp
skip-external-locking
bind-address       = 127.0.0.1
key_buffer         = 16M
max_allowed_packet = 8M
thread_stack       = 64K
thread_cache_size  = 16
sort_buffer_size   = 8M
read_buffer_size   = 1M
myisam-recover     = BACKUP
max_connections    = 650
table_cache        = 256
thread_concurrency = 10
query_cache_limit  = 1M
query_cache_size   = 16M
expire_logs_days   = 10
max_binlog_size    = 100M

[mysqldump]
quick
quote-names
max_allowed_packet = 8M

[mysql]

[isamchk]
key_buffer = 8M

!includedir /etc/mysql/conf.d/

Now provide the response and nothing else.
Obviously, there is a lot you can try. Your best bet is chasing your logs for queries that don't use indexes (enable logging for those) and other non-optimized queries. I have compiled a huge list of performance related options over the years, so I've included a small subset here for your information - hopefully it helps. Here are some general notes for things you can try (if you haven't already):

MySQL

- query_cache_type=1 - caching of SQL queries is on. If set to 2, queries are only cached if the SQL_CACHE hint is passed to them. Similarly with type 1, you can disable the cache for a particular query with the SQL_NO_CACHE hint
- key_buffer_size=128M (default: 8M) - memory buffer for MyISAM table indexes. On dedicated servers, aim to set the key_buffer_size to at least a quarter, but no more than half, of the total amount of memory on the server
- query_cache_size=64M (default: 0) - size of the query cache
- back_log=100 (default: 50, max: 65535) - the queue of outstanding connection requests. Only matters when there are lots of connections in a short time
- join_buffer_size=1M (default: 131072) - a buffer that's used for full table scans (no indexes)
- table_cache=2048 (default: 256) - should be max_user_connections multiplied by the maximum number of JOINs your heaviest SQL query contains. Use the "open_tables" variable at peak times as a guide. Also look at the "opened_tables" variable - it should be close to "open_tables"
- query_prealloc_size=32K (default: 8K) - persistent memory for statement parsing and execution. Increase if you have complex queries
- sort_buffer_size=16M (default: 2M) - helps with sorting (ORDER BY and GROUP BY operations)
- read_buffer_size=2M (default: 128K) - helps with sequential scans. Increase if there are many sequential scans
- read_rnd_buffer_size=4M - helps MyISAM tables speed up reads after a sort
- max_length_for_sort_data - row size to store instead of a row pointer in the sort file. Can avoid random table reads
- key_cache_age_threshold=3000 (default: 300) - time to keep the key cache in the hot zone (before it's demoted to warm)
- key_cache_division_limit=50 (default: 100) - enables a more sophisticated cache eviction mechanism (two levels). Denotes the percentage to keep for the bottom level
- delay_key_write=ALL - the key buffer is not flushed for the table on every index update, but only when the table is closed. This speeds up writes on keys a lot, but if you use this feature, you should add automatic checking of all MyISAM tables by starting the server with the --myisam-recover=BACKUP,FORCE option
- memlock=1 - lock the process in memory (to reduce swapping in/out)

Apache

- change the spawning method (to another MPM, for example)
- disable logs if possible
- AllowOverride None - whenever possible disable .htaccess. It stops Apache from looking for .htaccess files if they are not used, so it saves a file lookup request
- SendBufferSize - set to OS default. On congested networks, you should set this parameter close to the size of the largest file normally downloaded
- KeepAlive Off (default On) - and install lingerd to properly close network connections; it's faster
- DirectoryIndex index.php - keep the file list as short and absolute as possible
- Options FollowSymLinks - to simplify the file access process in Apache
- avoid using mod_rewrite, or at least complex regexes
- ServerTokens Prod

PHP

- variables_order="GPCS" (if you don't need environment variables)
- register_globals=Off - apart from being a security risk, it also has a performance impact
- keep include_path as minimal as possible (avoids extra filesystem lookups)
- display_errors=Off - disable showing errors. Strongly recommended for all production servers (doesn't display ugly error messages in case of a problem)
- magic_quotes_gpc=Off
- magic_quotes_*=Off
- output_buffering=On
- disable logging if possible
- expose_php=Off
- register_argc_argv=Off
- always_populate_raw_post_data=Off
- place the php.ini file where PHP would look for it first
- session.gc_divisor=1000 or 10000
- session.save_path = "N;/path" - for large sites consider using it. Splits session files into subdirectories

OS Tweaks

- Mount used hard disks with the -o noatime option (no access time). Also add this option to the /etc/fstab file.
- Tweak /proc/sys/vm/swappiness (from 0 to 100) to see what gives the best results
- Use RAM disks - mount --bind -ttmpfs /tmp /tmp
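The very first suggestion above (logging queries that don't use indexes) can be switched on from the config file. A hypothetical my.cnf fragment for a MySQL 5.x setup like the question's (the log path is illustrative):

```ini
[mysqld]
# log anything slower than 1 second, plus every query that does a full scan
log_slow_queries              = /var/log/mysql/mysql-slow.log
long_query_time               = 1
log-queries-not-using-indexes
```

Then watch the slow log while benchmarking and add indexes until it goes quiet.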
{}
{'log_upvote_score': 6, 'links': ['https://serverfault.com/questions/113188', 'https://serverfault.com', 'https://serverfault.com/users/34947/']}
jdg_373341
stackexchange
llm_judgeable_groundtruth_similarity
402118
Below is a question asked on the forum electronics.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: In preparation for the amateur radio exam, I have been following the preparation course material by DJ4UF. In the section about modulation it presents this diagram of a simplified AM modulator. The low frequency signal (NF) and the high frequency carrier wave (HF) are added, the diode cuts off one half-wave, and the resonant circuit "recreates" the previously cut off half-wave. The result is an amplitude-modulated signal. So far so good. What is the purpose of the 47k resistors? The explanatory text mentions that they are necessary to "add the currents in the diode". What would change if we removed them and just directly connected the sources to the diode? Now provide the response and nothing else.
The concept here is that you are adding the signal currents before feeding the sum to the diode; the resistors are there to convert the voltage sources "NF" and "HF" to current sources. You'd get exactly the same effect by adding the signal voltages directly β€” simply connect the "NF" and "HF" boxes in series, without any resistors. The only downside to this is that "NF" and "HF" can't share a common ground, and that's often a desirable feature of a practical system. But some AM transmitters isolate the NF signal with a transformer, which solves that problem. Note that the circuit as given is not at all practical β€” you would not want to feed a parallel-tuned circuit, which has a high impedance at resonance, from a current source. Instead, you would use a series-tuned circuit that keeps the diode cathode close to ground potential.
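The sum-then-rectify behavior described above can be sketched numerically. A minimal Python illustration (the frequencies and amplitudes here are toy values of mine, not from the course material): the sum of the two signals is half-wave rectified by an ideal diode, and the envelope of what remains follows the low-frequency signal.

```python
import math

N = 100_000
rectified = []
for i in range(N):
    t = i / N
    nf = 0.5 * math.sin(2 * math.pi * 5 * t)    # low-frequency "audio" tone (toy value)
    hf = math.sin(2 * math.pi * 500 * t)        # high-frequency carrier (toy value)
    s = nf + hf                                 # what the two 47k resistors sum at the diode
    rectified.append(max(s, 0.0))               # ideal diode: cut off the negative half-wave

# The rectified peaks are tallest where nf is near its maximum; a real tank
# circuit would then restore the symmetric AM waveform from these pulses.
print(max(rectified))  # ≈ 1.5, carrier peak plus audio peak
```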
{}
{'log_upvote_score': 4, 'links': ['https://electronics.stackexchange.com/questions/402118', 'https://electronics.stackexchange.com', 'https://electronics.stackexchange.com/users/190788/']}
jdg_373342
stackexchange
llm_judgeable_groundtruth_similarity
6860282
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I'm using a Dispatcher to update my UI before I handle some resizing. The problem is that the BeginInvoke(DispatcherPriority, new Action) part is where I am stuck. I want to call a method with parameters and I don't know how. That's my current Dispatcher:

void s_SizeChanged(object sender, SizeChangedEventArgs e)
{
    Dispatcher.CurrentDispatcher.BeginInvoke(DispatcherPriority.ApplicationIdle, new Action(test));
}

And this is the method I am calling:

public void test()
{
    foreach (Structures s in ((TreeView)this.cont.Children[0]).Items)
        s.updateRelationLines(this.Data, this.cont.ColumnDefinitions[1]);
}

I just want to replace this.Data and this.cont.ColumnDefinitions[1] with parameters. Now provide the response and nothing else.
You can use a lambda expression for this:

Dispatcher.CurrentDispatcher.BeginInvoke(
    DispatcherPriority.ApplicationIdle,
    new Action(() => test(param1, param2)));

This basically creates an anonymous method

void myMethod()
{
    test(param1, param2);
}

and invokes this method through the dispatcher. Some compiler magic ensures that param1 and param2 are available to this method, even if they are only in the scope of your s_SizeChanged method. More details on this can be found here: Lambda Expressions (C# Programming Guide)
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/6860282', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/867649/']}
jdg_373343
stackexchange
llm_judgeable_groundtruth_similarity
77574
Below is a question asked on the forum electronics.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: I am working on generating a pulse train to control a motor that accepts a pulse train as an input. Each pulse corresponds to a pre-set movement increment; I can set one pulse equal to 1/1000 degree (or whatever), and if I send it 20 pulses, the motor will move 20/1000 degree. The software that performs all the heavy lifting and determines where the motor needs to be commanded to go at any given time is programmed in LabVIEW. This program then sends position and speed commands (as 32-bit integers) to an FPGA, which I would like to use to generate a series of pulses to tell the motor how far and how fast to move. I have a simple pulse generator that just puts out the required number of pulses at the FPGA's clock speed (see diagram below). How can I control the speed of these pulses in my FPGA? I am using an Altera FPGA programmed in Quartus II v9.0. simulate this circuit – Schematic created using CircuitLab Note the inverting terminal for a = b? on the comparator. The FPGA will then output the values of pulse and sign to tell my motor how far to turn and in what direction. Inputs to the FPGA are the integer number of pulses we want to generate, ref[31..00] , and a boolean write flag, writeF . Multiple motors are controlled by one program, thus the need to specify when the data on the bus ref[31..00] is for a particular motor. The most significant bit of the reference value will control the direction of movement, thus err31 is used as the input to the updown terminal. As you can see, the counter is counting the number of pulses generated, using pulse as its clock input, but pulse is only being generated at the FPGA's clock speed. Given an additional input to my FPGA to control pulse rate, can I make the pulse rate variable? 
EDIT: I changed my circuit so that the system clock is going in to the clock input of my counter, and my pulse output is being used as the clock enable ( clock_en ) signal to this counter. Previously I had my pulse output plugged straight in to my clock input, which is potentially bad. I will post my findings here when I have implemented suggestions.

VHDL Variable Counter Solution

I am trying to implement David Kessner's approach using VHDL. Basically I am creating a counter that can increment by numbers other than 1, and using the rollover of this counter to determine when I should generate a pulse. The code looks like this so far:

--****************************************************************************
-- Load required libraries
--****************************************************************************
library ieee;
use ieee.std_logic_1164.all;
use ieee.numeric_std.all;

--****************************************************************************
-- Define the inputs, outputs, and parameters
--****************************************************************************
entity var_count is
    generic(N : integer := 32);  -- for generic counter size
    port(
        inc_i      : in  std_logic_vector(N-1 downto 0);
        load_i     : in  std_logic;
        clk_i      : in  std_logic;
        clear_i    : in  std_logic;
        clk_en_i   : in  std_logic;
        count_en_i : in  std_logic;
        msb_o      : out std_logic
    );
end var_count;

--****************************************************************************
-- Define the behavior of the counter
--****************************************************************************
architecture behavior of var_count is
    -- Define our count variable. No need to initialize in VHDL.
    signal count : unsigned(N-1 downto 0) := to_unsigned(0, N);
    signal incr  : unsigned(N-1 downto 0) := to_unsigned(0, N);
begin
    -- Define our clock process
    clk_proc : process(clk_i, clear_i, load_i)
    begin
        -- Asynchronous clear
        if clear_i = '1' then
            count <= to_unsigned(0, N);
        end if;
        -- Asynchronous load
        if load_i = '1' then
            incr <= unsigned(inc_i);
        end if;
        -- Define processes synch'd with clock.
        if rising_edge(clk_i) and clk_en_i = '1' then
            if count_en_i = '1' then
                -- increment the counter
                -- count <= count + unsigned(inc_i);
                count <= count + incr;
            end if;
        end if;
    end process clk_proc;

    -- Output the MSB for the sake of generating a nice easy square wave.
    msb_o <= count(count'left);
end behavior;

I intend to either output the MSB directly, or to take the MSB from this counter ( msb_o(k) ), pass it through a single-bit D-Q flip flop so that I also have msb_o(k-1) , and output a pulse every time my variable counter rolls over by executing:

PULSE = ~msb_o(k) * msb_o(k-1)

where ~ denotes logical NOT , and * denotes logical AND . This is the first VHDL program I have written, and I wrote it largely using this , this , and this . Does anybody have any recommendations as to how I could improve my code? Unfortunately I am not getting any pulses out of my FPGA still. EDIT: Updated the VHDL code to the current implementation (2013-08-12). Also adding this free book to the list of references. EDIT 2: Updated my code to the (final) working version. Now provide the response and nothing else.
What you want to do is called a Numerically Controlled "Oscillator", or NCO. It works like this... Create a counter that can increment by values other than 1. The inputs to this counter are the master clock, and a value to count by (din). For each clock edge, count <= count + din. The number of bits in din is the same as the number of bits in the counter. The actual count value can be used for many useful things, but what you want to do is super simple. You want to detect every time the counter rolls over, and output a pulse to your motor when that happens. Do this by taking the most significant bit of the counter and running it through a single flip-flop to delay it by one clock. Now you have two signals that I'll call MSB, and MSB_Previous. You know if the counter has rolled over because MSB=0 and MSB_Prev=1. When that condition is true, send a pulse to the motor. To set the pulse frequency, the formula is this: pulse_rate = main_clk_freq * inc_value/2^n_bits Where inc_value is the value that the counter is being incremented by and n_bits is the number of bits in the counter. An important thing to note is that adding bits to the counter does not change the range of the output frequency-- that is always 0 Hz to half of main_clk_freq. But it does change the accuracy that you can generate the desired frequency. Odds are high that you won't need 32-bits for this counter, and that maybe just 10 to 16 bits will be enough. This method of generating pulses is nice because it is super easy, the logic is small and fast, and it can often generate frequencies more accurately and with better flexibility than the type of counter+comparator design that you have in your question. The reason why the logic is smaller is not only because you can get by with a smaller counter, but you do not have to compare the entire output of the counter. You only need the top bit. Also, comparing two large numbers in an FPGA usually requires a lot of LUTs. 
Comparing two 32-bit numbers would require 21 4-input LUTs and 3 logic levels, whereas the NCO design requires 1 LUT, 2 flip-flops, and only 1 logic level. (I'm ignoring the counter, since it is basically the same for both designs.) The NCO approach is much smaller, much faster, much simpler, and yields better results. Update: An alternative approach to making the rollover detector is to simply send out the MSB of the counter to the motor. If you do this, the signal going to the motor will always be a 50/50 duty cycle. Choosing the best approach depends on what kind of pulse your motor needs. Update: Here is a VHDL code snippet for doing the NCO (it relies on ieee.std_logic_unsigned for the "+" on std_logic_vector):

signal count : std_logic_vector(15 downto 0) := (others => '0');
signal inc   : std_logic_vector(15 downto 0) := (others => '0');
signal pulse : std_logic := '0';

. . .

process (clk)
begin
    if rising_edge(clk) then
        count <= count + inc;
    end if;
end process;

pulse <= count(count'high);
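The frequency formula is easy to sanity-check in software. A quick Python sketch (mine, not part of the original answer) models a 16-bit NCO accumulator and counts MSB-falling-edge pulses; over exactly 2**16 clocks the counter wraps exactly inc_value times, which is the claim pulse_rate = clk_freq * inc_value / 2**n_bits.

```python
N_BITS = 16
MOD = 1 << N_BITS

def nco_pulses(inc_value, n_clocks):
    """Count rollover pulses of an N_BITS-wide accumulator after n_clocks ticks."""
    count, pulses = 0, 0
    for _ in range(n_clocks):
        prev_msb = count >> (N_BITS - 1)
        count = (count + inc_value) % MOD
        msb = count >> (N_BITS - 1)
        if prev_msb == 1 and msb == 0:   # MSB fell: the counter rolled over
            pulses += 1
    return pulses

# Over exactly 2**16 clocks the counter wraps inc_value times,
# so pulse_rate = clk_freq * inc_value / 2**n_bits.
print(nco_pulses(1311, MOD))  # → 1311
```

With a 50 MHz clock, inc = 1311 would give 50e6 * 1311 / 65536 ≈ 1.0002 MHz, illustrating the point in the answer that more bits buy frequency accuracy rather than range.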
{}
{'log_upvote_score': 4, 'links': ['https://electronics.stackexchange.com/questions/77574', 'https://electronics.stackexchange.com', 'https://electronics.stackexchange.com/users/26688/']}
jdg_373344
stackexchange
llm_judgeable_groundtruth_similarity
12235305
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: Is there any TSQL script for deleting all foreign key constraints and unique constraints in a SQLServer 2008 database ? Now provide the response and nothing else.
TLDR: Paste the 2 chunks of code below in their respective spots, then change the $defaultOptions array to whatever you want - voila. It doesn't alter any of the FormHelper's functions except adding defaults to Form->create 's inputDefaults .

Explanation & Code: You can extend the FormHelper (easier than it sounds) by making your own custom MyFormHelper :

<?php
// create this file called 'MyFormHelper.php' in your View/Helper folder
App::uses('FormHelper', 'View/Helper');

class MyFormHelper extends FormHelper {

    public function create($model = null, $options = array()) {
        $defaultOptions = array(
            'inputDefaults' => array(
                'div'   => false,
                'label' => false
            )
        );
        if (!empty($options['inputDefaults'])) {
            $options['inputDefaults'] = array_merge($defaultOptions['inputDefaults'], $options['inputDefaults']);
        } else {
            $options = array_merge($defaultOptions, $options);
        }
        return parent::create($model, $options);
    }
}

Then, in your AppController, include the Form helper in the following way (if you already have a $helpers variable, just add 'Form' => ... to it):

public $helpers = array(
    'Form' => array(
        'className' => 'MyForm'
    )
);

This makes it so whenever you call $this->Form , it actually calls your custom 'MyFormHelper' - and the only thing it does is set the inputDefaults if they're not specified, then continue on to do the normal logic found in Cake's FormHelper.
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/12235305', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1095390/']}
jdg_373345
stackexchange
llm_judgeable_groundtruth_similarity
33130
Below is a question asked on the forum electronics.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: I understand what they are, but I have a few questions... What are the multiple layers on a PCB? (I understand 1 and 2 are the front and back, but what about 3, 4, 5, etc.?) In http://www.youtube.com/watch?v=jzs2Zo_mc4c around 2:21 you can see that some of the board is connected around through copper. Doesn't this power all of the paths at once when the border is connected to power? Is circuit design the same on a PCB, except for the copper paths acting as the wires? I'm new to electronics, so I also don't have the best understanding of circuits. This is probably why question 2 confuses me. Now provide the response and nothing else.
(1.) Multilayer PCBs consist of three types of layers: there's the core on which the copper is attached, the so-called prepreg layers which insulate the different copper layers from each other, and finally the copper itself. The number of wires depends on the design's complexity; how many connections you have to make and how much room you have for that. Some layers won't be used for routing signals from one point to another, however. On a four layer board the two inner layers are often used only for ground and power supply. This makes ground more reliable, has a shielding effect and decreases the routing from different components to it; ground will often be the net with the most connections. Those connections between layers are made by vias , drilled metalized holes, so that there's a conductive path between the layers. Most used are through hole vias, even if you only have to connect the top layer with the second one. That's because they're much cheaper to make: you can finish the PCB and drill through all of it at the end. Buried and blind vias require drilling between the different process steps. (2.) If you have a good look you'll see that the components at the top right are not connected to the surrounding ground. (3.) Circuit design is done at the schematic level, which shows in a logical way how connections between components are made. The PCB is a translation of that conceptual level to the real world. Instead of a rectangle representing a microcontroller you have a specific package with pins. On a schematic you're completely free to place parts in a logical way. On the PCB there are constraints. Pin 23 of the microcontroller is next to pin 24, there's nothing you can do about that. That makes that routing a PCB (connecting everything) can be a difficult task, and there are companies whose engineers are specialized in just that.
{}
{'log_upvote_score': 5, 'links': ['https://electronics.stackexchange.com/questions/33130', 'https://electronics.stackexchange.com', 'https://electronics.stackexchange.com/users/10111/']}
jdg_373346
stackexchange
llm_judgeable_groundtruth_similarity
312881
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: How do I actually run a file on localhost?I know it is working, but how do I run a file on it, and how do I verify that the file is in fact running on localhost? Server newbie here, additional questions (I have xampp running Apache 2.2): From your responses it sounds like I have to type in the path in the browser in order to open the file on the localhost. So, if I want to have the file in a subdirectory, I would have to type the full path including the directory? Is there a way to have the browser point to the localhost while I am working on my html, instead of having to type in the path all the time? I am working in Dreamweaver CS3, and want to use include commands (either SSI or PHP), but not have to keep putting my files up on HostGator in order to see if they are working. What is the best way to do this (please be specific, I know nothing). If there is a detailed tutorial anywhere, really appreciate a link. Thank you Now provide the response and nothing else.
Ok, thanks for the more specific info, ppl may remove their downvotes now... What you are proposing is a very common thing to do! You want to run your web application locally without uploading it to your host yet. That's totally fine and that's what your Apache is there for. Your Apache is a web server, meaning its main purpose is to serve HTML, PHP, ASP, etc. files. Some, like PHP, are first sent to the interpreter, and the rendered output is then sent to the browser. All in all: it's just serving pages to your browser (the client). Your web server has a root directory which is wwwroot (IIS) or htdocs (Apache, XAMPP) or something else like public_html, www or html, etc. It depends on your OS and web server. Now if you type http://localhost into your browser, your browser will be directed to this webroot and the server will serve any index.html , index.php , etc. it can find there (in a customizable order). If you have a project called "mytutorial" you can enter http://localhost/mytutorial and the server will show you the index file of your tutorial, etc. If you look at the absolute path of this tutorial folder, it's just a subfolder of your webroot, which is itself located somewhere on your hard drive, but that doesn't matter for your localhost. So the relative path is http://localhost/mytutorial while the absolute path may be c:/webservices/apache/www or c:/xampp/htdocs If you're working with Dreamweaver you can simplify the testing process by setting up your local server as a testing server in your project settings. Try it! It's easy. Once it's done, you can just press the browser icon with any of your files and it will open on localhost.
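The mapping from http://localhost/... to a folder on your hard drive is set by the server's root directive. For an Apache 2.2 setup like XAMPP's, the relevant httpd.conf fragment might look like this (paths are illustrative; XAMPP configures them for you):

```apache
DocumentRoot "C:/xampp/htdocs"
<Directory "C:/xampp/htdocs">
    Options Indexes FollowSymLinks
    AllowOverride All
    Order allow,deny
    Allow from all
</Directory>
```

With that in place, http://localhost/mytutorial/ serves files out of C:/xampp/htdocs/mytutorial/.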
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/312881', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/40091/']}
jdg_373347
stackexchange
llm_judgeable_groundtruth_similarity
527628
Below is a question asked on the forum unix.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: Linux Mint tells me, I only have 622 MB free disk space but there should be some gigabytes left. Looking at the partitions I am told that there are about ten gigabytes unused. I googled the problem and didn't find a solution but I did find the hint that I should check the disk usage with df -h . sudo df -h /homeFilesystem Size Used Avail Use% Mounted on/dev/nvme0n1p8 189G 178G 622M 100% /home The output doesn't make any sense to me: The difference between Size and Used is 11GB, but it only shows 622M as Available. The SSD isn't old, so I wouldn't expect such a discrepancy. What should I do? Now provide the response and nothing else.
If the filesystem is ext4, there are reserved blocks, mostly there to help avoid fragmentation and to keep system daemons working; they are available only to the root user. This setting can be changed live using tune2fs (not all settings can be handled like this while the filesystem is mounted): -m reserved-blocks-percentage Set the percentage of the filesystem which may only be allocated by privileged processes. Reserving some number of filesystem blocks for use by privileged processes is done to avoid filesystem fragmentation, and to allow system daemons, such as syslogd(8), to continue to function correctly after non-privileged processes are prevented from writing to the filesystem. Normally, the default percentage of reserved blocks is 5%. So if you want to lower the reservation to 1% (~ 2GB), thus getting access to ~ 8GB of no-longer-reserved space, you can do this: sudo tune2fs -m 1 /dev/nvme0n1p8 Note: the -m option actually accepts a decimal number as its parameter. You can use -m 0.1 to reserve only about ~200MB (and access most of those previously unavailable 10GB). You can also use the -r option instead to reserve by blocks directly. It's probably not advised to have 0 reserved blocks.
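A quick back-of-the-envelope check (my own arithmetic, not from the original answer) shows why the question's numbers add up: the default 5% reserve on a 189 GiB filesystem is almost exactly the "missing" ~10 GiB.

```python
# A 189 GiB filesystem shows only 622 MiB free while Size - Used suggests ~11 GiB.
# The default 5% root reserve accounts for most of the difference:
size_gib = 189
reserved_gib = size_gib * 5 / 100          # default is -m 5
print(round(reserved_gib, 2))              # → 9.45

# After `tune2fs -m 1`, the reserve shrinks to 1%:
print(round(size_gib * 1 / 100, 2))        # → 1.89
```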
{}
{'log_upvote_score': 6, 'links': ['https://unix.stackexchange.com/questions/527628', 'https://unix.stackexchange.com', 'https://unix.stackexchange.com/users/327870/']}
jdg_373348
stackexchange
llm_judgeable_groundtruth_similarity
988404
Below is a question asked on the forum serverfault.com. Provide a good and informational response to it like a helpful human would. Question: I, like a lot of people, received an email saying to update my RDS instance to use the new rds-ca-2019 certificate for SSL connections (previous being rds-ca-2015 which expires March 5, 2020). Their documentation about the process is a little sparse and says things like "Update your database applications to use the new SSL/TLS certificate." and "Import the certificate into your operating system." with no further details on changes required on the client side. When I initially set things up, I didn't install any certificates and used a vanilla Ubuntu 18.04 EC2 image. The RDS instance was set to use rds-ca-2015 and when I connected to RDS with psql it reported that it was properly using TLSv1.2. If I look at root certificates installed in the OS I find 4 "Amazon Root CA" certs numbered 1 through 4. Those don't expire until 2038 and 2040. So, my question has 2 parts: How did the SSL/TLS properly work initially if I had never installed the RDS certs and intermediary certs provided by Amazon ? If I've changed the RDS database instance to use rds-ca-2019 and it seems to "just work" is there anything more I need to do? Now provide the response and nothing else.
The default sslmode for PostgreSQL is prefer which means it will encrypt the connection with the certificate provided by the server but will not verify it. If I were to change the sslmode setting to verify-ca or verify-full then I would need to install the intermediate certs in a particular directory and then it would do proper verification. As I'm not concerned about a MITM attack on my VPC, I don't think I'll bother switching to the 'verify' modes.
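For completeness: when you do want verification, libpq takes the mode and CA bundle as connection parameters. A sketch of the relevant settings (the file path is wherever you saved the CA bundle downloaded from AWS):

```
sslmode=verify-full
sslrootcert=/path/to/rds-combined-ca-bundle.pem
```

The same values can also be supplied through the PGSSLMODE and PGSSLROOTCERT environment variables.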
{}
{'log_upvote_score': 5, 'links': ['https://serverfault.com/questions/988404', 'https://serverfault.com', 'https://serverfault.com/users/135404/']}
jdg_373349
stackexchange
llm_judgeable_groundtruth_similarity
53109262
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: Three topics in Azure AD I'm constantly confused on: Service Principal Enterprise Application App Registration What is the difference? I can easily go into "App Registrations" and register an "app" while that "app" doesn't even need to exist. All it requires is a URL which can also be totally random. This app registration then becomes a service principal which you can use to connect to Azure to from PowerShell for instance? Why? I don't understand this. Please advise, and as you can probably tell, I'm new to Azure :) Now provide the response and nothing else.
When you write an application as a developer, you will register it in a given tenant and specify its properties. This happens in the App Registration blade in Azure AD. I'll dare an analogy by saying that the app is like a "class" in object oriented languages (with some static properties, which will be common to all instances). By registering the application in that given tenant, if you use the portal, this also automatically creates a service principal for this application, which you can find in the "Enterprise Applications" blade of the Azure portal. To continue with my analogy, the portal creates a kind of instance of that class. This service principal contains information related to both the application and the tenant and its users. For instance, it contains the activity of the users and what they have consented to in particular. Now if, during the app registration / app management, you decide that your application is "multi-tenant", then, when the application is accessed in other tenants, another service principal (remember, this instance) will be created in that tenant. BTW, if you go to the new App Registration (Preview) blade in the Azure portal, when you create an application, you can now see, nicely grouped by categories, all the properties of the app (all the properties which are common to all the service principals). Now if, in the "Overview" tab of the app, you click on the link "Managed application in local directory", you'll get to the corresponding service principal in the same tenant (where you'll see which users have accessed the app and when, where you can grant admin consent - if you are a tenant admin - and see the activity and the audit logs).
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/53109262', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/10568965/']}
jdg_373350
stackexchange
llm_judgeable_groundtruth_similarity
17841
Below is a question asked on the forum mathematica.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: A simple file reading issue is baffling me here. I am reading a around 200 MB textual file in .dat format. Now the file contains many lines but to phrase my problem I am taking here first two lines from the example file. str = "Tue 1 Jan 2013 23 : 00 : 01; 17; {}; 32.5; 0.\nTue 2 Jan 2013 2 : 20 : 01; 47; {3,4}; 3.5; 110."; Now I want to use ; and newline \n as my RecordSeparators and it works as expected. ReadList[StringToStream[str],{Record, Record, Record, Record, Record},RecordSeparators -> {"\n", ";"}] {{Tue 1 Jan 2013 23 : 00 : 01, 17, {}, 32.5, 0.},{Tue 2 Jan 2013 2 : 20 : 01, 47, {3,4}, 3.5, 110.}} But if I want to specify the Type of each separate Record using for example {String, Number, Expression, Number, number} above code fails to work. The error is the following. Read::readn: Invalid real number found when reading from StringToStream[Tue 1 Jan 2013 23 : 00 : 01; 17; {}; 32.5; 0. Tue 2 Jan 2013 2 : 20 : 01; 47; {3,4}; 3.5; 110.]. >> Is this problem solvable or I need to change the formatting of my input .dat file in a way so that it works with ReadList ? PS: If I need to change the file format what will be best separator to use in this context? Now provide the response and nothing else.
Your problem is that you've got a slight misunderstanding of the different types of items that ReadList can read. That's OK, it can be a little confusing. To begin with: String , Number , Expression , etc. are not sub-types of Record . They are all separate types with their own rules for how they are read. The RecordSeparators option is only applied to Records and Words. Probably if you have some complicated input format whose parsing is best controlled by RecordSeparators/WordSeparators, you should just use Record/Word types, which will give you strings; afterwards, convert the ones you know to be numeric by using ToExpression . On the other hand, if you want to gain power over this area of Mathematica, read on. BASIC CONCEPTS OF READ Let's make up some terms to help explain things. At their base, the Mathematica functions Read , ReadList , and Skip read in ITEMS from an input stream. An Item is a number like 3.14159e-26 , or a string like "peg and awl" . There are different TYPES of Items: Record , Word , String , Number , Real , Character , and Byte . These Types correspond to either a string or a number, with different rules for how the input is parsed. There is also an Expression Type which corresponds to an Item which is a general Mathematica expression of any form. WHAT ARE THE ITEMS AND HOW DO THEY WORK? The simplest cases: a Character is a single character from the stream, represented as a one-letter string. a Byte is a single character from the stream, represented as an integer that is the Character Code of the character. Both Character and Byte have the same rule -- read one character -- but different representations, string vs. number. To describe the other Item Types, you need to understand that they are all assembled from sequences of characters read from the stream, with some particular character that marks their end. That character is not part of the Item being read! It is a TERMINATOR, or TERMINATING CHARACTER. 
We say that the Item was TERMINATED by a particular character in the stream. a Record is a string, a sequence of characters terminated by a RecordSeparator . a Word is a string, a sequence of characters terminated by a WordSeparator , RecordSeparator , or TokenWord . a String is a sequence of characters terminated by a newline ( \n character). Basically it's a LINE of text input, starting at the current stream position. a Number is any sequence of characters that can be interpreted as a number (in Fortran syntax), terminated by any character that can't be part of the number. Any whitespace (spaces, newlines, tabs) preceding the number is quietly skipped over first. a Real is the same as a Number but it's always a floating-point value, never an integer. an Expression is a sequence of one or more newline-terminated lines that form a parseable Mathematica expression. It's terminated by whatever newline ends the last line of the expression. If you've ever typed in a multi-line input to the raw text kernel, you know how this works. Records, Words, Strings, and Characters become Mathematica strings. Numbers, Reals, and Bytes become Mathematica numbers. Expressions become Mathematica expressions. Numbers, Reals, Strings, and Expressions pay no attention to RecordSeparators and WordSeparators. They have their own rules for when they stop taking characters from the stream. (The end of the stream, represented in Mathematica by the symbol EndOfFile , is nearly always a terminator. It's not a character, though.) OBJECTS: GROUPS OF ITEMS I have just told you the only Types of Items that can be read. However, there's another term that has to be introduced. The second argument of Read , ReadList , and Skip -- the input specification -- can be a complex expression which contains one or more of these Types. Let's call that an OBJECT. For instance, Read[stream, {String, Number, Plus[Number, Real], Hold[Expression]}] reads an Object: a sequence of five Items. 
Several of the Items are placed inside larger expressions, and the whole thing is placed inside a List head. The degenerate case of an Object is a single naked Item: ReadList[stream, Byte] If you don't specify a second argument to Read, ReadList, or Skip, it defaultsto Expression. ReadList[stream] == ReadList[stream, Expression] Read, ReadList, and Skip proceed left to right through the Object; each Item Type they encounter causes an Item to be read from the stream. As I listed above, each Type has its own rules for how many characters it will snatch up, what it will do with them, and when it will stop. If you are constructing complex Objects consisting of several Item Types, you need to know especially when will each one stop . This requires understanding the TERMINATORS for each type. Just as importantly, you need to know what is done with those terminating characters. WHAT HAPPENS TO THE CHARACTERS THAT TERMINATE AN ITEM Terminating characters are not part of the Item that is read. They simply mark that Item's end in the stream. Different Types apply different rules to how they treat the terminator -- that is, where they leave the position of the stream pointer after they are done. Bytes and Characters don't have terminators, of course. Expressions have terminators that are newlines. The stream pointer is left sitting at the newline. An Expression like 1+2*3/4-5 has three newlines in it, at the end of each line. The newline after 5 is the terminator for this Expression, and after Read[stream, Expression] that character's position is the stream's position. StreamPosition[stream] == 11 . If you followed Read[Expression] with Read[Character] you'd get a \n . Strings also have newline terminators. But they CONSUME the newline, skipping over it, leaving the stream pointer after it. The newline character is not part of the String, but if you read a Character after reading a String you wouldn't get a \n , you'd get whatever is at the beginning of the next line. 
Numbers, like Expressions, do not consume their terminating characters. They leave the stream pointer at that character, whatever it is. For instance, if you read a Number and then a Character from "64+32*3" , the Number would be 64 , and the Character would be "+" . I think you can see why this is what you want. Records and Words leave the stream pointer pointing at whatever character terminated them. This character is a RecordSeparator or WordSeparator; only Records or Words care about those options. However, if you then read another Record or Word subsequently, the stream will first SKIP OVER the RecordSeparator or WordSeparator that the stream is pointing to, the terminator for the previous Record or Word. Then it will proceed to read the next Record or Word. (Exception: this skipping does not happen if you're about to Read another Word and the separator was a TokenWord .) If the input stream were an,a,tev,ka0123456789 and RecordSeparators->"," , then reading one Record would give you "an" , and the stream position would be 2. If it reads another Record then it will skip the comma, move to position 3, and then read "a" . The stream position would be left at 4, the second comma. In general this is what you want. You want to be able to read multiple Records without having the terminating characters interfere, but you also want to be able to grab those characters if desired. You might have RecordSeparators->{"+", "-", "*", "/"} , and you need to inspect the Character after the Record to find out which particular separator stopped it. I hope this is an adequate explanation. I am not going to talk about Record and Word behavior when you have left-and-right matched delimiters as RecordSeparators or WordSeparators (like parentheses); nor NullRecords and NullWords ; nor RecordLists ; and there's one very useful special case where Numbers can consume RecordSeparator terminators. 
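To tie this back to the original question, the pragmatic route mentioned at the top (read everything as Records, then convert the fields you know to be numeric with ToExpression) can be sketched like this, using the asker's sample input:

```mathematica
str = "Tue 1 Jan 2013 23 : 00 : 01; 17; {}; 32.5; 0.\nTue 2 Jan 2013 2 : 20 : 01; 47; {3,4}; 3.5; 110.";
rows = ReadList[StringToStream[str], {Record, Record, Record, Record, Record},
   RecordSeparators -> {"\n", ";"}];
(* every field is a string at this point; convert all but the date field *)
parsed = Join[{First[#]}, ToExpression[Rest[#]]] & /@ rows
```

ToExpression is Listable, so applying it to the list of remaining fields converts each one; the date stays a string, and you can post-process it separately if needed.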
Please let me know if there's anything unclear and I'll hack on this response to make it clearer.
{}
{'log_upvote_score': 6, 'links': ['https://mathematica.stackexchange.com/questions/17841', 'https://mathematica.stackexchange.com', 'https://mathematica.stackexchange.com/users/240/']}
jdg_373351
stackexchange
llm_judgeable_groundtruth_similarity
10792576
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I am trying to use the java bindings for libsvm: http://www.csie.ntu.edu.tw/~cjlin/libsvm/ I have implemented a 'trivial' example which is easily linearly separable in y. The data is defined as: double[][] train = new double[1000][]; double[][] test = new double[10][];for (int i = 0; i < train.length; i++){ if (i+1 > (train.length/2)){ // 50% positive double[] vals = {1,0,i+i}; train[i] = vals; } else { double[] vals = {0,0,i-i-i-2}; // 50% negative train[i] = vals; } } Where the first 'feature' is the class and the training set is similarly defined. To train the model: private svm_model svmTrain() { svm_problem prob = new svm_problem(); int dataCount = train.length; prob.y = new double[dataCount]; prob.l = dataCount; prob.x = new svm_node[dataCount][]; for (int i = 0; i < dataCount; i++){ double[] features = train[i]; prob.x[i] = new svm_node[features.length-1]; for (int j = 1; j < features.length; j++){ svm_node node = new svm_node(); node.index = j; node.value = features[j]; prob.x[i][j-1] = node; } prob.y[i] = features[0]; } svm_parameter param = new svm_parameter(); param.probability = 1; param.gamma = 0.5; param.nu = 0.5; param.C = 1; param.svm_type = svm_parameter.C_SVC; param.kernel_type = svm_parameter.LINEAR; param.cache_size = 20000; param.eps = 0.001; svm_model model = svm.svm_train(prob, param); return model;} Then to evaluate the model I use: public int evaluate(double[] features) { svm_node node = new svm_node(); for (int i = 1; i < features.length; i++){ node.index = i; node.value = features[i]; } svm_node[] nodes = new svm_node[1]; nodes[0] = node; int totalClasses = 2; int[] labels = new int[totalClasses]; svm.svm_get_labels(_model,labels); double[] prob_estimates = new double[totalClasses]; double v = svm.svm_predict_probability(_model, nodes, prob_estimates); for (int i = 0; i < totalClasses; i++){ 
System.out.print("(" + labels[i] + ":" + prob_estimates[i] + ")"); } System.out.println("(Actual:" + features[0] + " Prediction:" + v + ")"); return (int)v;} Where the passed array is a point from the testing set. The results are always returning class 0.With the exact results being: (0:0.9882998314585194)(1:0.011700168541480586)(Actual:0.0 Prediction:0.0)(0:0.9883952943701599)(1:0.011604705629839989)(Actual:0.0 Prediction:0.0)(0:0.9884899803606306)(1:0.011510019639369528)(Actual:0.0 Prediction:0.0)(0:0.9885838957058696)(1:0.011416104294130458)(Actual:0.0 Prediction:0.0)(0:0.9886770466322342)(1:0.011322953367765776)(Actual:0.0 Prediction:0.0)(0:0.9870913229268679)(1:0.012908677073132284)(Actual:1.0 Prediction:0.0)(0:0.9868781382588805)(1:0.013121861741119505)(Actual:1.0 Prediction:0.0)(0:0.986661444476744)(1:0.013338555523255982)(Actual:1.0 Prediction:0.0)(0:0.9864411843906802)(1:0.013558815609319848)(Actual:1.0 Prediction:0.0)(0:0.9862172999068877)(1:0.013782700093112332)(Actual:1.0 Prediction:0.0) Can someone explain why this classifier is not working?Is there a step I have messed up, or a step I am missing? Thanks Now provide the response and nothing else.
It seems to me that your evaluate method is wrong: you create a single svm_node, overwrite it in the loop, and pass the model a one-element array, so the prediction only ever sees one feature. Build one node per feature instead. Should be something like this:

public double evaluate(double[] features, svm_model model) {
    svm_node[] nodes = new svm_node[features.length-1];
    for (int i = 1; i < features.length; i++) {
        svm_node node = new svm_node();
        node.index = i;
        node.value = features[i];
        nodes[i-1] = node;
    }
    int totalClasses = 2;
    int[] labels = new int[totalClasses];
    svm.svm_get_labels(model, labels);
    double[] prob_estimates = new double[totalClasses];
    double v = svm.svm_predict_probability(model, nodes, prob_estimates);
    for (int i = 0; i < totalClasses; i++) {
        System.out.print("(" + labels[i] + ":" + prob_estimates[i] + ")");
    }
    System.out.println("(Actual:" + features[0] + " Prediction:" + v + ")");
    return v;
}
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/10792576', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1220022/']}
jdg_373352
stackexchange
llm_judgeable_groundtruth_similarity
3735321
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I have a problem creating some form of hierarchy with different object types. I have a class which has a member of another class, like this: class A{public: A(){} ~A(){} void addB(B* dep){ child = dep; dep->addOwner(this); } void updateChild(){ child->printOwner(); } void print(){ printf("Printing..."); }private: B* child;}; And this is class B: class B{public: void addOwner(A* owner){ ownerObject = owner; } //ISNT WORKING void printOwner(){ ownerObject->print(); }private: A* ownerObject;}; Calling a function of "B" out of class "A" works just fine but trying it vice versa gives a compiler error because A is not defined in B. It actually is by using an include and a forward declaration, but I guess its a cross reference problem which the compiler can not solve. Is there any chance to solve this problem or should I rethink my design? Now provide the response and nothing else.
You say that you already solved your circular dependency problem by using a forward declaration of A instead of including the header where A is defined, so you already know how to avoid circular includes. However, you should be aware of what is possible and what is not with incomplete types (i.e. types that have been forward declared). In your case, you try to call the member function print on an object that has an incomplete type; the compiler knows nothing about this type except that it will be defined at some point, so it does not allow you to do this. The solution is to remove the implementation of the printOwner member function from the B header and put it into an implementation file:

//B.hpp
class A; // forward declaration

class B
{
  public:
    void addOwner(A* owner);
    void printOwner() const; // I think this member function could be const
  private:
    A* ownerObject;
};

//B.cpp
#include "B.hpp"
#include "A.hpp" // here we "import" the definition of A

void B::addOwner(A * owner)
{
    ownerObject = owner;
}

void B::printOwner() const
{
    ownerObject->print(); // A is complete now, so we can use its member functions
}

You could possibly do the same thing in the A header.
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/3735321', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/450556/']}
jdg_373353
stackexchange
llm_judgeable_groundtruth_similarity
31888115
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I have lost the support for JavaScript files inside Visual Studio 2015 after updating the TypeScript tools from version 1.5.3 to 1.5.4. By support for JavaScript I mean intellisense, syntax coloring and the JavaScript options inside Options menu -> Text Editor. My js files look like txt files now. I have already tried many things: The steps in this answer: Visual Studio 2015 CTP - Javascript support missing Repair TypeScript Tools Repair Microsoft Asp.Net and Web Tools 2015 Repair entire Visual Studio 2015 Nothing worked. Does anyone knows a solution for this problem? Now provide the response and nothing else.
This is a known bug that we have fixed in VS 2015, however you could have been affected if you installed the "Universal Windows App Development Tools" during a specific period of time. We will soon be publishing a KB with the following workaround: If the "Universal Windows App Development Tools" are still installed: Go to Programs and Features, select Visual Studio 2015, click Change. In Visual Studio setup, click Modify. Deselect the feature "Universal Windows App Development Tools". Select "Universal Windows App Development Tools" again, and click Update. If you have already uninstalled the "Universal Windows App Development Tools": Reinstall "Universal Windows App Development Tools". Or, take the following steps to reinstall the JavaScript project system and language service: Download the installer for your edition of Visual Studio, e.g. vs_community.exe . Open a CMD window, and run the following command: vs_community.exe /modify /installselectableitems JavaScript_Hidden /passive Sorry for the trouble. Hope that helps.
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/31888115', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1454888/']}
jdg_373354
stackexchange
llm_judgeable_groundtruth_similarity
42261524
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I can scroll to 200px using the following btn.addEventListener("click", function(){ window.scrollTo(0,200);}) But I want a smooth scroll effect. How do I do this? Now provide the response and nothing else.
2018 Update Now you can use just window.scrollTo({ top: 0, behavior: 'smooth' }) to get the page scrolled with a smooth effect.

const btn = document.getElementById('elem');
btn.addEventListener('click', () => window.scrollTo({
  top: 400,
  behavior: 'smooth',
}));

#x {
  height: 1000px;
  background: lightblue;
}

<div id='x'>
  <button id='elem'>Click to scroll</button>
</div>

Older solutions You can do something like this:

var btn = document.getElementById('x');
btn.addEventListener("click", function() {
  var i = 10;
  var int = setInterval(function() {
    window.scrollTo(0, i);
    i += 10;
    if (i >= 200) clearInterval(int);
  }, 20);
})

body {
  background: #3a2613;
  height: 600px;
}

<button id='x'>click</button>

ES6 recursive approach:

const btn = document.getElementById('elem');
const smoothScroll = (h) => {
  let i = h || 0;
  if (i < 200) {
    setTimeout(() => {
      window.scrollTo(0, i);
      smoothScroll(i + 10);
    }, 10);
  }
}
btn.addEventListener('click', () => smoothScroll());

body {
  background: #9a6432;
  height: 600px;
}

<button id='elem'>click</button>
{}
{'log_upvote_score': 9, 'links': ['https://Stackoverflow.com/questions/42261524', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/6275251/']}
jdg_373355
stackexchange
llm_judgeable_groundtruth_similarity
52470044
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: Suddenly when Syncing Gradle, I get this error: WARNING: API 'variant.getJavaCompile()' is obsolete and has been replaced with 'variant.getJavaCompileProvider()'. It will be removed at the end of 2019. For more information, see https://d.android.com/r/tools/task-configuration-avoidance Affected Modules: app I've got this build.gradle for the app module: apply plugin: 'com.android.application'apply plugin: 'kotlin-android'apply plugin: 'kotlin-android-extensions'apply plugin: 'com.google.gms.google-services'apply plugin: 'io.fabric'android { compileSdkVersion 28 buildToolsVersion "28.0.2" defaultConfig { applicationId "..." minSdkVersion 21 targetSdkVersion 28 versionCode 1 versionName "..." testInstrumentationRunner "androidx.test.runner.AndroidJUnitRunner" versionNameSuffix = version_suffix [...] } buildTypes { release { minifyEnabled false proguardFiles getDefaultProguardFile('proguard-android-optimize.txt'), 'proguard-rules.pro' [...] } debug { [...] } }}dependencies { implementation fileTree(include: ['*.jar'], dir: 'libs') implementation "org.jetbrains.kotlin:kotlin-stdlib-jdk7:1.2.61" implementation 'androidx.appcompat:appcompat:1.0.0-rc02' implementation 'androidx.constraintlayout:constraintlayout:1.1.3' implementation "com.android.support:preference-v7:28.0.0" testImplementation 'junit:junit:4.12' androidTestImplementation 'androidx.test:runner:1.1.0-alpha4' androidTestImplementation 'androidx.test.espresso:espresso-core:3.1.0-alpha4' implementation 'com.google.android.material:material:1.0.0-rc02' [...]} I can compile the app correctly, but it's a bit bothering, and as I see it, something will stop working at the end of 2019. Any ideas of what is it and how to solve it? Now provide the response and nothing else.
I faced this issue after updating to 3.3.0. If you are not doing what the error states in your gradle file, it is some plugin that still hasn't updated to the newer API that causes this. To figure out which plugin it is, do the following (as explained in "Better debug info when using obsolete API" of the 3.3.0 announcement): Add android.debug.obsoleteApi=true to your gradle.properties file, which will log the error with more details. Try again and read the log details. There will be a trace of the "problematic" plugin. When you identify it, try to disable it and see if the issue is gone; just to be sure, go to the github page of the plugin and create an issue containing the detailed log and a clear description, so you help the developers fix it for everyone faster. Be patient while they fix it, or fix it yourself and create a PR for the devs. Hope it helps others
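For reference, the first step is just a one-line addition to the project-level gradle.properties (assuming the file sits in the project root, next to the top-level build.gradle):

```properties
# gradle.properties (project root)
android.debug.obsoleteApi=true
```

After the next sync or build, the warning in the log is expanded with a stack trace pointing at the plugin that still calls the obsolete API.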
{}
{'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/52470044', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/5717211/']}
jdg_373356
stackexchange
llm_judgeable_groundtruth_similarity
1850653
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: The Mean Absolute Deviation of the normal distribution is simply $$\sqrt{\frac{2}{\pi}}\sigma,$$ where $\sigma$ is the standard deviation of the normal distribution. ( Wikipedia , Mathworld .) How do I prove this? Now provide the response and nothing else.
Let $X\sim\mbox{N}\left(\mu,\sigma^{2}\right)$. So as usual the PDFis given by $$f_{X}(a)=\frac{1}{\sigma\sqrt{2\pi}}e^{-\left(\frac{a-\mu}{\sigma\sqrt{2}}\right)^{2}}.$$ The mean absolute deviation is \begin{alignat*}{1}\mathbf{E}\left[\left|X-\mu\right|\right] & =\int_{-\infty}^{\infty}\left|a-\mu\right|f_{X}(a)da\\ & =\int_{-\infty}^{\mu}\left(\mu-a\right)f_{X}(a)da+\int_{\mu}^{\infty}\left(a-\mu\right)f_{X}(a)da\\ & \overset{1}{=}2\int_{\mu}^{\infty}\left(a-\mu\right)f_{X}(a)da\\ & =2\int_{\mu}^{\infty}\frac{a-\mu}{\sigma\sqrt{2\pi}}e^{-\left(\frac{a-\mu}{\sigma\sqrt{2}}\right)^{2}}da\\ & \overset{2}{=}\frac{2}{\sqrt{\pi}}\int_{0}^{\infty}be^{-b^{2}}\sigma\sqrt{2}db\\ & =2\sqrt{\frac{2}{\pi}}\sigma\int_{0}^{\infty}be^{-b^{2}}db\\ & =2\sqrt{\frac{2}{\pi}}\sigma\left[\frac{e^{-b^{2}}}{-2}\right]_{0}^{\infty}\\ & =\sqrt{\frac{2}{\pi}}\sigma\left[e^{0}-e^{-\infty}\right]\\ & =\sqrt{\frac{2}{\pi}}\sigma.\end{alignat*} $\overset{1}{=}$ uses: Normal distribution is symmetric about themean $\mu$. $\overset{2}{=}$ uses the substitution: $b=\frac{a-\mu}{\sigma\sqrt{2}}$.(Thus, $\sigma\sqrt{2}db=da$. Also, $a=\mu$ $\iff$ $b=0$.)
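As a quick numerical sanity check of the result (not part of the proof), one can approximate $\mathbf{E}\left[\left|X-\mu\right|\right]$ by simple quadrature and compare it against $\sqrt{2/\pi}\,\sigma$; the choice of $\mu$ and $\sigma$ below is arbitrary:

```python
import math

mu, sigma = 1.3, 2.0

def pdf(x):
    # normal density with mean mu and standard deviation sigma
    return math.exp(-((x - mu) / (sigma * math.sqrt(2))) ** 2) / (sigma * math.sqrt(2 * math.pi))

# crude composite rule over mu +/- 10 sigma; the tails beyond that are negligible
n = 100_000
lo, hi = mu - 10 * sigma, mu + 10 * sigma
h = (hi - lo) / n
mad = h * sum(abs(lo + i * h - mu) * pdf(lo + i * h) for i in range(n + 1))

print(mad, math.sqrt(2 / math.pi) * sigma)  # the two values agree to high precision
```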
{}
{'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/1850653', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/-1/']}
jdg_373357
stackexchange
llm_judgeable_groundtruth_similarity
10694689
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: After reading How to initialize an array in C , in particular: Don't overlook the obvious solution, though: int myArray[10] = { 5, 5, 5, 5, 5, 5, 5, 5, 5, 5 }; I tried something like this: #include <iostream>class Something {private:int myArray[10];public:Something() { myArray[10] = { 5, 5, 5, 5, 5, 5, 5, 5, 5, 5 };}int ShowThingy(int what) { return myArray[what];}~Something() {}};int main () { Something Thing; std::cerr << Thing.ShowThingy(3);} And I get: ..\src\Something.cpp: In constructor 'Something::Something()':..\src\Something.cpp:10:48: error: cannot convert '<brace-enclosed initializer list>' to 'int' in assignment The obvious in this case is not so obvious. I really would like the initiation of my array to be more dynamic as well. I tired: private: int * myArray;public: Something() { myArray = new int [10]; myArray = { 5, 5, 5, 5, 5, 5, 5, 5, 5, 5 };} This looked funky to me to, and so to the compiler: ..\src\Something.cpp: In constructor 'Something::Something()':..\src\Something.cpp:11:44: error: cannot convert '<brace-enclosed initializer list>' to 'int*' in assignment This also did not work: private:int myArray[10] = { 5, 5, 5, 5, 5, 5, 5, 5, 5, 5 }; with: ..\src\Something.cpp:6:20: error: a brace-enclosed initializer is not allowed here before '{' token ..\src\Something.cpp:6:51: sorry, unimplemented: non-static data member initializers ..\src\Something.cpp:6:51: error: 'constexpr' needed for in-class initialization of static data member 'myArray' of non-integral type I have been doing really good and learning what does not work, but not so good learning what does work. So, how do I used initialization lists {value, value, value} for an array inside a class? I have been trying to figure out how to do this for some time now and am very stuck, I have a number of these kinds of lists I need to make for my app. 
Now provide the response and nothing else.
You need to initialize the array in the constructor initialization list:

#include <iostream>

class Something {
private:
    int myArray[10];
public:
    Something()
    : myArray { 5, 5, 5, 5, 5, 5, 5, 5, 5, 5 }
    {}
    int ShowThingy(int what) {
        return myArray[what];
    }
    ~Something() {}
};

int main () {
    Something Thing;
    std::cerr << Thing.ShowThingy(3);
}

..\src\Something.cpp:6:51: sorry, unimplemented: non-static data member initializers

C++11 also adds support for inline initialization of non-static member variables, but as the above error message states, your compiler has not implemented this yet.
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/10694689', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1261000/']}
jdg_373358
stackexchange
llm_judgeable_groundtruth_similarity
70071608
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: In my weather-app I am using npm run deploy command in terminal But it gives this error: Failed to get remote.origin.url (task must either be run in a git repository with a configured origin remote or must be configured with the "repo" option).npm ERR! code ELIFECYCLEnpm ERR! errno 1npm ERR! weather_app@0.1.0 deploy: `gh-pages -d build`npm ERR! Exit status 1npm ERR! npm ERR! Failed at the weather_app@0.1.0 deploy script.npm ERR! This is probably not a problem with npm. There is likely additional logging output above.npm ERR! A complete log of this run can be found in:npm ERR! /home/pankaj/.npm/_logs/2021-11-22T19_38_21_874Z-debug.log Also I don't know which command should i write to get access to my git repository which is already pushed on my github account... Now provide the response and nothing else.
This is a conflict between @apollo/client v3.5.4 and RN metro bundler. As a workaround until this issue is resolved, you can configure Metro by creating a metro.config.js file in the root of your React Native project with following content: const { getDefaultConfig } = require("metro-config");const { resolver: defaultResolver } = getDefaultConfig.getDefaultValues();exports.resolver = { ...defaultResolver, sourceExts: [ ...defaultResolver.sourceExts, "cjs", ],}; This workaround was posted on Apollo Github Releases page here .
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/70071608', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/17131683/']}
jdg_373359
stackexchange
llm_judgeable_groundtruth_similarity
68403
Below is a question asked on the forum stats.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: I have data for motor vehicle crashes by hour of the day. As you would expect, they are high in the middle of the day and peak at rush-hour. ggplot2's default geom_density smooths it out nicely A subset of the data, for drink-drive-related crashes, is high at either end of the day (evenings and early mornings) and highest at the extremes. But ggplot2's default geom_density still dips at the right-hand extreme. What to do about this? The aim is merely visualisation -- no need (is there?) for robust statistical analysis. x <- structure(list(hour = c(14, 1, 1, 9, 2, 11, 20, 5, 22, 13, 21, 2, 22, 10, 18, 0, 2, 1, 2, 15, 20, 23, 17, 3, 3, 16, 19, 23, 3, 4, 4, 22, 2, 21, 20, 1, 19, 18, 17, 23, 23, 3, 11, 4, 23, 4, 7, 2, 3, 19, 2, 18, 3, 17, 1, 9, 19, 23, 9, 6, 2, 1, 23, 21, 22, 22, 22, 20, 1, 21, 6, 2, 22, 23, 19, 17, 19, 3, 22, 21, 4, 10, 17, 23, 3, 7, 19, 16, 2, 23, 4, 5, 1, 20, 7, 21, 19, 2, 21) , count = c(1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L)) , .Names = c("hour", "count") , row.names = c(8L, 9L, 10L, 29L, 33L, 48L, 51L, 55L, 69L, 72L, 97L, 108L, 113L, 118L, 126L, 140L, 150L, 171L, 177L, 184L, 202L, 230L, 236L, 240L, 242L, 261L, 262L, 280L, 284L, 286L, 287L, 301L, 318L, 322L, 372L, 380L, 385L, 432L, 448L, 462L, 463L, 495L, 539L, 557L, 563L, 566L, 570L, 577L, 599L, 605L, 609L, 615L, 617L, 624L, 663L, 673L, 679L, 682L, 707L, 730L, 733L, 746L, 754L, 757L, 762L, 781L, 793L, 815L, 817L, 823L, 826L, 856L, 864L, 869L, 877L, 895L, 899L, 918L, 929L, 937L, 962L, 963L, 978L, 
980L, 981L, 995L, 1004L, 1005L, 1007L, 1008L, 1012L, 1015L, 1020L, 1027L, 1055L, 1060L, 1078L, 1079L, 1084L) , class = "data.frame")ggplot(x, aes(hour)) + geom_bar(binwidth = 1, position = "dodge", fill = "grey") + geom_density() + aes(y = ..count..) + scale_x_continuous(breaks = seq(0,24,4)) Happy for anyone with better stats vocabulary to edit this question, especially the title and tags. Now provide the response and nothing else.
To make a periodic smooth (on any platform), just append the data to themselves, smooth the longer list, and cut off the ends. Here is an R illustration: y <- sqrt(table(factor(x[,"hour"], levels=0:23)))y <- c(y,y,y)x.mid <- 1:24; offset <- 24plot(x.mid-1, y[x.mid+offset]^2, pch=19, xlab="Hour", ylab="Count")y.smooth <- lowess(y, f=1/8)lines(x.mid-1, y.smooth$y[x.mid+offset]^2, lwd=2, col="Blue") (Because these are counts I chose to smooth their square roots; they were converted back to counts for plotting.) The span in lowess has been shrunk considerably from its default of f=2/3 because (a) we are now processing an array three times longer, which should cause us to reduce $f$ to $2/9$, and (b) I want a fairly local smooth so that no appreciable endpoint effects show up in the middle third. It has done a pretty good job with these data. In particular, the anomaly at hour 0 has been smoothed right through.
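Since the trick (triple the series, smooth, keep the middle third) is platform-independent, here is a sketch of the same idea in Python, with a plain moving average standing in for lowess and made-up hourly counts:

```python
import math

# hypothetical crash counts for hours 0..23
counts = [5, 3, 1, 0, 0, 1, 2, 4, 6, 5, 4, 4,
          5, 5, 6, 7, 8, 9, 8, 7, 6, 6, 5, 5]

y = [math.sqrt(c) for c in counts]   # smooth the square roots, as in the R version
y3 = y * 3                           # append the series to itself twice
k = 2                                # moving-average half-width (the "span")

# avg3[j] is the average of the window centered on y3[j + k]
avg3 = [sum(y3[i - k:i + k + 1]) / (2 * k + 1)
        for i in range(k, len(y3) - k)]

offset = 24 - k                      # index in avg3 of hour 0 of the middle copy
smooth = [v * v for v in avg3[offset:offset + 24]]  # cut the ends, square back to counts

print(len(smooth))  # 24 smoothed values; hour 0 already "sees" hours 22 and 23
```

Because the window at hour 0 of the middle copy reaches back into the first copy's hours 22 and 23, the smooth wraps around midnight with no endpoint dip.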
{}
{'log_upvote_score': 4, 'links': ['https://stats.stackexchange.com/questions/68403', 'https://stats.stackexchange.com', 'https://stats.stackexchange.com/users/26553/']}
jdg_373360
stackexchange
llm_judgeable_groundtruth_similarity
5156664
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I really like the ExpandoObject while compiling a server-side dynamic object at runtime, but I am having trouble flattening this thing out during JSON serialization. First, I instantiate the object:

dynamic expando = new ExpandoObject();
var d = expando as IDictionary<string, object>;
expando.Add("SomeProp", SomeValueOrClass);

So far so good. In my MVC controller, I want to then send this down as a JsonResult, so I do this:

return new JsonResult(expando);

This serializes the JSON into the below, to be consumed by the browser:

[{"Key":"SomeProp", "Value": SomeValueOrClass}]

BUT, what I'd really like is to see this:

{SomeProp: SomeValueOrClass}

I know I can achieve this if I use dynamic instead of ExpandoObject -- JsonResult is able to serialize the dynamic properties and values into a single object (with no Key or Value business), but the reason I need to use ExpandoObject is because I don't know all of the properties I want on the object until runtime, and as far as I know, I cannot dynamically add a property to a dynamic without using an ExpandoObject. I may have to sift through the "Key", "Value" business in my javascript, but I was hoping to figure this out prior to sending it to the client. Thanks for your help! Now provide the response and nothing else.
You could also make a special JSON converter that works only for ExpandoObject, and then register it in an instance of JavaScriptSerializer. This way you can serialize arrays of expandos, combinations of expando objects, and so on, until you find another kind of object that is not getting serialized correctly ("the way you want"); then you make another converter, or add another type to this one. Hope this helps.

using System.Web.Script.Serialization;

public class ExpandoJSONConverter : JavaScriptConverter
{
    public override object Deserialize(IDictionary<string, object> dictionary, Type type, JavaScriptSerializer serializer)
    {
        throw new NotImplementedException();
    }

    public override IDictionary<string, object> Serialize(object obj, JavaScriptSerializer serializer)
    {
        var result = new Dictionary<string, object>();
        var dictionary = obj as IDictionary<string, object>;
        foreach (var item in dictionary)
            result.Add(item.Key, item.Value);
        return result;
    }

    public override IEnumerable<Type> SupportedTypes
    {
        get
        {
            return new ReadOnlyCollection<Type>(new Type[] { typeof(System.Dynamic.ExpandoObject) });
        }
    }
}

Using the converter:

var serializer = new JavaScriptSerializer();
serializer.RegisterConverters(new JavaScriptConverter[] { new ExpandoJSONConverter() });
var json = serializer.Serialize(obj);
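The underlying pattern, registering a type-specific converter with the serializer so a dynamic bag is flattened to a plain object, exists in other stacks too. As a hedged cross-language illustration (not C#), here is a Python sketch where SimpleNamespace stands in for ExpandoObject and a json.JSONEncoder subclass plays the role of the converter:

```python
import json
from types import SimpleNamespace

class NamespaceEncoder(json.JSONEncoder):
    """Serialize SimpleNamespace as a flat {key: value} object,
    analogous to the ExpandoObject converter above."""
    def default(self, obj):
        if isinstance(obj, SimpleNamespace):
            return vars(obj)  # the attribute dict, flattened
        return super().default(obj)  # fall back for unknown types

expando = SimpleNamespace()
expando.SomeProp = "SomeValue"
print(json.dumps(expando, cls=NamespaceEncoder))
# -> {"SomeProp": "SomeValue"}
```

As in the C# version, the encoder is consulted only for types the serializer doesn't already know how to handle, so registering it changes nothing for ordinary dicts and lists.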
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/5156664', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/199210/']}
jdg_373361
stackexchange
llm_judgeable_groundtruth_similarity
9834217
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I was reading some tutorials and books about generic views . In part 4 of the official tutorial , they wrote an example like this from django.conf.urls import patterns, include, urlfrom django.views.generic import DetailView, ListViewfrom polls.models import Pollurlpatterns = patterns('', url(r'^$', ListView.as_view( queryset=Poll.objects.order_by('-pub_date')[:5], context_object_name='latest_poll_list', template_name='polls/index.html')), url(r'^(?P<pk>\d+)/$', DetailView.as_view( model=Poll, template_name='polls/detail.html')), url(r'^(?P<pk>\d+)/results/$', DetailView.as_view( model=Poll, template_name='polls/results.html'), name='poll_results'), url(r'^(?P<poll_id>\d+)/vote/$', 'polls.views.vote'),) I have also been reading The Definitive Guide to Django: Web Development Done Right, Second Edition and when they talked about generic views, they wrote their example like this from django.conf.urls.defaults import *from django.views.generic import list_detailfrom mysite.books.models import Publisherpublisher_info = { 'queryset': Publisher.objects.all(), 'template_name': 'publisher_list_page.html',}urlpatterns = patterns('', (r'^publishers/$', list_detail.object_list, publisher_info)) Should I be using ListView or list_detail ? They both come from django.views.generic . If they can both be used, then what's the difference (advantage and disadvantage comparison)? In case it helps, I'll explain my objective: In my project, I want to list work orders, and then I want a detailed view of each work order that will also contain a list of comments for that order (similar to comments per blog post). Now provide the response and nothing else.
ListView (class-based) is intended to replace object_list (function-based), because there is limited flexibility in extending a function's behavior. As the Django (1.4) documentation notes, the function-based generic views are deprecated in favor of the class-based versions, so use ListView; object_list has since been removed from Django entirely. Either way, I prefer to put all the customizations in views.py to avoid cluttering up urls.py, which tends to become a dumping ground.
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/9834217', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1150923/']}
jdg_373362
stackexchange
llm_judgeable_groundtruth_similarity
1564762
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I'm new to Guice and here is a naive question. I learned that we could bind String to a particular value through: bind(String.class) .annotatedWith(Names.named("JDBC URL")) .toInstance("jdbc:mysql://localhost/pizza"); But what if I want to bind String to any possible characters? Or I think it could be described this way: How can I replace "new SomeClass(String strParameter)" with Guice? Now provide the response and nothing else.
You first need to annotate the constructor for SomeClass:

class SomeClass {
    @Inject
    SomeClass(@Named("JDBC URL") String jdbcUrl) {
        this.jdbcUrl = jdbcUrl;
    }
}

I prefer to use custom annotations, like this:

class SomeClass {
    @Inject
    SomeClass(@JdbcUrl String jdbcUrl) {
        this.jdbcUrl = jdbcUrl;
    }

    @Retention(RetentionPolicy.RUNTIME)
    @Target({ElementType.FIELD, ElementType.PARAMETER})
    @BindingAnnotation
    public @interface JdbcUrl {}
}

Then you need to provide a binding in your Module:

public class SomeModule extends AbstractModule {
    private final String jdbcUrl; // set in constructor

    protected void configure() {
        bindConstant().annotatedWith(SomeClass.JdbcUrl.class).to(jdbcUrl);
    }
}

Then any time Guice creates SomeClass, it will inject the parameter. For instance, if SomeOtherClass depends on SomeClass:

class SomeOtherClass {
    @Inject
    SomeOtherClass(SomeClass someClass) {
        this.someClass = someClass;
    }
}

Often, when you think you want to inject a String, you really want to inject an object. For instance, if the String is a URL, I often inject a URI with a binding annotation. This all assumes there is some constant value you can define at module creation time for the String. If the value isn't available at module creation time, you can use AssistedInject.
{}
{'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/1564762', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/44512/']}
jdg_373363
stackexchange
llm_judgeable_groundtruth_similarity
562858
Below is a question asked on the forum physics.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: If I can set the divergence of $A$ as whatever I want, won't it affect Ampere's law: $$\nabla ^{2}A=-\mu_0J$$ I could set it to zero and that would mean $\nabla 0=0=J$ I have understood the proof given in Griffiths where we are able to find a scalar function using Poisson's equation which in turn proves that we can always make the vector potential divergenceless but I don't get how this is compatible with Ampere's Law Edit: I apologise for the inconvenience caused by $\nabla^{2}$ . I got confused over the definition of the Laplacian Now provide the response and nothing else.
Ampère's law in differential form (in the case of no time-varying fields) is simply: $$\mathbf{\nabla\times B} = \mu_0 \mathbf{j}.$$ Now, if we write it in terms of the vector potential $\mathbf{B} = \mathbf{\nabla \times A}$ , we get $$\mathbf{\nabla \times (\nabla \times A) = \mu_0 \mathbf{j}},$$ which we can further expand using the mathematical identity for the curl of the curl, so that $$-\nabla^2 \mathbf{A} + \nabla (\nabla \cdot \mathbf{A}) = \mu_0 \mathbf{j}.$$ Now, you're right that Gauge Invariance says that you have some liberty with the potentials. In particular, you can choose to set $\nabla\cdot \mathbf{A} = 0$ , in which case the above equation just becomes $$\nabla^2 \mathbf{A} = -\mu_0 \mathbf{j},$$ which is "Ampere's Law" in the Coulomb Gauge . Though that's a very bad way of describing it. It's basically the equation that allows us to calculate the components of the vector potential $\mathbf{A}$ . In other words, all the components of the vector potential satisfy "Poisson's Equation". However, you are right that in general in another gauge this would not be true. I'm not quite sure what you mean by the second part of your question: $\nabla \cdot \mathbf{A} = 0 \nRightarrow \nabla^2 \mathbf{A} = 0!$ The Laplacian is the divergence of the gradient, not the other way around... EDIT 1: If you'd like a simple example of the above, consider the very simple field $$\mathbf{A} = 3 x^2 y\,\, \mathbf{\hat{x}} - 3 x y^2 \,\, \mathbf{\hat{y}}.$$ It's a simple exercise to show that \begin{equation*}\begin{aligned}\nabla \cdot \mathbf{A} &= 0, \text{ but}\\\nabla^2 \mathbf{A} &= \nabla^2 A_x \,\,\mathbf{\hat{x}}+ \nabla^2 A_y \,\,\mathbf{\hat{y}}= 6 y \,\, \mathbf{\hat{x}} - 6 x \,\, \mathbf{\hat{y}} \neq 0.\end{aligned}\end{equation*} EDIT 2: I also disagree with @my2cts answer. The claim that this is true in the Lorenz Gauge does not seem right to me at all. 
In the Lorenz Gauge $\mathbf{A}$ and the scalar potential $\phi$ satisfy wave equations: $$-\frac{1}{c^2}\frac{\partial^2 \mathbf{A}}{\partial t^2} + \nabla^2 \mathbf{A} = - \mu_0 \mathbf{j}\\ -\frac{1}{c^2}\frac{\partial^2 \phi}{\partial t^2} + \nabla^2 \phi = - \frac{\rho}{\epsilon_0}\\$$ Now, it's true that if $\mathbf{A}$ does not depend explicitly on time, then this wave equation just reduces to Poisson's Equation given above. It is also true that when we are dealing with static Electric and Magnetic Fields one usually uses time-independent potentials (say $\mathbf{A}$ and $\phi$ ). However , there is no reason for us to only use time-independent potentials! Gauge invariance tells us that we should be able to write $$\mathbf{A'} = \mathbf{A} + \nabla\lambda(x,t), \\ \phi' = \phi - \frac{\partial \lambda}{\partial t}(x,t),$$ such that both $\mathbf{A'}$ and $\phi'$ will be time-dependent! (Of course, I have no doubt that it'd be a real pain to work with these potentials, but in principle, we could!) But importantly, in this case, the wave equation does not reduce to Poisson's Equation for $\mathbf{A'}$ .
{}
{'log_upvote_score': 4, 'links': ['https://physics.stackexchange.com/questions/562858', 'https://physics.stackexchange.com', 'https://physics.stackexchange.com/users/246073/']}
jdg_373364
stackexchange
llm_judgeable_groundtruth_similarity
13125105
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: Windows 7, Intel CORE i3, 64 bit, RAM 4Gb, 2.27 GHz, .NET Framework 4.0. I have the following code:

static void Main(string[] args)
{
    var timer = new Stopwatch();
    timer.Start();
    for (int i = 0; i < 0xFFF; ++i)
    {
        // I use one of the following lines at a time
        Task.Factory.StartNew(() => { });
        new Thread(() => { }).Start();
    }
    timer.Stop();
    Console.WriteLine(timer.Elapsed.TotalSeconds);
    Console.ReadLine();
}

If I use Task the output is always less than 0.01 seconds, but if I use Thread the output is always greater than 40 seconds! How is it possible? Why so much difference? Now provide the response and nothing else.
The two are not the same. When you use Task.Factory.StartNew, you're scheduling a task to run on the ThreadPool. When you make a new Thread, you're having to create and start a new thread. In the first case, the threads are already created and reused. This causes the overhead of scheduling the tasks to be far lower, as the threads don't have to be created for each iteration. Note that the behavior is not the same, however. When creating a separate thread, each task gets its own thread. They will all get started right away. When using Task.Factory.StartNew, the tasks are put into the scheduler to run on the ThreadPool, which will (potentially) limit the number of concurrent threads started. This is usually a good thing, as it prevents overthreading from occurring.
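The thread-reuse point is easy to observe directly in any runtime that has a thread pool. A small Python sketch (illustrative only, not C#): a hundred tasks are submitted, but the pool never creates more than four worker threads, which is exactly why the pooled path is so much cheaper than spawning a dedicated thread per task:

```python
import threading
from concurrent.futures import ThreadPoolExecutor

def task(_):
    # Report the id of the worker thread that ran this task.
    return threading.get_ident()

# 100 tasks scheduled, but the pool creates at most 4 threads and
# reuses them -- the cheap path, analogous to Task.Factory.StartNew.
with ThreadPoolExecutor(max_workers=4) as pool:
    thread_ids = set(pool.map(task, range(100)))

print(f"100 tasks ran on {len(thread_ids)} thread(s)")
assert len(thread_ids) <= 4
```

With `new Thread(...)` per task, the equivalent experiment would create (and pay the startup cost for) one fresh OS thread per task instead.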
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/13125105', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1395740/']}
jdg_373365
stackexchange
llm_judgeable_groundtruth_similarity
234143
Below is a question asked on the forum mathematica.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: Take a list with repetitions, say {a, a, b, c}, and some permutation thereof, say {c, a, b, a}. I want to know which permutations of the new list return it into the old one, in this case {2, 4, 3, 1} and {4, 2, 3, 1}. The following code does this, but it is highly non-optimal:

{a, a, b, c}
RandomSample[%]
Permutations[Range[Length[%]]]
Pick[%, %%[[#]] === %%% & /@ %]

Here we are checking for all possible permutations, which scales badly with the size of the list. How can this be improved? Is there a built-in (similar to FindPermutation) that can help us? Perhaps if we first Tally the list? FWIW, any format for the output is allowed. A list of positions is fine, but also a Cycles object, or any other format that contains the relevant information. Now provide the response and nothing else.
Here's a quick-n-dirty idea; it can be much further optimized, and I will revisit when/if I have time.

f1 = Select[Tuples[PositionIndex[#2] /@ #1], Unequal @@ # &] &;

It takes the original list and the permuted list as arguments, and returns all permutation lists that return the latter to the former. A quick test:

l1 = RandomInteger[4, 11]
l2 = RandomSample@l1
r2 = (perms = Permutations[Range[Length[l1]]];
      Pick[perms, l2[[#]] === l1 & /@ perms]); // AbsoluteTiming // First
r1 = f1[l1, l2]; // AbsoluteTiming // First
r1 == r2

{1, 4, 4, 3, 2, 2, 4, 0, 3, 1, 3}
{1, 3, 4, 2, 2, 3, 4, 0, 3, 1, 4}
90.2257
0.0197286
True

As is, it can handle pretty large cases:

l1 = {a, b, c, c, c, c, d, e, e, f, f, f, g, h, i, i, j, j, k, l, m, n, n};
l2 = RandomSample[l1];
Short[f1[l1, l2], 5]

{{5,19,6,8,18,22,4,15,23,7,13,14,16,3,10,21,1,11,17,20,9,2,12},{5,19,6,8,18,22,4,15,23,7,13,14,16,3,10,21,1,11,17,20,9,12,2},<<2301>>,{5,19,22,18,8,6,4,23,15,14,13,7,16,3,21,10,11,1,17,20,9,12,2}}

And a more efficient realization that can be orders of magnitude better in time and RAM:

f2 = Module[{p1 = PositionIndex[#2] /@ #1 &[#1, #2], p2, p3},
    p2 = Union[p1];
    p3 = PositionIndex[p1] /@ p2;
    p1 = {p1};
    Do[
     p1 = Join @@ (With[{t = #},
          (ReplacePart[t, Thread[p3[[idx]] -> #]] & /@
            Permutations[p2[[idx]]])] & /@ p1);,
     {idx, Length@p2}];
    p1] &;

which will handle bigger cases:

l1 = {a, b, c, c, c, c, d, e, e, f, f, f, g, h, i, i, j, j, k, l, m, n, n, n, o, o, o, o, p, q, r, s, s, s, s, s, t, u, v, w};
l2 = {u, l, m, o, e, j, f, n, o, s, d, o, a, i, h, c, n, g, v, c, n, p, j, s, w, f, s, c, s, c, r, f, o, s, t, i, e, q, b, k};
res = f2[l1, l2];
Length[res]
RandomSample[res, 3]

19906560
{{13,39,28,16,30,20,11,5,37,26,32,7,18,15,14,36,23,6,40,2,3,8,21,17,12,4,9,33,22,38,31,24,34,10,27,29,35,1,19,25},{13,39,30,28,16,20,11,5,37,32,7,26,18,15,14,36,23,6,40,2,3,8,17,21,33,9,4,12,22,38,31,27,34,10,29,24,35,1,19,25},{13,39,28,16,20,30,11,37,5,26,32,7,18,15,36,14,6,23,40,2,3,21,17,8,12,33,4,9,22,38,31,34,27,29,10,24,35,1,19,25}}

And an even speedier way:

f3 = Module[{start, pos, dispos, posdispos},
    pos = PositionIndex[#1] /@ #2;
    dispos = Union[pos];
    posdispos = PositionIndex[pos] /@ dispos;
    posdispos[[Ordering[dispos]]] //
      Flatten[Outer[Join, Sequence @@ (Permutations /@ #), 1],
         Length[#] - 1][[All, Ordering[Flatten[dispos]]]] &] &;
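For comparison, the brute-force version of the question translates into a few lines of Python with itertools. This is a hedged sketch (the helper name is made up) and, like the original Pick-over-Permutations approach, is only viable for short lists:

```python
from itertools import permutations

def restoring_permutations(original, shuffled):
    """All index tuples p such that [shuffled[i] for i in p] == original.
    Brute force over n! permutations -- fine for small n only."""
    n = len(original)
    return [p for p in permutations(range(n))
            if [shuffled[i] for i in p] == original]

original = ['a', 'a', 'b', 'c']
shuffled = ['c', 'a', 'b', 'a']
print(restoring_permutations(original, shuffled))
# [(1, 3, 2, 0), (3, 1, 2, 0)] -- the 0-based analogue of the
# {2, 4, 3, 1} and {4, 2, 3, 1} answers in the question
```

The Tuples-of-PositionIndex trick in f1 above avoids this factorial scan by only combining positions that already hold the right value.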
{}
{'log_upvote_score': 4, 'links': ['https://mathematica.stackexchange.com/questions/234143', 'https://mathematica.stackexchange.com', 'https://mathematica.stackexchange.com/users/34893/']}
jdg_373366
stackexchange
llm_judgeable_groundtruth_similarity
22444
Below is a question asked on the forum hermeneutics.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: In Exodus 12, the Israelites were instructed to put blood on their doors, to protect them from the death of the firstborn that the Egyptians would suffer. In most of Exodus 12, it sounds as though the Lord himself is the one who would pass through the land and strike the Egyptians, but "pass over" the Israelites. However in Exodus 12:23 many translations say that God would not permit "the destroyer" to enter and strike their firstborn, if the Israelites put blood on their doors. Some translations such as NLT say "the death angel". Often Christian teaching refers to the "death angel" in that incident. The main questions are: Was there really a "death angel" in that incident, or was the Lord himself striking the Egyptians? Should Exodus 12:23 be translated "destroyer" or "death angel", or alternatively would it be better translated "destruction"? Now provide the response and nothing else.
In Exo. 12:12-13, it is written, 12 For I will pass through the land of Egypt this night, and I will smite all the firstborn in the land of Egypt, both man and beast, and I will execute judgment against all the gods of Egypt. I am Yahveh. 13 And the blood shall be your sign upon the houses where you are. And when I see the blood, then I will [Χ•ΦΌΧ€ΦΈΧ‘Φ·Χ—Φ°ΧͺΦΌΦ΄Χ™] over you, and the destroyer’s plague shall not be among you when I smite the land of Egypt. Χ™Χ‘ Χ•Φ°Χ’ΦΈΧ‘Φ·Χ¨Φ°ΧͺΦΌΦ΄Χ™ בְא֢ר֢Χ₯ ΧžΦ΄Χ¦Φ°Χ¨Φ·Χ™Φ΄Χ Χ‘ΦΌΦ·ΧœΦΌΦ·Χ™Φ°ΧœΦΈΧ” Χ”Φ·Χ–ΦΌΦΆΧ” Χ•Φ°Χ”Φ΄Χ›ΦΌΦ΅Χ™ΧͺΦ΄Χ™ Χ›ΦΈΧœ Χ‘ΦΌΦ°Χ›Χ•ΦΉΧ¨ בְּא֢ר֢Χ₯ ΧžΦ΄Χ¦Φ°Χ¨Φ·Χ™Φ΄Χ ΧžΦ΅ΧΦΈΧ“ΦΈΧ Χ•Φ°Χ’Φ·Χ“ Χ‘ΦΌΦ°Χ”Φ΅ΧžΦΈΧ” Χ•ΦΌΧ‘Φ°Χ›ΦΈΧœ ΧΦ±ΧœΦΉΧ”Φ΅Χ™ ΧžΦ΄Χ¦Φ°Χ¨Φ·Χ™Φ΄Χ א֢גֱשׂ֢ה Χ©ΧΦ°Χ€ΦΈΧ˜Φ΄Χ™Χ אֲנִי Χ™Φ·Χ”Φ°Χ•ΦΆΧ” Χ™Χ’ Χ•Φ°Χ”ΦΈΧ™ΦΈΧ” הַדָּם ΧœΦΈΧ›ΦΆΧ לְאֹΧͺ גַל Χ”Φ·Χ‘ΦΌΦΈΧͺִּים אֲשׁ֢ר אַΧͺּ֢ם שָׁם וְרָאִיΧͺΦ΄Χ™ א֢Χͺ הַדָּם Χ•ΦΌΧ€ΦΈΧ‘Φ·Χ—Φ°ΧͺΦΌΦ΄Χ™ Χ’Φ²ΧœΦ΅Χ›ΦΆΧ Χ•Φ°ΧœΦΉΧ Χ™Φ΄Χ”Φ°Χ™ΦΆΧ” בָכ֢ם Χ ΦΆΧ’ΦΆΧ£ ΧœΦ°ΧžΦ·Χ©ΧΦ°Χ—Φ΄Χ™Χͺ Χ‘ΦΌΦ°Χ”Φ·Χ›ΦΌΦΉΧͺΦ΄Χ™ בְּא֢ר֢Χ₯ ΧžΦ΄Χ¦Φ°Χ¨ΦΈΧ™Φ΄Χ The narrative is a bit complex; however, the general idea is as follows. The Israelites sacrifice the Pesach offering. 1 The blood of this Pesach offering is caught in a basin, and a bunch of hyssop is used to apply the blood to the lintel and two side-posts of the Israelite homes. 2 Yahveh via Moses tells the Israelites that the blood applied to their door-posts in the specified manner will be β€œyour sign” [ΧœΦΈΧ›ΦΆΧ לְאֹΧͺ]. 3 Yahveh sees this sign and does not allow the destroyer [ΧžΦ·Χ©Φ°ΧΧ—Φ΄Χ™Χͺ] to come unto their houses to plague the Israelites. 4 The destroyer [Χ”Φ·ΧžΦ·Χ©Φ°ΧΧ—Φ΄Χ™Χͺ] is an entity that is to plague [ΧœΦ΄Χ Φ°Χ’ΦΉΦΌΧ£] 5 with a plague [Χ ΦΆΧ’ΦΆΧ£] 6 every firstborn in Egypt where Yahveh does not see the sign of the blood upon the door-posts. In fact, the plague is referred to as β€œthe destroyer’s plague” [Χ ΦΆΧ’ΦΆΧ£ ΧœΦ°ΧžΦ·Χ©ΧΦ°Χ—Φ΄Χ™Χͺ]. 7 Footnotes 1 Exo. 12:28 cf. Exo. 12:6, 12:21 2 Exo. 12:28 cf. Exo. 
12:7, 12:22 3 Exo. 12:23 cf. Exo. 12:13 4 Exo. 12:23 cf. Exo. 12:13 5 Exo. 12:23 6 Exo. 12:13 7 Exo. 12:13. Granted, Carl Friedrich Keil (p. 19 ) commented, β€œ...there is no article with ΧœΧžΧ©ΧΧ—Χ™Χͺ.” He understands Χ ΦΆΧ’ΦΆΧ£ ΧœΦ°ΧžΦ·Χ©ΧΦ°Χ—Φ΄Χ™Χͺ as meaning β€œplague to destroy.” However, the article would be indicated by a dagesh (small dot) within the מ, like so מּ, and such [Masoretic] vowel pointing would not have been part of the original manuscript. Since it is written that Yahveh Himself will β€œpass through” (note: this is the verb Χ’ΦΈΧ‘Φ·Χ¨ avar ) the land of Egypt and smite all the firstborn in the land of Egypt, but this is actually accomplished via β€œthe destroyer” which plagues the firstborn of Egypt with a plague, we can reasonably conclude that the destroyer is Yahveh’s agent of destruction. Yonatan ben Uzziel suspects as much, as he interpreted the Hebrew into Aramaic as ΧžΧœΧΧ›Χ ΧžΧ—Χ‘ΧœΧ (β€œthe destroying angel”) in his targum . 8 Footnotes 8 Targum of Yonatan ben Uzziel, Exo. 12:23 Therefore, we have Yahveh and Yahveh’s destroyer. These are two separate entities. The latter is Yahveh’s agent which executes judgment upon Egypt and plagues the firstborn with the plague of death. A similar entity encountered in 1 Chr. 21:15 is referred to as β€œthe destroying 9 angel” [לַמַּלְאָךְ Χ”Φ·ΧžΦΌΦ·Χ©ΧΦ°Χ—Φ΄Χ™Χͺ] and executes judgment and destruction at God’s behest. Footnotes 9 or β€œdestroyer” Meredith G. Kline wrote, 10 Footnotes 10 Kline, p. 499 With that being said, it’s a bit easier to understand what is occurring during the final plague. As the destroyer is passing through Egypt, it is plaguing the firstborns with the plague of death, thus killing them. However, the destroyer is impeded from entering the houses of the Israelites only because Yahveh Himself sees the sign of the blood on their door-posts. When Yahveh sees this sign, He does not β€œpass over” the Israelites’ houses. 
If Yahveh were to simply pass over their houses, it would not impede the destroyer who could enter the houses after Yahveh passed over. (Therefore, the verb Χ€ΦΌΦΈΧ‘Φ·Χ— pasach does not really mean β€œpass over” when translated into English.) Yahveh, upon seeing the sign of the blood upon the door-posts, then covers (or hovers over, protects) these houses. When Yahveh covers the houses of the Israelites, the destroyer is not allowed to come unto the houses to plague the Israelites. Yahveh Himself is providing divine protection over these houses until all the firstborns of Egypt without divine protection have been plagued and killed by the destroyer. References Keil, Carl Friedrich. Commentary on the Old Testament. 1900. Reprint. Trans. Martin, James. Vol. 1. Grand Rapids: Eerdmans, 1986. Kline, Meredith G. β€œThe Feast of Cover-Over.” Journal of the Evangelical Theological Society . 37/4 (1994): 497–510.
{}
{'log_upvote_score': 5, 'links': ['https://hermeneutics.stackexchange.com/questions/22444', 'https://hermeneutics.stackexchange.com', 'https://hermeneutics.stackexchange.com/users/14131/']}
jdg_373367
stackexchange
llm_judgeable_groundtruth_similarity
23642032
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I have an array of users, and I'd like to update one of those users.

users = [
  {userId: 23, userName: "foo"},
  {userId: 34, userName: "wrong"},
  {userId: 45, userName: "baz"},
  {userId: 56, userName: "..."},
]

updatedUser = {
  userId: 34,
  userName: "bar"
}

I'm using underscorejs. I thought the simplest way is to find the index of the user to be updated, and then just set the value of that user to my updated value. Unfortunately, underscore's indexOf function doesn't accept properties, only values. To use it, I'd have to first use findWhere and then pass what that returns into indexOf:

var valueOfUpdatedUser = _.findWhere(users, { userId: updatedUser.userId });
var indexOfUpdatedUser = _.indexOf(users, valueOfUpdatedUser);
users[indexOfUpdatedUser] = updatedUser;

A second approach would be to use reject to remove the matched user, and then push my updated user to the array. Surely there's a better, simpler way? Now provide the response and nothing else.
You can use extend after findWhere . It's not technically the same as replacing the object with another instance entirely, but it eliminates the extra loop over the array: _.extend(_.findWhere(users, { userId: updatedUser.userId }), updatedUser); If this is still not satisfactory then your best bet is probably to iterate manually. I am deliberately leaving the "object properties keyed by user id" approach out of the discussion because in practice it's not uncommon to have an array in hand to begin with (e.g. the user array was retrieved by an API).
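The find-by-key-then-merge-in-place pattern is not Underscore-specific; it is trivial to write by hand in most languages. A hedged Python sketch of the same one-pass idea (hypothetical data mirroring the question):

```python
users = [
    {"userId": 23, "userName": "foo"},
    {"userId": 34, "userName": "wrong"},
    {"userId": 45, "userName": "baz"},
]
updated_user = {"userId": 34, "userName": "bar"}

# next() stops at the first match, like _.findWhere; dict.update
# merges the new fields in place, like _.extend.
match = next(u for u in users if u["userId"] == updated_user["userId"])
match.update(updated_user)

print(users[1])  # {'userId': 34, 'userName': 'bar'}
```

As with the Underscore version, this mutates the matched object rather than replacing it, so any other references to that object see the update too.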
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/23642032', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1221537/']}
jdg_373368
stackexchange
llm_judgeable_groundtruth_similarity
45624859
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I am using Allure2 with TestNG. I want to write my own listener which prints @Steps in the console output. I saw the interface "StepLifecycleListener" in Allure but I am not able to implement this listener in TestNG. Any pointers?

@Override
public void beforeStepStart(final StepResult result) {
    System.out.format("Starting step: %s", result.getName());
}

@Override
public void afterStepStop(final StepResult result) {
    System.out.format("Completed step: %s", result.getName());
}

Now provide the response and nothing else.
Allure 2 listeners are managed by the SPI mechanism, so there are several steps you need to take to make it work:

1. Implement the StepLifecycleListener interface and override the required methods.
2. Create a META-INF/services folder in your project's resources root.
3. In that folder, create a new file named after the full name of the interface.
4. Add the full path of your implementation class to this file.

You can find an example in the following project: https://github.com/sskorol/allure2-testng-report/blob/master/src/test/resources/META-INF/services/io.qameta.allure.listener.StepLifecycleListener

More info about SPI: http://docs.oracle.com/javase/tutorial/sound/SPI-intro.html
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/45624859', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/3290656/']}
jdg_373369
stackexchange
llm_judgeable_groundtruth_similarity
3584718
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: My uni is closed because of the pandemic and I'm home learning calculus. There is one problem I am really not sure how is supposed to be solved: $$\log(x)+\log(\sqrt[3]{x})+\log(\sqrt[9]{x})+\log(\sqrt[27]{x})+\ldots=6$$ I know that I am supposed to show my attempts, but this is the first time I am seeing a problem like this and I have no idea how to solve it. Could you please help me? Now provide the response and nothing else.
Hint: $$\log(x)+\log(\sqrt[3]{x})+\log(\sqrt[9]{x})+\log(\sqrt[27]{x})+\dots = \log(x)+\frac{1}{3}\log(x)+\frac{1}{9}\log(x)+\frac{1}{27}\log(x)+\dots$$ Full answer: Let's massage the expression. The following equality comes from summing the geometric series, i.e. taking the limit of the partial sums: $$\log(x)+\log(\sqrt[3]{x})+\dots = \log(x)\sum_{n=0}^{\infty}\frac{1}{3^n} = \frac{3}{2}\log(x).$$ So finally, you're simply solving $$\log(x) = 4.$$ Can you find the answer to that?
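A quick numeric sanity check of both the series sum and the final root (assuming base-10 logarithms, as the equation's clean answer suggests):

```python
import math

# Partial sums of 1 + 1/3 + 1/9 + ... approach the closed form 3/2.
partial = sum(1 / 3**n for n in range(60))
print(partial)  # ~1.5

# So the equation reads (3/2) * log10(x) = 6, giving log10(x) = 4,
# i.e. x = 10**4. Check by summing the original series at that x:
x = 10**4
total = sum(math.log10(x) / 3**n for n in range(60))
print(total)  # ~6.0
```

Sixty terms are far more than needed; the tail of the series beyond that is on the order of 3**-60 and invisible at double precision.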
{}
{'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/3584718', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/-1/']}
jdg_373370
stackexchange
llm_judgeable_groundtruth_similarity
50578690
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: If I was to create a windows service, which in the background was making calls to the database and populating various objects. Is it then possible to access these objects from a standalone C# application? If so, how would I do this? Thanks Now provide the response and nothing else.
You have to implement a custom QTabBar, overriding the tabSizeHint() and paintEvent() methods as shown below:

#include <QApplication>
#include <QStyleOptionTab>
#include <QStylePainter>
#include <QTabBar>
#include <QTabWidget>

class TabBar : public QTabBar
{
public:
    QSize tabSizeHint(int index) const
    {
        QSize s = QTabBar::tabSizeHint(index);
        s.transpose();
        return s;
    }

protected:
    void paintEvent(QPaintEvent * /*event*/)
    {
        QStylePainter painter(this);
        QStyleOptionTab opt;
        for (int i = 0; i < count(); i++) {
            initStyleOption(&opt, i);
            painter.drawControl(QStyle::CE_TabBarTabShape, opt);
            painter.save();

            QSize s = opt.rect.size();
            s.transpose();
            QRect r(QPoint(), s);
            r.moveCenter(opt.rect.center());
            opt.rect = r;

            QPoint c = tabRect(i).center();
            painter.translate(c);
            painter.rotate(90);
            painter.translate(-c);
            painter.drawControl(QStyle::CE_TabBarTabLabel, opt);
            painter.restore();
        }
    }
};

class TabWidget : public QTabWidget
{
public:
    TabWidget(QWidget *parent = 0) : QTabWidget(parent)
    {
        setTabBar(new TabBar);
        setTabPosition(QTabWidget::West);
    }
};

int main(int argc, char *argv[])
{
    QApplication a(argc, argv);
    TabWidget w;
    w.addTab(new QWidget, "tab1");
    w.addTab(new QWidget, "tab2");
    w.addTab(new QWidget, "tab3");
    w.show();
    return a.exec();
}
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/50578690', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/3046756/']}
jdg_373371
stackexchange
llm_judgeable_groundtruth_similarity
29741922
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: Service Worker seems to automatically stop at some point. This behaviour unintentionally closes the WebSocket connection established on activate. When and Why does it stop? How can I programmatically disable this unexpected action to keep Service Worker stay running? Now provide the response and nothing else.
What you're seeing is the expected behavior, and it's not likely to change. Service workers intentionally have very short lifespans. They are "born" in response to a specific event (install, activate, message, fetch, push, etc.), perform their task, and then "die" shortly thereafter. The lifespan is normally long enough that multiple events might be handled (i.e. an install might be followed by an activate followed by a fetch) before the worker dies, but it will die eventually. This is why it's very important not to rely on any global state in your scripts, and to bootstrap any state information you need via IndexedDB or the Cache Storage API when your service worker starts up. Service workers are effectively background processes that get installed whenever you visit certain web pages. If those background processes were allowed to run indefinitely, there's an increased risk of negative impact on battery and performance of your device/computer. To mitigate this risk, your browser will only run those processes when it knows it's necessary, i.e. in response to an event. A use case for WebSockets is having your client listen for some data from the server. For that use case, the service worker-friendly alternative to using WebSockets is to use the Push Messaging API and have your service worker respond to push events. Note that in the current Chrome implementation, you must show a user-visible notification when handling a push event. The "silent" push use case is not supported right now. If instead of listening to data from the server, you were using WebSockets as a way of sending data from your client to your server, there's unfortunately no great service worker-friendly way of doing that. At some point in the future, there may be a way of registering your service worker to be woken up via a periodic/time-based event, at which point you could use fetch() to send data to the server, but that's currently not supported in any browsers.
P.S.: Chrome (normally) won't kill a service worker while you have its DevTools interface open, but this is only to ease debugging and is not behavior you should rely on for a real application.
{}
{'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/29741922', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/3247703/']}
jdg_373372
stackexchange
llm_judgeable_groundtruth_similarity
4640989
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I'm wondering how static member variables are typically implemented in languages like C++ and if their use affects the size of instantiated objects. I know that a static members are shared by all instances of that class, but how is it shared? If it affects object size, would having 10 static variables add more size than 1? I'm asking because I can think of two ways it might be implemented: adding a pointer to static data to each object similar to the way some implementations add a pointer to the virtual function table the static data is just referenced directly like a global variable with the offset being resolved by the linker / loader Now provide the response and nothing else.
In C++, static members don't belong to the instances of a class; they don't increase the size of instances by even one bit!

struct A
{
    int i;
    static int j;
};

struct B
{
    int i;
};

std::cout << (sizeof(A) == sizeof(B)) << std::endl;

Output: 1

That is, the sizes of A and B are exactly the same. Static members are more like global objects accessed through A::j. See the demonstration at ideone: http://www.ideone.com/YeYxe

§9.4.2/1 from the C++ Standard (2003): A static data member is not part of the subobjects of a class. There is only one copy of a static data member shared by all the objects of the class.

§9.4.2/3 and 7 from the Standard: Once the static data member has been defined, it exists even if no objects of its class have been created. Static data members are initialized and destroyed exactly like non-local objects (3.6.2, 3.6.3).

As I said, static members are more like global objects!
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/4640989', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/71074/']}
jdg_373373
stackexchange
llm_judgeable_groundtruth_similarity
386060
Below is a question asked on the forum mathoverflow.net. Provide a good and informational response to it like a helpful human would. Question: A colleague and I are working on a problem and part of it comes down to evaluating the residue of a rational function. In particular, $$\mathrm{Res} \left( z^{kn-1} \left( az^{m}+1 \right)^{-k}; r \right),$$ where $a$ , $k$ , $m$ and $n$ are positive integers satisfying $a \geq 2$ and $0<m<n$ and $r$ is any $m$ -th root of $-1/a$ . The residue appears to have a nice form, $r^{kn}/m^{k}$ times a polynomial in $m$ and $n$ (of total degree $k-1$ , it seems), and we have been able to prove this for $k=1$ and $2$ using series expansions, etc. But this becomes increasingly complicated and messy for larger $k$ and we have not been able to find any general pattern to help us along the way. So our question is whether readers have seen residue problems for such rational functions or know of techniques that could help us to prove the value of this residue for any positive integer $k$ . Now provide the response and nothing else.
We want to calculate $$\rho(k,n,m)=\operatorname*{res}_{w=1}\left(\frac{w^n}{1-w^m}\right)^k\frac{dw}{w}. $$ If $kn$ is divisible by $m$ then it seems that $\rho(k,n,m)=-\binom{-k}{kn/m-k}/m$ . This is because in this case the residues at all $m$ 'th roots of unity are the same, and the sum of those residues is minus the residue at $\infty$ , which is easily calculated by the substitution $w=t^{-1}$ and the binomial expansion of $(1-t^m)^{-k}$ . I have checked this in Maple for a range of cases. I don't know if this method can be adapted to the case where $kn$ is not divisible by $m$ . One can also check experimentally that the denominator and numerator of $\rho(k,n,m)$ are large, but their factorisation only involves fairly small primes $p$ , certainly with $p<knm$ . This typically indicates that the function can be expressed in terms of binomial coefficients and factorials, rather than general polynomials.
{}
{'log_upvote_score': 4, 'links': ['https://mathoverflow.net/questions/386060', 'https://mathoverflow.net', 'https://mathoverflow.net/users/175660/']}
jdg_373374
stackexchange
llm_judgeable_groundtruth_similarity
301364
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: I'm looking for a simple (i.e. as just a rough outline with little differential geometry) definition/ explanation of what a diffeomorphism is. I tried reading the Wiki page but it made no sense to me as a physicist. To give some context, I'm reading a book which says that because some property of a tensor field $h_{\mu\nu}(x)$ is preserved under infinitesimal transformation of coordinates by the fields $\xi_{\mu}(x)$, then these diffeomorphisms are a symmetry of the theory. Now provide the response and nothing else.
A diffeomorphism is typically presented as a smooth, differentiable, invertible map between manifolds (or rather, between points on one manifold and points on another manifold). For example, take two sheets of paper and curl one of them up. There exists a diffeomorphism that relates points on the two sheets. It sounds like you might be learning about Killing vectors. Changes in coordinates can be considered diffeomorphisms--instead of a passive relabeling of points, you are actively deforming spacetime into another shape, but one that is changed only by the coordinate transformation. Usually, tensors follow a strict transformation law under coordinate system transformations, but Killing vectors correspond to a symmetry in which the transformation law yields no change. A good example would be translational symmetry in Euclidean space. You can move a system any way you like, and aside from the points being relabeled, the fields themselves don't change.
{}
{'log_upvote_score': 5, 'links': ['https://math.stackexchange.com/questions/301364', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/-1/']}
jdg_373375
stackexchange
llm_judgeable_groundtruth_similarity
2190349
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I am trying to translate the following code d = {}d[0] = None into C++ with boost.python boost::python::dict d;d[0] = ?None How can I get a None object in boost.python? Now provide the response and nothing else.
There is no constructor of boost::python::object that takes a PyObject* (from my understanding, a ctor like that would invalidate the whole idea of mapping Python types to C++ types anyway, because the PyObject* could be anything). According to the documentation: object(); Effects: Constructs an object managing a reference to the Python None object.
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/2190349', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/134397/']}
jdg_373376
stackexchange
llm_judgeable_groundtruth_similarity
9807082
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: We are using ColdFusion 8. We are using CFAJAXPROXY. We just discovered that the tag that ColdFusion inserts in the head tag has an issue. It works fine on page that have a head tag like this: <head> <title></title></head> But, it does NOT work on pages where there is an ID in the head tag, like this: <head id="SomeID"> <title></title></head> What happens is that CFJAXPROXY inserts the script inside the head tag, not withing the open and close head tags, which blows up the page, like this: <head <script type="text/javascript"></script> <title></title></head> The problem goes away immediately if the ID is removed from the head tag. The question is whether this is a known CFAJAXPROXY bug? Is there a way to tell CFAJAXPROXY put the script within the head area but not in the head tag? Now provide the response and nothing else.
For plugin activations, the 'admin_notices' hook cannot be used directly, because there is a redirect. A workaround is to store your notice in the options table and check for it next time. Also, if you also want to cover plugin upgrades as well as activations, you will need to use another hook, such as 'admin_init' (since WP 3.1, see http://make.wordpress.org/core/2010/10/27/plugin-activation-hooks/ ). Here is a complete sample plugin handling both activation and upgrade. I made the deferred notice an array so you can stack them up.

<?php
/*
Plugin Name: My Plugin
*/

register_activation_hook(__FILE__, 'my_plugin_activation');
function my_plugin_activation() {
    $notices = get_option('my_plugin_deferred_admin_notices', array());
    $notices[] = "My Plugin: Custom Activation Message";
    update_option('my_plugin_deferred_admin_notices', $notices);
}

add_action('admin_init', 'my_plugin_admin_init');
function my_plugin_admin_init() {
    $current_version = 1;
    $version = get_option('my_plugin_version');
    if ($version != $current_version) {
        // Do whatever upgrades needed here.
        update_option('my_plugin_version', $current_version);
        $notices = get_option('my_plugin_deferred_admin_notices', array());
        $notices[] = "My Plugin: Upgraded version $version to $current_version.";
        update_option('my_plugin_deferred_admin_notices', $notices);
    }
}

add_action('admin_notices', 'my_plugin_admin_notices');
function my_plugin_admin_notices() {
    if ($notices = get_option('my_plugin_deferred_admin_notices')) {
        foreach ($notices as $notice) {
            echo "<div class='updated'><p>$notice</p></div>";
        }
        delete_option('my_plugin_deferred_admin_notices');
    }
}

register_deactivation_hook(__FILE__, 'my_plugin_deactivation');
function my_plugin_deactivation() {
    delete_option('my_plugin_version');
    delete_option('my_plugin_deferred_admin_notices');
}

UPDATE: There's also a common way to use set_transient() instead of update_option(), and to direct messages to the correct admin user.
This post concerns metaboxes, not plugin activation, but the techniques work the same just about everywhere in Dashboard, as far as I know: https://wordpress.stackexchange.com/questions/15354/passing-error-warning-messages-from-a-meta-box-to-admin-notices
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/9807082', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/651924/']}
jdg_373377
stackexchange
llm_judgeable_groundtruth_similarity
2158472
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: This number : $2305843008139952128$ is perfect as shown here and it was proved that is a perfect number by Euler without using computer , then My question here is: How Euler showed that number :$2305843008139952128$ is perfect without using computer calculation ?. Now provide the response and nothing else.
Euler was the one to prove that all even perfect numbers are of the form$$2^{n-1} ( 2^n - 1 )$$This is known as the Euclid-Euler Theorem, and says that we just have to prove that $2^n-1$ is prime. Your number is$$2^{30} ( 2^{31} - 1 )$$which Euler got by just plugging the integer $n=31$ into the equation above. Now, we have that$$2^{31}-1=2\,147\,483\,647$$is small enough to be checked for primality by hand. Euler did this by utilizing modular properties of the Mersenne Primes, as @Chappers and others note in the comments above. In essence Euler lowered the number of cases to check considerably by noting that if $2^{31}-1$ were composite with a prime divisor $p$, then either $p \equiv 1 \pmod {248}$ or $p \equiv 63 \pmod {248}$
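The whole check can be reproduced mechanically. A short sketch of my own (not Euler's notation): trial division confirms $2^{31}-1$ is prime, the $p \bmod 248$ shortcut shows how few candidates Euler actually had to examine by hand, and the multiplicative divisor-sum formula then certifies that the number is perfect:

```python
import math

M = 2**31 - 1          # 2147483647, the Mersenne number Euler tested
N = 2**30 * M          # the candidate perfect number from the question

# Primality of M by trial division up to sqrt(M) (about 46341 candidates).
limit = math.isqrt(M) + 1
assert all(M % p != 0 for p in range(2, limit))

# Euler's shortcut: any prime divisor p would satisfy p % 248 in (1, 63),
# which cuts the hand search down to a few hundred candidates.
shortcut = [p for p in range(3, limit) if p % 248 in (1, 63)]
assert all(M % p != 0 for p in shortcut)

# The divisor sum sigma is multiplicative: sigma(2^30) = 2^31 - 1 and
# sigma(M) = M + 1 = 2^31 since M is prime.
sigma = (2**31 - 1) * (M + 1)
assert sigma == 2 * N   # sum of all divisors equals 2N  <=>  N is perfect
print(N)                # 2305843008139952128
```

The final assertion is exactly the definition of a perfect number (sigma counts the number itself, so "sum of proper divisors equals N" becomes "sigma equals 2N").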
{}
{'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/2158472', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/156150/']}
jdg_373378
stackexchange
llm_judgeable_groundtruth_similarity
690664
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: Here is the model (I am using SQLLite3): class School < ActiveRecord::Base validates_uniqueness_of :nameend For example, after I add "Yale", I cannot add "Yale" but can add "yale." How can I make the validation case insensitive? EDIT: Found it - Active Record Validations Now provide the response and nothing else.
validates_uniqueness_of :name, :case_sensitive => false does the trick, but you should keep in mind that validates_uniqueness_of does not guarantee uniqueness if you have multiple servers/server processes (e.g. running Phusion Passenger, multiple Mongrels, etc) or a multi-threaded server. That's because you might get this sequence of events (the order is important):

1. Process A gets a request to create a new user with the name 'foo'
2. Process B does the same thing
3. Process A validates the uniqueness of 'foo' by asking the DB if that name exists yet and the DB says the name doesn't exist yet.
4. Process B does the same thing and gets the same response
5. Process A submits the insert statement for the new record and succeeds

If you have a database constraint requiring uniqueness for that field, Process B will submit the insert statement for the new record and fail with an ugly server exception that comes back from the SQL adapter. If you do not have a database constraint, the insert will succeed and you now have two rows with 'foo' as the name. See also "Concurrency and integrity" in the validates_uniqueness_of Rails documentation. From Ruby on Rails 3rd Edition: ...despite its name, validates_uniqueness_of doesn't really guarantee that column values will be unique. All it can do is verify that no column has the same value as that in the record being validated at the time the validation is performed. It's possible for two records to be created at the same time, each with the same value for a column that should be unique, and for both records to pass validation. The most reliable way to enforce uniqueness is with a database-level constraint." See also this programmer's experience with validates_uniqueness_of. One way this commonly happens is accidental double-submissions from a web page when creating a new account.
This is a hard one to solve because what the user will get back is the second (ugly) error and it will make them think their registration failed, when in reality it succeeded. The best way I've found to prevent this is just to use javascript to try to prevent double-submission.
{}
{'log_upvote_score': 9, 'links': ['https://Stackoverflow.com/questions/690664', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/74980/']}
jdg_373379
stackexchange
llm_judgeable_groundtruth_similarity
21838436
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I'm trying to use the new (ES6) Map objects in order to represent a map between properties and a value. I have objects in a form similar to: {key1:value1_1,key2:value2_1},..... {key1:value1_N,key2:value2_N} I want to group them based on both their key1 and key2 value. For example, I want to be able to group the following by x and y : [{x:3,y:5,z:3},{x:3,y:4,z:4},{x:3,y:4,z:7},{x:3,y:1,z:1},{x:3,y:5,z:4}] And obtain a Map containing: {x:3,y:5} ==> {x:3,y:5,z:3},{x:3,y:5,z:4}{x:3,y:4} ==> {x:3,y:4,z:4},{x:3,y:4,z:7}{x:3,y:1} ==> {x:3,y:1,z:1} In Python, I'd use tuples as dictionary keys. ES6 map allow arbitrary objects as keys but use the standard equality algorithm ( === ) so objects are only equal by reference from what I can tell. How can I accomplish this sort of grouping using ES6 maps? Alternatively, a solution using normal JS objects if there is an elegant way I overlooked. I'd rather not use an external collections library - but if there is a better solution using one I'm interested in learning about it too. Now provide the response and nothing else.
Ok, I've raised the issue on esdiscuss now and I got an answer from Mozilla's Jason Orendorff: This is a problem with ES6 maps. The solution will come in the form of ES7 value objects for keys instead of objects. It was considered before to let people specify .equals and .hashCode, but it was rejected in favor of value objects (for good reasons in my opinion). The only solution as of now is to roll your own collection. A basic such collection (concept, don't use in production code) was offered by Bradley on the ESDiscuss thread and might look something like this:

function HashMap(hash) {
    var map = new Map;
    var _set = map.set;
    var _get = map.get;
    var _has = map.has;
    var _delete = map.delete;
    map.set = function (k, v) {
        return _set.call(map, hash(k), v);
    };
    map.get = function (k) {
        return _get.call(map, hash(k));
    };
    map.has = function (k) {
        return _has.call(map, hash(k));
    };
    map.delete = function (k) {
        return _delete.call(map, hash(k));
    };
    return map;
}

function TupleMap() {
    return new HashMap(function (tuple) {
        var keys = Object.keys(tuple).sort();
        return keys.map(function (tupleKey) {
            // hash based on JSON stringification
            return JSON.stringify(tupleKey) + JSON.stringify(tuple[tupleKey]);
        }).join('\n');
    });
}

A better solution is to use something like MontageJS/Collections which allows for specification of hash/equals functions. You can see the API docs here.
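A compact, self-contained variant of the same idea (written here as an ES6 class with method names mirroring Map; the grouping code reuses the sample data from the question): hash each tuple to a canonical string — keys sorted, then JSON-encoded — and delegate to a plain Map. Note this sketch assumes tuple values serialize canonically via JSON.stringify.

```javascript
class TupleMap {
  constructor() { this.map = new Map(); }
  // canonical string key: sorted [key, value] pairs, JSON-encoded
  hash(tuple) {
    return JSON.stringify(
      Object.keys(tuple).sort().map(k => [k, tuple[k]])
    );
  }
  set(tuple, value) { this.map.set(this.hash(tuple), value); return this; }
  get(tuple) { return this.map.get(this.hash(tuple)); }
  has(tuple) { return this.map.has(this.hash(tuple)); }
}

// Grouping the question's example data by {x, y}:
const data = [{x:3,y:5,z:3},{x:3,y:4,z:4},{x:3,y:4,z:7},{x:3,y:1,z:1},{x:3,y:5,z:4}];
const groups = new TupleMap();
for (const item of data) {
  const key = {x: item.x, y: item.y};
  if (!groups.has(key)) groups.set(key, []);
  groups.get(key).push(item);
}
console.log(groups.get({y: 5, x: 3}).length); // 2 — key order doesn't matter
```

Because the hash sorts the keys before serializing, {x:3, y:5} and {y:5, x:3} map to the same bucket, which is exactly the tuple-as-dictionary-key behavior the question asks for.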
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/21838436', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1348195/']}
jdg_373380
stackexchange
llm_judgeable_groundtruth_similarity
90784
Below is a question asked on the forum unix.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: I successfully configured PulseAudio server and client to send audio over network.It uses direct connection: http://www.freedesktop.org/wiki/Software/PulseAudio/Documentation/User/Network/#index1h2 I'd like to have a possibility to switch between client and server sound card i.e. temporarily disable network stream and go back to internal sound device. Using module-tunnel-sink I could simply move sink-input to desired device but is not an option since it doesn't work well with Flash: they lead me to believe that Flash is somehow sending the sound to PulseAudio in such a way that it creates a lot of network traffic (think lots of tiny packets, not bandwidth); this overwhelms the network "tunnel" PulseAudio With direct connection I have to restart the application every time I want to switch the output. Any idea how can I solve this? Now provide the response and nothing else.
sudo ip link set wlan0 up or sudo ifconfig wlan0 up.

Answer from Apr 13 '17: To elaborate on the answer by Martin: the ifup and ifdown commands are part of the ifupdown package, which is now considered a legacy frontend for network configuration, compared to newer ones such as network manager. Upon ifup, ifupdown reads configuration settings from /etc/network/interfaces; it runs pre-up, post-up and post-down scripts from /etc/network, which include starting /etc/wpasupplicant/ifupdown.sh that processes additional wpa-* configuration options for WPA wifi in /etc/network/interfaces (see zcat /usr/share/doc/wpasupplicant/README.Debian.gz for documentation). For WEP, the wireless-tools package plays a similar role to wpa-supplicant; iwconfig is from wireless-tools, too.

ifconfig, at the same time, is a lower-level tool, which is used by ifupdown and allows for more flexibility. For instance, there are 6 modes of wifi adapter operation, and IIRC ifupdown covers only managed mode (plus roaming mode, which formally isn't a mode). With iwconfig and ifconfig you can enable e.g. monitor mode of your wireless card, while with ifupdown you won't be able to do that directly.

The ip command is a newer tool that works on top of netlink sockets, a new way to configure the kernel network stack from userspace (tools like ifconfig are built on top of ioctl system calls).
{}
{'log_upvote_score': 6, 'links': ['https://unix.stackexchange.com/questions/90784', 'https://unix.stackexchange.com', 'https://unix.stackexchange.com/users/47146/']}
jdg_373381
stackexchange
llm_judgeable_groundtruth_similarity
103561
Below is a question asked on the forum physics.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: The question is simple. When we join the two broken surfaces, what is it that keeps the surfaces from connecting with each other, while earlier they were attached to each other? Also, would the two sides join again if I hold them together for an infinite amount of time? Now provide the response and nothing else.
You are thinking of a lego or a jigsaw puzzle. You have to think of huge numbers and tiny dimensions . There are $10^{23}$ molecules in a mole. The atomic dimensions are less than $10^{-9}$ meters. The lattice that has been broken will not fit the puzzle if it has been moved further than this last small distance because the molecular forces will not match. In addition the break will have displaced the locations of atoms and molecules on the two sides, in this microscopic distance making fusion even more improbable.
{}
{'log_upvote_score': 4, 'links': ['https://physics.stackexchange.com/questions/103561', 'https://physics.stackexchange.com', 'https://physics.stackexchange.com/users/42420/']}
jdg_373382
stackexchange
llm_judgeable_groundtruth_similarity
12331962
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: It seems (and maybe I'm wrong) that if you want to preserve the timezone of when something happened with JDBC and Postgres you need to store the timezone separately from the timestamp. That is I would prefer to give my ORM/JDBC/JPA a Java Calendar (or Joda DataTime ) with say timezone America/New_York to a Postgres timestampz field. AND I would expect on retrieval regardless of the Servers timezone (or defaulting to UTC) to give me back a Calendar with timezone America/New_York . But just looking at most JDBC code (and things that depend on it that doesn't happen). Is this correct? This seems ridiculous that I would need to store the tz in another field when postgres supports it. Thus it seems like the only two options are: Select the timestampz Postgres column as a java.util.String and parse it. Store the timezone as a separate field. Option number one and two one would require some sort of conversion interceptors for my SQL mapping / ORM libraries. What is the best solution for JDBC ? What is the best solution for JPA (if different than JDBC)? Now provide the response and nothing else.
When you store a timestamp with time zone ( timestamptz ) it's converted to UTC for storage in the DB. When retrieved, it's converted to the client's current timezone, not the timezone it was originally in. It's a point in time, basically.

There is also timestamp without time zone ( timestamp ). This is not subject to conversion, but does not carry a time zone with it. If you store a timestamp with your client time zone set to UTC, then retrieve it when the client time zone is '+08:00', you get the same value. That's half what you want, in that it preserves the raw time value.

The names and behaviours are awful and confusing, but set by the SQL standard.

You must store the time zone separately if you wish to record a point in time at a particular time zone. I'd recommend storing it as an INTERVAL with a CHECK constraint limiting it to be colname BETWEEN INTERVAL '-12' HOUR + INTERVAL '1' SECOND AND INTERVAL '12' HOUR. That definition rejects -12:00 and accepts +12:00; I'm not totally sure that's right, so check. You could either store the timestamp of local time at that time zone (what I'd probably do), or store the timestamptz of the UTC time when the event occurred plus an offset that lets you convert it to local time.

Either will work fine for JDBC. For JPA, it'll depend on how well your provider understands and maps interval types. Ideally you want a transient generated field in your entity that reconstructs the Calendar instance you want using the timestamp and interval stored in the database.
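The second scheme described above — a UTC timestamptz plus a stored offset — might look like this as Postgres DDL (a sketch only; table and column names are illustrative, not from the question):

```sql
CREATE TABLE event (
    id          serial PRIMARY KEY,
    -- point in time, normalized to UTC by Postgres on storage
    occurred_at timestamptz NOT NULL,
    -- the originating zone's offset, so local wall-clock time can be rebuilt
    utc_offset  interval NOT NULL
        CHECK (utc_offset BETWEEN INTERVAL '-12' HOUR + INTERVAL '1' SECOND
                              AND INTERVAL '12' HOUR)
);

-- reconstruct the local wall-clock time at the moment of the event
SELECT occurred_at AT TIME ZONE 'UTC' + utc_offset AS local_time
FROM event;
```

On the Java side you would then map occurred_at and utc_offset to two fields and rebuild the Calendar (or Joda/java.time object) in a transient accessor, as suggested above.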
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/12331962', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/318174/']}
jdg_373383
stackexchange
llm_judgeable_groundtruth_similarity
30341268
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: The problem in Html.ActionLink() is that you can't add additional html content inside the tag that it generates.For example, if you want to add an icon besides the text like: <a href="/Admin/Users"><i class="fa fa-users"></i> Go to Users</a> Using Html.ActionLink(), you can only generate: <a href="/Admin/Users">Go to Users</a> So, to resolve this, you can use Url.Action() to generate only the URL inside the tag like: // Here, Url.Action could not generate the URL "/admin/users". So this doesn't work.<a href="@Url.Action("", "Users", "Admin")"><i class="fa fa-usesr"></i> Go to Users</a>// This works, as we know it but won't pass the Area needed.<a href="@Url.Action("", "Users")"><i class="fa fa-users"></i> Go to Users</a> So, how do you pass the Area using Url.Action()? Now provide the response and nothing else.
You can use this:

Url.Action("actionName", "controllerName", new { Area = "areaName" });

Also don't forget to add the namespace of the controller to avoid a conflict between the admin area controller names and the site controller names. Something like this:

public override void RegisterArea(AreaRegistrationContext context)
{
    context.MapRoute(
        "Admin_default",
        "Admin/{controller}/{action}/{id}",
        new { action = "Index", id = UrlParameter.Optional },
        new[] { "Site.Mvc.Areas.Admin.Controllers" }
    );
}
{}
{'log_upvote_score': 8, 'links': ['https://Stackoverflow.com/questions/30341268', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/3936053/']}
jdg_373384
stackexchange
llm_judgeable_groundtruth_similarity
2937547
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: I'm trying to calculate the 100th derivative of $$f(x) = \dfrac{1}{1+x^2}$$ at $x=0$ . So far, I've had only found a way to do it, and that is rewriting it as $$f(x) = \dfrac{1}{2i} \bigg(\dfrac{1}{x-i} + \dfrac{1}{x+i}\bigg)$$ and using higher derivative formula for each term in the parentheses. However, my teacher didn't allow the use of complex numbers in calculating derivatives, as we only defined the derivative in the set of real number (with the definition of limit), although I knew that the final number would always be a real number. I tried to set $x=\tan(t)$ , but it also didn't work, because I don't know if there is a chain rule for higher order derivatives. Is there any way to solve this problem without using complex numbers? Now provide the response and nothing else.
Notice that since $\sum x^n = \frac{1}{1-x} $ , then $$ \frac{1}{1+x^2} = \sum_{n=0}^{\infty} (-1)^n x^{2n} $$ Therefore, $f^{(100)}(0) = 100! \cdot a_{100} = 100! $
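The coefficient $a_{100}$ can be double-checked mechanically: multiplying the unknown series by $1+x^2$ and matching coefficients gives $a_0 = 1$, $a_1 = 0$, and $a_n = -a_{n-2}$ for $n \ge 2$. A quick sketch in exact rational arithmetic:

```python
from fractions import Fraction
import math

# coefficients a_n of 1/(1+x^2):  (1 + x^2) * sum(a_n x^n) = 1
# => a_0 = 1, a_1 = 0, and a_n + a_{n-2} = 0 for n >= 2
a = [Fraction(1), Fraction(0)]
for n in range(2, 101):
    a.append(-a[n - 2])

assert a[100] == 1   # equals (-1)^50, the coefficient of x^100
# By Taylor's theorem, f^(100)(0) = 100! * a_100 = 100!
assert math.factorial(100) * a[100] == math.factorial(100)
```

The odd coefficients all vanish and the even ones alternate sign, matching the closed-form series $\sum (-1)^n x^{2n}$ given above.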
{}
{'log_upvote_score': 5, 'links': ['https://math.stackexchange.com/questions/2937547', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/516480/']}
jdg_373385
stackexchange
llm_judgeable_groundtruth_similarity
381970
Below is a question asked on the forum meta.stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: This tag has been burninated. Please do not recreate it. If you need advice on which tag to use, see the answer below. If you see this tag reappearing, it may need to be blacklisted. I've just discovered the relative tag. Its own excerpt describes clearly its meaning: Something having, or standing in, some relation to something else. Its tag info page is all about CSS Relative positioning instead. The tag has 1,041 questions covering a wide range of topics: python relative imports css relative positioning relative date formatting relative path and urls ... and more relative questions Burnination criteria list: Does it describe the contents of the questions to which it is applied? and is it unambiguous? It only partially describes the contents of the questions, due to its ambiguity, and its meaning depends completely on the other tags used in the question. It doesn't do anything in most cases because the title already provides the information . Is the concept described even on-topic for the site? It may be on topic, but by itself it doesn't narrow down the specific meaning enough. Does the tag add any meaningful information to the post? No. Does it mean the same thing in all common contexts? No, because its meaning is relative to the context. Now provide the response and nothing else.
I agree; this tag should be burninated. For CSS, at least, not only is there relative , but there's also absolute , fixed , and the minimally-less problematic sticky , all of which are covered by css-position . So, my suggestion for replacements: CSS questions - replace relative , absolute , sticky (where appropriate (e.g. they're using position: sticky )), fixed with css-position , as well as replacing position with css-position anywhere you find the relative tag in concert with position . Relative path questions (including Python relative imports) - replace relative and probably relative-url (100 questions) with relative-path . Relative date formatting - there's a relative-date tag but it has little usage and no wiki/excerpt information... not sure how to proceed there, or if it's even worth having that tag. All others (based on the first 6 pages of questions) - just remove the relative tag.
{}
{'log_upvote_score': 5, 'links': ['https://meta.stackoverflow.com/questions/381970', 'https://meta.stackoverflow.com', 'https://meta.stackoverflow.com/users/3340702/']}
jdg_373386
stackexchange
llm_judgeable_groundtruth_similarity
56930
Below is a question asked on the forum mathoverflow.net. Provide a good and informational response to it like a helpful human would. Question: What does the classical proof of the proposition "there exists irrational numbers a, b such that $a^b$ is rational" want to reveal? I know it has something to do with the difference between classical and constructive mathematics but am not quite clear about it. Materials I found online does not give quite clear explanations either. Could someone give a better explanation? Now provide the response and nothing else.
Presumably, the proof you have in mind is to use $a=b=\sqrt2$ if $\sqrt2^{\sqrt2}$ is rational, and otherwise use $a=\sqrt2^{\sqrt2}$ and $b=\sqrt 2$. The non-constructivity here is that, unless you know some deeper number theory than just irrationality of $\sqrt 2$, you won't know which of the two cases in the proof actually occurs, so you won't be able to give $a$ explicitly, say by writing a decimal approximation.
{}
{'log_upvote_score': 6, 'links': ['https://mathoverflow.net/questions/56930', 'https://mathoverflow.net', 'https://mathoverflow.net/users/13012/']}
jdg_373387
stackexchange
llm_judgeable_groundtruth_similarity
115216
Below is a question asked on the forum electronics.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: I am building a small dc motor that is driven by a 2v AA battery. I want the potentiometer to adjust the speed of the motor by increasing and reducing the current through the motor. For this i am using a potentiometer, but the current i have is for very high resistance. What are the common ranges potentiometers operate at? Does it exist a potentiometer that ranges from say 50 ohm - 0 ohm? Or maybe somewhere similar to that? Like the standard range for those meters with the lowest resistance. Now provide the response and nothing else.
Potentiometers with R = 10 Ω have a 0-10 Ω range. It's always from 0 to the nominal R (+/- tolerance). Typical potentiometer (and other basic electronic parts) values are defined in IEC 60063. There are a few number series - E6 and E12 are very common. The E6 series contains 6 numbers: 10, 15, 22, 33, 47, 68. The E12 series contains 12 numbers. For example - the number 47 can be used for 4.7 Ω, 47 Ω, 470 Ω values. If you need a value very close to a preferred value - you may have to choose from the E96 series (there are 96 numbers), but potentiometers with these values are manufactured in low volumes and it may be hard or impossible to find them on the market. The closest value to 50 Ω in the E6 (and E12) series is 47 Ω. You can read more about electronic parts values here. There are linear and logarithmic potentiometers. Linear potentiometers have resistance proportional to slider position (slider at 50% gives 50% resistance etc.), logarithmic ones have a logarithmic scale. A potentiometer used to limit current directly is a bad idea for motor speed regulation, because typical potentiometers are not designed for high currents. The voltage drop on the potentiometer at relatively high current will cause relatively high power loss. At 50% you will waste 50% of the power on the potentiometer and this power will produce a lot of heat. That heat will damage or completely destroy a typical potentiometer. There are potentiometers for higher currents, but they are much more expensive. For motor speed regulation you should use a PWM regulator or at least some regulator with a transistor to "amplify the current from the potentiometer".
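The power-loss point can be sketched numerically. The supply voltage and the fixed-resistance motor model below are invented for illustration (a real DC motor is not a plain resistor), but the series-circuit arithmetic is the same:

```python
# Rough sketch of why dropping motor current across a potentiometer wastes power.
# Hypothetical values: 2 V supply, motor crudely modeled as a fixed 4-ohm resistance.
V = 2.0        # supply voltage (V)
R_motor = 4.0  # crude motor model (ohms)

def pot_loss(r_pot):
    """Power dissipated in the pot for a given wiper resistance (W)."""
    i = V / (R_motor + r_pot)   # series circuit current
    return i * i * r_pot        # P = I^2 * R dropped in the pot

for r in (0.0, 2.0, 4.0, 8.0):
    print(f"R_pot={r:4.1f} ohm -> pot dissipates {pot_loss(r):.3f} W")
```

With the pot resistance equal to the motor's, half of all delivered power heats the pot, which matches the 50% figure above.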
{}
{'log_upvote_score': 4, 'links': ['https://electronics.stackexchange.com/questions/115216', 'https://electronics.stackexchange.com', 'https://electronics.stackexchange.com/users/18069/']}
jdg_373388
stackexchange
llm_judgeable_groundtruth_similarity
41074688
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: How can you write a python script to read Tensorboard log files, extracting the loss and accuracy and other numerical data, without launching the GUI tensorboard --logdir=... ? Now provide the response and nothing else.
You can use TensorBoard's Python classes or script to extract the data: How can I export data from TensorBoard? If you'd like to export data to visualize elsewhere (e.g. iPython Notebook), that's possible too. You can directly depend on the underlying classes that TensorBoard uses for loading data: python/summary/event_accumulator.py (for loading data from a single run) or python/summary/event_multiplexer.py (for loading data from multiple runs, and keeping it organized). These classes load groups of event files, discard data that was "orphaned" by TensorFlow crashes, and organize the data by tag. As another option, there is a script ( tensorboard/scripts/serialize_tensorboard.py ) which will load a logdir just like TensorBoard does, but write all of the data out to disk as json instead of starting a server. This script is setup to make "fake TensorBoard backends" for testing, so it is a bit rough around the edges. Using EventAccumulator :

    # In [1]: from tensorflow.python.summary import event_accumulator  # deprecated
    In [1]: from tensorboard.backend.event_processing import event_accumulator

    In [2]: ea = event_accumulator.EventAccumulator('events.out.tfevents.x.ip-x-x-x-x',
       ...:     size_guidance={  # see below regarding this argument
       ...:         event_accumulator.COMPRESSED_HISTOGRAMS: 500,
       ...:         event_accumulator.IMAGES: 4,
       ...:         event_accumulator.AUDIO: 4,
       ...:         event_accumulator.SCALARS: 0,
       ...:         event_accumulator.HISTOGRAMS: 1,
       ...:     })

    In [3]: ea.Reload()  # loads events from file
    Out[3]: <tensorflow.python.summary.event_accumulator.EventAccumulator at 0x7fdbe5ff59e8>

    In [4]: ea.Tags()
    Out[4]: {'audio': [],
     'compressedHistograms': [],
     'graph': True,
     'histograms': [],
     'images': [],
     'run_metadata': [],
     'scalars': ['Loss', 'Epsilon', 'Learning_rate']}

    In [5]: ea.Scalars('Loss')
    Out[5]: [ScalarEvent(wall_time=1481232633.080754, step=1, value=1.6365480422973633),
     ScalarEvent(wall_time=1481232633.2001867, step=2, value=1.2162202596664429),
     ScalarEvent(wall_time=1481232633.3877788, step=3, value=1.4660096168518066),
     ScalarEvent(wall_time=1481232633.5749283, step=4, value=1.2405034303665161),
     ScalarEvent(wall_time=1481232633.7419815, step=5, value=0.897326648235321),
     ...]

size_guidance :

    size_guidance: Information on how much data the EventAccumulator should
    store in memory. The DEFAULT_SIZE_GUIDANCE tries not to store too much
    so as to avoid OOMing the client. The size_guidance should be a map
    from a `tagType` string to an integer representing the number of
    items to keep per tag for items of that `tagType`. If the size is 0,
    all events are stored.
{}
{'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/41074688', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/3993741/']}
jdg_373389
stackexchange
llm_judgeable_groundtruth_similarity
164819
Below is a question asked on the forum electronics.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: I'm trying to get my head wrapped around transformer operation and in the process regretting the times I snoozed in my Electromagnetics class as a EE student back when I was a lad :) I'm looking for an intuitive understanding, but not just an analogous one. I'd like it to be grounded in the actual physics of what's happening. I've found several excellent sources on the web, but they all seem to skirt this question. I've come across a few interesting hints and am now tantalizingly close, I think, but still yearning :) Fact 1: Although varying sinusoidally, the "peak-to-peak" flux, so to speak, in a transformer's core is essentially constant (for a given voltage applied to the primary), regardless of the load. My intuitive hypothesis was that variation in the "strength" of the flux was what transferred the energy from the primary to the secondary, but this fact would seem to contradict that theory. I had thought that the primary makes a bunch of flux based on the current flowing through it and the secondary sucks it up to make current of its own. No dice, it seems. Then of course there's the fact that the formula for flux involves only voltage, time (frequency), and turns :) Fact 2: The current in the primary is (approximately) 90 degrees out of phase with the voltage at no load, and approximately aligned in phase at full load. This fact seems very promising and also curiously satisfying. It would imply that the Volt-Amps (VA) of the primary is constant and only the power factor changes as the current load on the secondary increases. But I still don't get how the energy is actually being transferred. It seems vaguely like the flux is just there as an energy conductor or something and some other phenomenon is actually doing the energy transfer bit. Can someone see what I'm missing and explain what's actually happening in there? Now provide the response and nothing else.
The answer has been totally rewritten to fulfill the request of a mathematical, formal treatment, as asked for by StainlessSteelRat. The bold-face part in the center of this answer is the actual answer to your question. The remaining part is to show how I get to that formula and to connect your two facts (which I consider true) to the answer. The following formulas are important to understand the transformer: Faraday's law of induction (for a coil with \$n\$ turns): \$U=-n\frac{\mathrm d}{\mathrm dt}\Phi\$ . Hopkin's law (aka the magnetic Ohm's law): \$\Phi = \frac{nI}{R_m}\$ . Faraday's law describes how the rate of change of the magnetic flux \$\Phi\$ through a single turn of a coil is related to the voltage \$U\$ induced in that coil. Hopkin's law describes the connection between the current \$I\$ applied to a long coil with \$n\$ turns and the flux \$\Phi\$ caused by that current. The value \$R_m\$ (magnetic resistance) depends on the geometry of the core, so it is a constant value for a given transformer. If a sinusoidal voltage \$U(t)=U_0\sin(\omega t)\$ is applied to a coil, integration of Faraday's law yields \$\Phi(t) = -\frac{U_0}{\omega n}\cos(\omega t)\$ (the integration constant has been chosen in a way that the average flux vanishes). In a transformer, there are two coils on a shared core. The resulting flux in the core \$\Phi_c\$ is the sum of the flux generated by both coils on the core: \$\Phi_c = \Phi_p + \Phi_s\$ . The law of induction relates the derivative of the total flux through a coil (whether it is caused by the current of that coil or by external fields) to the induced voltage. So you can write Faraday's law for both the primary and the secondary winding as \$U_p = -n_p\frac{\mathrm d}{\mathrm dt}\Phi_c\$ ; \$U_s = -n_s\frac{\mathrm d}{\mathrm dt}\Phi_c\$ . Note that while the voltages on the primary and secondary side (as well as the turn counts) are different, the shared total flux \$\Phi_c\$ appears in both equations.
It can be eliminated, yielding the well-known transformer equation \$U_s = -\frac{n_s}{n_p}U_p\$ . More step-by-step: By integrating the \$U_p\$ equation (as already shown for a sinusoidal voltage above), the flux \$\Phi_c\$ caused by a voltage source on the primary side can be calculated. By differentiating \$\Phi_c\$ again in the \$U_s\$ equation, the secondary voltage caused by the flux can be calculated. Note that up to now, the current did not really enter the discussion of the transformer. The equation (and thus the integrated \$\Phi_c\$ ) is valid for all load conditions of an ideal transformer driven by an ideal voltage source. (This is your "Fact 1".) To add the effects of the load, consider the two parts that make up the resulting flux: \$\Phi_c = \frac{n_p I_p}{R_m} + \frac{n_s I_s}{R_m}\$ . As we assume the primary side is an ideal voltage source, nothing is known about the current \$I_p\$ , as the source would deliver whatever current is needed so that the primary voltage is as required. If we assume a simple ohmic load, the load current is known, though: it is \$I_l = U_s/R_s\$ , with \$R_s\$ being the resistance of the load. At this point, signs get very important. If I call one terminal of the secondary "ground" and the other one "live", the voltage between the "live" and the "ground" terminal has a clearly defined sign. On the other hand, the current has to flow from "ground" to "live" inside the transformer as it flows from "live" to "ground" in the load and the other way around. So if secondary voltage and load voltage are defined as the voltage from the "live" to the "ground" terminal, they are obviously equal-signed, whereas the currents, measured as "current flowing into the respective live terminal", are oppositely signed.
This means \$I_s = -I_l\$ , so the formula for the flux can be written as \$R_m\Phi_c = n_p I_p - n_s I_l = n_p I_p - n_s \frac{U_s}{R_s} = n_p I_p - \frac{n_s^2}{n_p}\frac{U_p}{R_s}\$ . The first term of the right hand side shows the flux "created" by the primary winding, and the second term the flux "consumed" by the secondary side. The part of the flux consumed by the secondary side corresponds to the energy transmitted from the primary to the secondary. Solve that equation for \$I_p\$ to obtain: \$I_p = \frac{R_m\Phi_c}{n_p} + \frac{n_s^2}{n_p^2}\frac{U_p}{R_s}\$ . Substituting a sinusoidal voltage for \$U_p\$ and using the integrated term for \$\Phi_c\$ , one obtains: \$I_p(t) = -\frac{R_m}{n_p^2 \omega}U_0\cos(\omega t) + \frac{n_s^2}{n_p^2}\frac{U_0}{R_s}\sin(\omega t)\$ . The first term on the right-hand side is out of phase with \$U(t)=U_0\sin(\omega t)\$ , so it describes reactive power and is load-independent, while the second term is in-phase, so it describes active power and is load-dependent. If the transformer is unloaded, the second term vanishes and the first, reactive term remains, causing the 90° phase shift, while on a highly loaded transformer, the second term dominates so that the phase shift gets very close to zero. This is your "Fact 2".
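The load-dependence of the phase shift in that last formula can be sketched numerically. All component values below are invented for illustration; only the structure of the formula matters:

```python
import math

# Phase of the primary current
#   I_p(t) = -(R_m/(n_p^2 w)) U0 cos(wt) + (n_s^2/n_p^2)(U0/R_s) sin(wt)
# relative to U_p(t) = U0 sin(wt). Component values are made up.
def primary_phase_deg(R_load, R_m=1e4, n_p=100, n_s=10, w=2 * math.pi * 50, U0=230.0):
    in_phase = (n_s**2 / n_p**2) * U0 / R_load   # active (load) term, coeff of sin
    quadrature = -R_m * U0 / (n_p**2 * w)        # reactive (magnetizing) term, coeff of cos
    return math.degrees(math.atan2(quadrature, in_phase))

print(primary_phase_deg(1e9))   # essentially unloaded: about -90 degrees
print(primary_phase_deg(0.1))   # heavily loaded: close to 0 degrees
```

The unloaded case is purely reactive (current lags the voltage by a quarter cycle), and as the load current grows the in-phase term dominates, exactly as "Fact 2" states.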
{}
{'log_upvote_score': 4, 'links': ['https://electronics.stackexchange.com/questions/164819', 'https://electronics.stackexchange.com', 'https://electronics.stackexchange.com/users/70923/']}
jdg_373390
stackexchange
llm_judgeable_groundtruth_similarity
12271224
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I'm using vim to do a search and replace with this command: %s/lambda\s*{\([\n\s\S]\)*//gc I'm trying to match for all word, endline and whitespace characters after a { . For instance, the entirety of this line should match: lambda { FactoryGirl.create ... Instead, it only matches up to the newline and no spaces before FactoryGirl . I've tried manually replacing all the spaces before, just in case there were tab characters instead, but no dice. Can anyone explain why this doesn't work? Now provide the response and nothing else.
The \s is an atom for whitespace; \n , though it looks similar, syntactically is an escape sequence for a newline character. Inside the collection atom [...] , you cannot include other atoms, only characters (including some special ones like \n ). From :help /[] :

    The following translations are accepted when the 'l' flag is not
    included in 'cpoptions' {not in Vi}:
        \e      <Esc>
        \t      <Tab>
        \r      <CR>    (NOT end-of-line!)
        \b      <BS>
        \n      line break, see above |/[\n]|
        \d123   decimal number of character
        \o40    octal number of character up to 0377
        \x20    hexadecimal number of character up to 0xff
        \u20AC  hex. number of multibyte character up to 0xffff
        \U1234  hex. number of multibyte character up to 0xffffffff
    NOTE: The other backslash codes mentioned above do not work inside []!

So, either specify the whitespace characters literally [ \t\n...] , use the corresponding character class expression [[:space:]...] , or combine the atom with the collection via logical or \%(\s\|[...]\) .
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/12271224', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/640517/']}
jdg_373391
stackexchange
llm_judgeable_groundtruth_similarity
34155147
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: This is probably an easy and dumb question. I create a task like this: Task<bool> myTask = new Task<bool>(() => { Debug.WriteLine("Task fired"); return true; });// I know I can create it with Task.Run, but this is for purpose of the samplemyTask.Start(); and I've few questions about this: Does it always run on ThreadPool thread? If it runs on ThreadPool is there a chance that it will be run by UI thread? And in case of havey job - can block it? In case of large number of tasks, can few be assigned to one thread (queued) and then run one after another? Or every task has its own thread? I've read some documentation, but I failed to find concrete explanation. For example Task documentation says generally about Tasks: Because the work performed by a Task object typically executes asynchronously on a thread pool thread rather than synchronously on the main application thread... Now provide the response and nothing else.
does it always run on ThreadPool thread?

Not necessarily. If you take a look at the Task constructor overload taking a TaskCreationOptions , you can pass the value TaskCreationOptions.LongRunning . If internally it uses TaskScheduler.Default , this will create a new thread which isn't one of the threadpool's. Generally, it is recommended that you use Task.Run for queuing threads on the thread-pool. If you want to pass a custom TaskScheduler , you can use the more advanced Task.Factory.StartNew , but I'd recommend only using it when really needed.

if it runs on ThreadPool is there a chance that it will be run by UI thread? And in case of havey job - can block it?

No. The UI thread isn't part of the pool used by the ThreadPool.

in case of large number of tasks, can few be assigned to one thread (queued) and then run one after another? Or every task has its own thread?

That is implementation specific to the TaskScheduler being used. If we look at the ThreadPoolTaskScheduler (which is the default in case no custom one is passed) the threadpool starts with a constant amount of threads, and scales as it needs. It is not guaranteed that each delegate will execute on a different thread. You can, however, create a custom TaskScheduler , where you control the mechanism of scheduling tasks for execution.
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/34155147', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/2681948/']}
jdg_373392
stackexchange
llm_judgeable_groundtruth_similarity
33233902
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I'm new to Android development and I've got big problems with creating custom toolbar. My requirements: Custom button on the left (icon + text) Divider after custom button Buttons height should be the same as toolbar (no margins) Here is sample image which explains my requirements: I was trying to use actionBar.setCustomView(v); but it didn't solve my problems: Right buttons have top/bottom margin - they are smaller than toolbar I was not able to add the divider. Left button (from custom view) was smaller than toolbar height. My questions: Do I really need custom view to add custom button on the left? How to add divider on the left? How to make buttons height same as toolbar height? Now provide the response and nothing else.
The Toolbar is basically a FrameLayout so you can add inside the layout-tag whatever you want. In your case something like the following seems sufficient:

layout.xml

    <android.support.v7.widget.Toolbar
        android:id="@+id/toolbar"
        android:layout_width="match_parent"
        android:layout_height="?actionBarSize"
        android:background="?colorPrimary"
        app:contentInsetLeft="0dp"
        app:contentInsetStart="0dp"
        app:popupTheme="@style/ThemeOverlay.AppCompat.Light">

        <LinearLayout
            android:layout_width="wrap_content"
            android:layout_height="?attr/actionBarSize"
            android:divider="@drawable/divider"
            android:dividerPadding="8dp"
            android:orientation="horizontal"
            android:showDividers="end">

            <TextView
                android:id="@+id/toolbar_save"
                style="@style/TextAppearance.Widget.AppCompat.Toolbar.Subtitle"
                android:layout_width="match_parent"
                android:layout_height="match_parent"
                android:background="?attr/selectableItemBackground"
                android:drawableLeft="@drawable/ic_action_check"
                android:drawablePadding="8dp"
                android:gravity="center_vertical"
                android:paddingLeft="16dp"
                android:paddingRight="16dp"
                android:text="Save"
                android:textAllCaps="true" />

        </LinearLayout>

    </android.support.v7.widget.Toolbar>

divider.xml

Add this to your /res/drawable folder. This is used as the LinearLayout divider in the code above.

    <?xml version="1.0" encoding="utf-8"?>
    <shape xmlns:android="http://schemas.android.com/apk/res/android"
        android:shape="rectangle">
        <size android:width="1dp" />
        <solid android:color="@android:color/white" />
    </shape>

Code

    private void setupToolbar() {
        Toolbar mToolbar = (Toolbar) findViewById(R.id.toolbar);
        setSupportActionBar(mToolbar);

        // Hide the title
        getSupportActionBar().setTitle(null);

        // Set onClickListener to customView
        TextView tvSave = (TextView) findViewById(R.id.toolbar_save);
        tvSave.setOnClickListener(new View.OnClickListener() {
            @Override
            public void onClick(View v) {
                // TODO
            }
        });
    }

In terms of the items on the right side: Simply use the default onCreateOptionsMenu method and inflate the corresponding R.menu.* resource. Result
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/33233902', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/716027/']}
jdg_373393
stackexchange
llm_judgeable_groundtruth_similarity
21244052
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I haven't been able to get a local sound playing using the Cordova Media plugin. I have the right options in my platforms\android\res\xml\config.xml here...

    <content src="index.html" />
    <access origin="*" />
    <preference name="loglevel" value="DEBUG" />
    <feature name="App">
        <param name="android-package" value="org.apache.cordova.App" />
    </feature>
    <feature name="Device">
        <param name="android-package" value="org.apache.cordova.device.Device" />
    </feature>
    <feature name="File">
        <param name="android-package" value="org.apache.cordova.file.FileUtils" />
    </feature>
    <feature name="Media">
        <param name="android-package" value="org.apache.cordova.media.AudioHandler" />
    </feature>
    <feature name="SplashScreen">
        <param name="android-package" value="org.apache.cordova.splashscreen.SplashScreen" />
    </feature>

...and the permission option in my AndroidManifest.xml ...

    <uses-sdk android:minSdkVersion="10" android:targetSdkVersion="19" />
    <uses-permission android:name="android.permission.RECORD_AUDIO" />
    <uses-permission android:name="android.permission.MODIFY_AUDIO_SETTINGS" />
    <uses-permission android:name="android.permission.WRITE_EXTERNAL_STORAGE" />
    <uses-permission android:name="android.permission.READ_PHONE_STATE" />

However when I try to play a sound I get nothing: Note: I'm using Apache Cordova 3.3.0 Now provide the response and nothing else.
Managed to figure it out in the end: you have to reference the file paths relative to the window.location.pathname as when run through Apache Cordova the root is file:///android_asset/www/ so the sounds must be referenced like so /android_asset/www/path/to/a/sound.mp3 . Here's my full solution:

    var audio = {
        root: '',
        localPath: '/assets/sounds/',
        manifest: {
            bang: 'bang.mp3',
            boom: 'boom.mp3',
            splat: 'splat.mp3',
            splash: 'splash.mp3'
        },
        sounds: {},
        parse: function () {
            var p = window.location.pathname;
            this.root = p.substring(0, p.lastIndexOf('/')) + this.localPath;
            console.log(this.root);

            for (var sound in this.manifest) {
                var src = this.root + this.manifest[sound];
                console.log('Getting sound: ' + src);
                this.sounds[sound] = (typeof Media !== 'undefined') ? new Media(src) : new Audio(src);
                console.log('Created sound:');
                $('#sounds').append($('<li></li>').html(src));
            }
        }
    };

    audio.parse();

So for instance my index.html is in my root which is essentially ./ from my perspective (the www folder). All my sounds are within two sub folders ./assets/sounds/ . When hosting the application on my PC window.location.pathname appears as / : However when hosted on my mobile device it is:
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/21244052', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/2481696/']}
jdg_373394
stackexchange
llm_judgeable_groundtruth_similarity
25603390
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I got a small problem, let me start with the code class ViewController: UIViewController {@IBOutlet weak var LBoutput: UILabel!@IBAction func BTclick(sender: AnyObject) { var url = NSURL(string: "http://google.com") println("test0") let getdata = NSURLSession.sharedSession().dataTaskWithURL(url){(data ,response , error) in var htmlContent = NSString(data: data, encoding: NSUTF8StringEncoding) println("test1") println("test2") self.LBoutput.text = "test6" } println("test3") getdata.resume() println("test4") LBoutput.text = "test5"} This codes give me a output in the console of test0 test3 test4 test1 test2 Also the label updates to "test5" with no time, but the label to update to "test6" takes like 30sec. I have no clue why this takes so long, anyone a idea? Now provide the response and nothing else.
Usual problem of updating UI in a secondary thread: Your closure is obviously not running on the main thread, as the URL task is asynchronous. So updating the label on the closure will have to wait for the main thread to run its update loop. The way to fix it is to wrap the .text = call to force it to run on the main thread (which is where the UI stuff should run anyway): dispatch_async(dispatch_get_main_queue()) { self.LBoutput.text = "test6" }
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/25603390', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/997120/']}
jdg_373395
stackexchange
llm_judgeable_groundtruth_similarity
323736
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: When you do numerically experiments with the following expression it becomes pretty clear that the limit should be the max function $\max(0,x)$: $$\lim_{y\to \infty}\frac{\ln(e^{x y}+1)}y$$ How can you prove that? Now provide the response and nothing else.
If $x>0$, then $\lim\limits_{y\to \infty}\dfrac{\ln(e^{xy}+1)}{y} = \lim\limits_{y\to \infty}\dfrac{\ln(e^{xy})}{y} = x$. If $x<0$, then $\lim\limits_{y\to \infty}\dfrac{\ln(e^{xy}+1)}{y} = \lim\limits_{y\to \infty}\dfrac{\ln(0+1)}{y} = 0$. If $x=0$, then $\lim\limits_{y\to \infty}\dfrac{\ln(e^{xy}+1)}{y} = \lim\limits_{y\to \infty}\dfrac{\ln(1+1)}{y} = 0$.
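The three cases can be checked numerically; below is a small sketch that evaluates the quotient in a numerically stable way (splitting off $xy$ for $x>0$ so the exponential never overflows):

```python
import math

def f(x, y):
    """ln(exp(x*y) + 1) / y, computed without overflow."""
    t = x * y
    if t > 0:
        # ln(e^t + 1) = t + ln(1 + e^-t)
        return (t + math.log1p(math.exp(-t))) / y
    return math.log1p(math.exp(t)) / y

for x in (2.0, -2.0, 0.0):
    print(x, [round(f(x, y), 6) for y in (10, 100, 1000)])
```

As y grows, the quotient approaches 2, 0 and 0 respectively, i.e. max(0, x) in every case.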
{}
{'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/323736', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/346/']}
jdg_373396
stackexchange
llm_judgeable_groundtruth_similarity
571035
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: I've looked at an example in my textbook, it is: $2^{x}=129$ $\ln \left( 2^{x}\right) =\ln \left( 129\right) $ $x\ln \left( 2\right) =\ln \left( 129\right) $ $ x=\dfrac {\ln \left( 129\right) }{\ln \left( 2\right) }$ My question is how is it that you can take logs to the base e and still obtain the right $x$? Shouldn't you have to take logs to base 2 to find the exponent $x$ that goes on 2 to give 129? Why is it that I can use the natural logarithm to find x in this instance? Also I've tried for other bases such as 3 as well and I get the right answer, why is this? Furthermore when we usually solve an exponential function such as the above we would do: $2^{x}=129$ $\log _{2}\left( 129\right) =x$ But how is it in this example they take logs of both sides? I'm sorry if this doesn't make sense, I'm just very confused so take it easy on me, I'm studying at a pre-calculus mathematics level. Thank you for your help! EDIT: Okay I've opened a bounty on this question, partly because although i've received a lot of responses I still don't seem to understand this and hopefully someone else will come along with a fresh perspective or perhaps build on what others have wrote beforehand in a way that's conducive to my understanding. I hope this does not offend any of the people who have answered beforehand, its not your fault i cannot understand. That said, what I would like to understand is the following: (1) Why is it that if I have an equation $2^x=8$, that taking logs to any base b (where b>0) would always give me the same answer for x (i.e. 3)?: $$\eqalign{ & {\log _2}({2^x}) = {\log _2}(8) \cr & {\log _3}({2^x}) = {\log _3}(8) \cr & {\log _4}({2^x}) = {\log _4}(8) \cr} $$ How is it they all give the value of $x=3$? 
Shouldn't taking the equation $2^x=8$ to the base of say ${\log _2}$ give me a differing value to an equation taken to ${\log _4}$? So that leads me onto my next question: (2) What property of logarithms does this? WHY is this so? I think from what I've gathered from here is it has to do with how one logarithm is proportional or scalable to another? But I don't understand how this applies, if someone could explain this I think i'd be able to understand. I'd like any answers if possible to refer to the example I've already used, if possible. Thank you. Now provide the response and nothing else.
Note that if you solve $e^y=2$ you get $y=\ln(2)$. This tells us that $$2=e^{\ln(2)}$$ which you probably already know. Now we can solve this equation two ways: Method 1 : as you solved it. $$2^x=129 \Rightarrow x =\log_2(129) \,.$$ Method 2 : Well, since $2=e^{\ln(2)}$ you can rewrite the equation as $$(e^{\ln(2)})^x= 129 \,. $$ This is $$e^{(x \ln (2))} =129 \,.$$ Since this is an exponential with base $e$, it suddenly makes sense to take the natural logarithm. Then you get $$x \ln(2) = \ln(129) \,.$$ Intuitively, this is the reason why you can take a different logarithm than the obvious one, in a hidden way you use that any number $a$ can be rewritten as $b^{\log_b(a)}$ and this way any exponential with basis $a$ becomes an exponential with basis $b$. This process is called " the change of base for logarithm formula", and it can be done much faster by the simple formula $$\log_a(b)=\frac{\log_c(b)}{\log_c(a)} \,.$$
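This cancellation can be checked numerically: solving $2^x = 8$ with logarithms of several different bases always yields $x = 3$, because the base-dependent scale factor drops out of the quotient.

```python
import math

# Change-of-base sanity check: log_a(b) = log_c(b) / log_c(a),
# so log_c(8) / log_c(2) = 3 no matter which base c we pick.
for base in (2, 3, 10, math.e):
    x = math.log(8, base) / math.log(2, base)
    print(base, x)
```

Every base prints a value of x equal to 3 up to floating-point rounding, which is exactly point (1) of the question.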
{}
{'log_upvote_score': 5, 'links': ['https://math.stackexchange.com/questions/571035', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/45635/']}
jdg_373397
stackexchange
llm_judgeable_groundtruth_similarity
10009427
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I have a function that return [[]] , and I want to test the result as unit test. But I found that the expression [[]] == [[]] return false . Here a simple test code:

    # [[]] == [[]];;
    - : bool = false

Can someone explain me why this expression is evaluated as false? Thanks. Now provide the response and nothing else.
Use = since you have structural equality for comparing two values:

    # [[]] = [[]];;
    - : bool = true

Because == is reference equality , it only returns true if you refer to the same memory location:

    let a = [[]]
    let b = a

    # b == a;;
    - : bool = true
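The same structural-vs-reference distinction exists in other languages, which may help the intuition. For example (analogy only; the OCaml operators are = and ==), Python's == is structural while `is` is identity:

```python
# Structural equality vs object identity, mirroring OCaml's = vs ==.
a = [[]]
b = [[]]
print(a == b)   # True: same structure
print(a is b)   # False: two distinct objects in memory

c = a
print(c is a)   # True: same object
```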
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/10009427', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1281725/']}
jdg_373398
stackexchange
llm_judgeable_groundtruth_similarity
25588388
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I have a page. Vertically, it has two parts. The top is a navbar. The lower section is the content of the page. The lower section has the bootstrap "container" class. Within the container, the left-side is a navigation area. The right-side is the content, which has another navbar. Here is what I have for the navbar. <div class="navbar navbar-default" role="navigation"> <div class="navbar-header"> <button type="button" class="navbar-toggle" data-toggle="collapse" data-target=".navbar-collapse"> <span class="sr-only">Toggle navigation</span> <span class="icon-bar"></span> <span class="icon-bar"></span> <span class="icon-bar"></span> </button> </div> <div class="navbar-collapse collapse" id="control-panel"> <ul class="nav navbar-nav"> <li class="dropdown"> <a href="#" >Menu 1</a> </li> <li class="dropdown"> <a href="#" >Menu 2</a> </li> <li class="dropdown"> <a href="#" >Menu 3</a> </li> </ul> </div><!--/.nav-collapse --></div> <!-- end of navbar --> Note that I removed the immediate "container" child of "navbar" found in normal navbar examples, as in this link: http://getbootstrap.com/examples/navbar/ . With this container, the navbar does not work well with dropdowns. So far the navbar works, but I would like to increase its breakpoint for collapse. I found a few related posts at SO such as Bootstrap 3 Navbar Collapse but still have no idea about how to make it work in my case. I have this within-page navbar only on this page, not site-wide. Now provide the response and nothing else.
Consider this:

    Try(1) match {
      case Success(i) => i
      case Failure(t) => 0 // t is the `Throwable`
    }

This works because Success and Failure are sub classes of the abstract class Try . However, the following code fails to compile, because you're no longer matching on a generic Try , and instead a Failure which can never be an instance of Success .

    Failure(new Exception("a")) match {
      case Success(i) => "a" // You can see it compiles if you remove this line.
      case Failure(e) => "b"
      case _ => "c"
    }

This is like trying to match an Integer to a String , it doesn't really make sense. If you want to get the Throwable via pattern matching, see the first snippet of code. Another way you could extract the Throwable would be to use the failed method on your Try , which will wrap the Throwable from a failure within Success .

    scala> val t: Throwable = Try(throw new Exception).failed.get
    t: Throwable = java.lang.Exception

Calling this on a Success , however, will throw another exception.
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/25588388', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/997474/']}
jdg_373399
stackexchange
llm_judgeable_groundtruth_similarity
594901
Below is a question asked on the forum physics.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: In the Feynman Lectures, he lays out how in special relativity, the mass is adjusted by a factor of $\sqrt{1 - v^2 / c^2}$ and then writes: For those who want to learn just enough about it so they can solve problems, that is all there is to the theory of relativity – it just changes Newton’s law by introducing a correction factor to the mass Does Feynman mean that you can derive the other phenomena described in special relativity (time dilation, length contraction) from only this correction to the mass? If so, how? If not, what other postulates are necessary? Now provide the response and nothing else.
The claim is wrong, because Newton's second law does not become $F = m\gamma a$ : differentiating the relativistic momentum $p = \gamma m v$ produces velocity-dependent factors that no single corrected mass can reproduce, so time dilation and length contraction cannot be recovered from that recipe alone.
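To make this explicit (a standard special-relativity derivation, added here as support for the answer rather than something Feynman states): starting from $F = dp/dt$ with $p = \gamma m \vec v$,

```latex
\vec F \;=\; \frac{d}{dt}\!\left(\gamma m \vec v\right)
      \;=\; \gamma m\,\vec a \;+\; m \vec v\,\frac{d\gamma}{dt}
      \;=\; \gamma^{3} m\,\vec a_{\parallel} \;+\; \gamma m\,\vec a_{\perp}
```

so the "effective mass" is $\gamma^{3} m$ for acceleration parallel to the velocity but $\gamma m$ for acceleration perpendicular to it. No single correction factor applied to the mass in $F = ma$ can reproduce both.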
{}
{'log_upvote_score': 4, 'links': ['https://physics.stackexchange.com/questions/594901', 'https://physics.stackexchange.com', 'https://physics.stackexchange.com/users/28496/']}
jdg_373400
stackexchange
llm_judgeable_groundtruth_similarity
5851497
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: In an abstract base class if we have some static fields then what happens to them ? Is their scope the classes which inherit from this base class or just the type from which it is inheriting (each subclass has it's own copy of the static field from the abstract base class)? Now provide the response and nothing else.
static members are entirely specific to the declaring class; subclasses do not get separate copies. The only exception here is generics; if an open generic type declares static fields, the field is specific to that exact combination of type arguments that make up the closed generic type; i.e. Foo<int> would have separate static fields to Foo<string> , assuming the fields are defined on Foo<T> .
{}
{'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/5851497', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/313421/']}
jdg_373401
stackexchange
llm_judgeable_groundtruth_similarity
81365
Below is a question asked on the forum dsp.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: The resolution of a window is typically defined by the width of its spectral mainlobe (3dB-width or null-to-null width). It is often stated that the rectangular window has the best resolution among all windows. In my work, I'd be looking for a reference where it is rigorously proved that there exists no window that has a better resolution. Does anyone know where I could find something like this (at least I could not find anything yet)? Thanks for your help. Now provide the response and nothing else.
Wikipedia has a very nice article on window functions https://en.wikipedia.org/wiki/Window_function The basic trade-off in windows is main-lobe width vs. side-lobe height and overall attenuation. The rectangular window is at one end of this spectrum: it is the "widest in time", i.e. it has the most energy at the window edges (compared to the other "normal" windows). However, it also has very poor side-lobe behavior.

rigorously prove that there exists no window that has a better resolution.

You can't prove this, since it's not true. Not unless you put some constraints on what exactly qualifies as a window. Per my argument above, the width of the main lobe is determined by how much energy is at the edges. If that's what you want, you don't need to stop at a rectangular window: you can crank it up some more. Raise the edges and reduce the middle. For example you can use a window that is $w[n] = 1-hanning[n]$ . If you do the spectral analysis you see that this is actually narrower than a rectangular window. But, of course, the side lobes are terrible.
{}
{'log_upvote_score': 5, 'links': ['https://dsp.stackexchange.com/questions/81365', 'https://dsp.stackexchange.com', 'https://dsp.stackexchange.com/users/59982/']}
jdg_373402
stackexchange
llm_judgeable_groundtruth_similarity
185899
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: Recently I was asked this during a job interview. I was honest and said I knew how a symbolic link behaves and how to create one, but do not understand the use of a hard link and how it differs from a symbolic one. Now provide the response and nothing else.
Underneath the file system, files are represented by inodes. (Or is it multiple inodes? Not sure.) A file in the file system is basically a link to an inode. A hard link, then, just creates another file with a link to the same underlying inode. When you delete a file, it removes one link to the underlying inode. The inode is only deleted (or deletable/over-writable) when all links to the inode have been deleted. A symbolic link is a link to another name in the file system. Once a hard link has been made, the link is to the inode. Deleting, renaming, or moving the original file will not affect the hard link, as it links to the underlying inode. Any changes to the data on the inode are reflected in all files that refer to that inode. Note: Hard links are only valid within the same file system. Symbolic links can span file systems, as they are simply the name of another file.
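The inode behavior described above is easy to see for yourself. Here is a small sketch using Python's os module on a POSIX filesystem (the file names are made up for the example):

```python
import os
import tempfile

d = tempfile.mkdtemp()
orig = os.path.join(d, "original.txt")
hard = os.path.join(d, "hard.txt")
soft = os.path.join(d, "soft.txt")

with open(orig, "w") as f:
    f.write("hello")

os.link(orig, hard)     # hard link: a second name for the SAME inode
os.symlink(orig, soft)  # symlink: a new file whose content is the path

# Both names resolve to one inode, so the inode's link count is now 2.
print(os.stat(orig).st_ino == os.stat(hard).st_ino)  # True
print(os.stat(orig).st_nlink)                        # 2

os.remove(orig)  # removes one link; the inode still has a link left

print(open(hard).read())  # 'hello' -- the data survives via the hard link
print(os.path.lexists(soft), os.path.exists(soft))   # True False: dangling
```

The last line shows the symlink still exists as a name, but following it fails because the name it points to is gone.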
{}
{'log_upvote_score': 11, 'links': ['https://Stackoverflow.com/questions/185899', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/4960/']}
jdg_373403
stackexchange
llm_judgeable_groundtruth_similarity
43064107
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I'm working on an offline network and want to install angular-cli using npm.I have a zip file of angular-cli and using the latest node and npm version.I'm using the command: npm install ./angular-cli-master to install angular-cli from the folder.But I keep getting this error telling me I don't have an internet connection (which is ok).So how can I install this angular-cli while offline using the zip I downloaded from Github? Thanks for your help. Now provide the response and nothing else.
You simply copy the package and all dependencies in your node_modules folder, inside the project for local installation, or in the global folder ( npm config get prefix to see where it is located) for a global installation. The behavior of npm install is to check for the dependencies, and install them first. When it doesn't find them installed, nor the local file containing them, it tries to download them. Since all of those steps fail (you don't have the dependency installed, it isn't available at the expected location, and it can't download it), the installation fails. You can find the dependency list in the package.json of each module, but since it is recursive, it can take a long time to have everything set right if you do it manually; npm does it by recursion. For you, the easiest way would be to create a new folder on the connected PC, and inside it run npm install angular-cli , then zip the folder and transfer it to the offline machine.
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/43064107', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/6396974/']}
jdg_373404
stackexchange
llm_judgeable_groundtruth_similarity
3195330
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: If I have an array created like this:

MyType *array = new MyType[10];

And I want to overwrite one of the elements, do I have to delete first the old element like this:

delete &array[5];
array[5] = *(new MyType());

Or is this completely wrong and do I have to work with something like "pointers to pointers" to fix this job? If so, how please.... Thanks Now provide the response and nothing else.
It's an array of values , not of pointers. So you'd just do

array[5] = MyType();

This requires MyType to support the assignment operator. Incidentally, there's rarely a need for manual array allocation like this in C++. Do away with the new and delete and use std::vector instead:

std::vector<MyType> array(10);
array[5] = MyType();

Note, there's no need to delete anything.
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/3195330', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/155137/']}
jdg_373405
stackexchange
llm_judgeable_groundtruth_similarity
35575674
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I have the below list of images present on my system and want to copy all these images to a remote machine.

REPOSITORY          TAG       IMAGE ID       CREATED        VIRTUAL SIZE
u14_py269           latest    6a1ec0b508b3   4 days ago     885.9 MB
u12_py273           latest    c2a804894851   4 days ago     686 MB
u12_core            latest    0d61eba80df2   4 days ago     629.1 MB
c6_py266            latest    cb1a94742d59   4 days ago     1.32 GB
c6_core             latest    77c2ed19d87f   4 days ago     1.278 GB
c7_py275            latest    bb1d3de68452   4 days ago     1.117 GB
c7_core             latest    ca14a76e9cca   4 days ago     1.081 GB
u14_py35            latest    d110c7e4a1f5   5 days ago     914.5 MB
u14_py34            latest    085a37cb8614   5 days ago     830.7 MB
u14_py276           latest    8927c6167930   5 days ago     834.1 MB
u14_core            latest    93ead5abc25b   5 days ago     776.9 MB
centos              centos6   36877b5acebb   5 days ago     228.9 MB
ubuntu              latest    36248ae4a9ac   5 days ago     188 MB
ubuntu              12.04     94a7cb19a65b   5 days ago     137.8 MB
edgester/gerrit     latest    ce4e3238052a   6 days ago     735.2 MB
u14_as374_py276     latest    fa5fb7189d70   11 days ago    1.497 GB
c721_as373_py275    latest    03ccf6961d0c   11 days ago    844.3 MB
c721_as373_py35     latest    b5fece3dd45b   11 days ago    1.127 GB
c171_con_core       latest    8af0d24a38a0   2 weeks ago    377.2 MB
u14_as374_php55     latest    29df638e363a   3 weeks ago    1.073 GB
j_u14_as374_php55   latest    29df638e363a   3 weeks ago    1.073 GB
centos              centos7   c8a648134623   8 weeks ago    196.6 MB
centos              latest    c8a648134623   8 weeks ago    196.6 MB
j_u14_as374_py276   latest    28f379d60882   10 weeks ago   871.5 MB
ubuntu              14.04     89d5d8e8bafb   10 weeks ago   187.9 MB

Currently I am using the method suggested in save and load Docker images , but I believe there must be a better way to deal with all images. Now provide the response and nothing else.
If you want to export all images at once, create one big tar file:

docker save $(docker images -q) -o /path/to/save/mydockersimages.tar

If you want to save multiple images in one .tar file:

IDS=$(docker images | awk '{if ($1 ~ /^(debian|centos)/) print $3}')
docker save $IDS -o /path/to/save/somedockersimages.tar

Finally, if you want to export many images with one .tar file per image (not disk efficient: common layers are saved in each .tar file):

docker images | awk '{if ($1 ~ /^(openshift|centos)/) print $1 " " $2 " " $3 }' | tr -c "a-z A-Z0-9_.\n-" "%" | while read REPOSITORY TAG IMAGE_ID
do
  echo "== Saving $REPOSITORY $TAG $IMAGE_ID =="
  docker save -o /path/to/save/$REPOSITORY-$TAG-$IMAGE_ID.tar $IMAGE_ID
done

You may also want to save the list of images so that the restored images can be tagged:

docker images | sed '1d' | awk '{print $1 " " $2 " " $3}' > mydockersimages.list

On the remote machine , you can load (import) the images:

docker load -i /path/to/save/mydockersimages.tar

and tag the imported images:

while read REPOSITORY TAG IMAGE_ID
do
  echo "== Tagging $REPOSITORY $TAG $IMAGE_ID =="
  docker tag "$IMAGE_ID" "$REPOSITORY:$TAG"
done < mydockersimages.list

For more information about save/load, read: How to copy Docker images from one host to another without using a repository
{'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/35575674', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/2710873/']}
jdg_373406
stackexchange
llm_judgeable_groundtruth_similarity
42842511
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: Let's consider a well-known ASP.NET Core scenario. Firstly we add the middleware:

public void Configure(IApplicationBuilder app)
{
    app.UseCookieAuthentication(new CookieAuthenticationOptions()
    {
        AuthenticationScheme = "MyCookie",
        CookieName = "MyCookie",
        LoginPath = new PathString("/Home/Login/"),
        AccessDeniedPath = new PathString("/Home/AccessDenied/"),
        AutomaticAuthenticate = true,
        AutomaticChallenge = true
    });
    //...
}

Then serialize a principal:

await HttpContext.Authentication.SignInAsync("MyCookie", principal);

After these two calls an encrypted cookie will be stored at the client side. You can see the cookie (in my case it was chunked) in any browser devtools: It's not a problem (and not a question) to work with cookies from application code. My question is: how to decrypt the cookie outside the application ? I guess a private key is needed for that, how to get it? I checked the docs and found only common words:

This will create an encrypted cookie and add it to the current response. The AuthenticationScheme specified during configuration must also be used when calling SignInAsync. Under the covers the encryption used is ASP.NET's Data Protection system. If you are hosting on multiple machines, load balancing or using a web farm then you will need to configure data protection to use the same key ring and application identifier.

So, is it possible to decrypt the authentication cookie, and if so how? UPDATE #1: Based on Ron C's great answer and comments, I've ended up with this code:

public class Startup
{
    // constructor is omitted...

    public void ConfigureServices(IServiceCollection services)
    {
        services.AddDataProtection().PersistKeysToFileSystem(
            new DirectoryInfo(@"C:\temp-keys\"));
        services.AddMvc();
    }

    public void Configure(IApplicationBuilder app)
    {
        app.UseCookieAuthentication(new CookieAuthenticationOptions()
        {
            AuthenticationScheme = "MyCookie",
            CookieName = "MyCookie",
            LoginPath = new PathString("/Home/Index/"),
            AccessDeniedPath = new PathString("/Home/AccessDenied/"),
            AutomaticAuthenticate = true,
            AutomaticChallenge = true
        });
        app.UseStaticFiles();
        app.UseMvcWithDefaultRoute();
    }
}

public class HomeController : Controller
{
    public async Task<IActionResult> Index()
    {
        await HttpContext.Authentication.SignInAsync("MyCookie", new ClaimsPrincipal());
        return View();
    }

    public IActionResult DecryptCookie()
    {
        var provider = DataProtectionProvider.Create(new DirectoryInfo(@"C:\temp-keys\"));
        string cookieValue = HttpContext.Request.Cookies["MyCookie"];
        var dataProtector = provider.CreateProtector(
            typeof(CookieAuthenticationMiddleware).FullName, "MyCookie", "v2");
        UTF8Encoding specialUtf8Encoding = new UTF8Encoding(false, true);
        byte[] protectedBytes = Base64UrlTextEncoder.Decode(cookieValue);
        byte[] plainBytes = dataProtector.Unprotect(protectedBytes);
        string plainText = specialUtf8Encoding.GetString(plainBytes);
        return Content(plainText);
    }
}

Unfortunately this code always produces an exception on the Unprotect method call:

CryptographicException in Microsoft.AspNetCore.DataProtection.dll: Additional information: The payload was invalid.

I tested different variations of this code on several machines without positive result. Probably I made a mistake, but where? UPDATE #2: My mistake was the DataProtectionProvider hasn't been set in UseCookieAuthentication. Thanks to @RonC again. Now provide the response and nothing else.
Decrypting the Authentication Cookie without needing the keys

It's worth noting that you don't need to gain access to the keys to decrypt the authentication cookie. You simply need to use the right IDataProtector created with the right purpose parameter and subpurpose parameters. Based on the CookieAuthenticationMiddleware source code https://github.com/aspnet/Security/blob/rel/1.1.1/src/Microsoft.AspNetCore.Authentication.Cookies/CookieAuthenticationMiddleware.cs#L4 it looks like the purpose you need to pass is typeof(CookieAuthenticationMiddleware) . And since they are passing additional parameters to the IDataProtector you will need to match them. So this line of code should get you an IDataProtector that can be used to decrypt the authentication cookie:

var dataProtector = provider.CreateProtector(
    typeof(CookieAuthenticationMiddleware).FullName, Options.AuthenticationScheme, "v2");

Note that Options.AuthenticationScheme is just "MyCookie" in this case since that's what it was set to in the Configure method of the startup.cs file.
Here is an example action method for decrypting your authentication cookie two different ways:

public IActionResult DecryptCookie()
{
    // Get the encrypted cookie value
    string cookieValue = HttpContext.Request.Cookies["MyCookie"];

    // Get a data protector to use with either approach
    var dataProtector = provider.CreateProtector(
        typeof(CookieAuthenticationMiddleware).FullName, "MyCookie", "v2");

    // Get the decrypted cookie as plain text
    UTF8Encoding specialUtf8Encoding = new UTF8Encoding(encoderShouldEmitUTF8Identifier: false, throwOnInvalidBytes: true);
    byte[] protectedBytes = Base64UrlTextEncoder.Decode(cookieValue);
    byte[] plainBytes = dataProtector.Unprotect(protectedBytes);
    string plainText = specialUtf8Encoding.GetString(plainBytes);

    // Get the decrypted cookie as an AuthenticationTicket
    TicketDataFormat ticketDataFormat = new TicketDataFormat(dataProtector);
    AuthenticationTicket ticket = ticketDataFormat.Unprotect(cookieValue);

    return View();
}

This method uses an IDataProtectionProvider called provider that is constructor injected.

Decrypting the Authentication Cookie when persisting keys to a directory

If you want to share cookies between applications then you might decide to persist the data protection keys to a directory. This can be done by adding the following to the ConfigureServices method of the startup.cs file:

services.AddDataProtection().PersistKeysToFileSystem(
    new DirectoryInfo(@"C:\temp-keys\"));

BE CAREFUL though because the keys are not encrypted, so it's up to you to protect them!!! Only persist the keys to a directory if you absolutely must (or if you are just trying to understand how the system works). You will also need to specify a cookie DataProtectionProvider that uses those keys.
This can be done with the help of the UseCookieAuthentication configuration in the Configure method of the startup.cs class like so:

app.UseCookieAuthentication(new CookieAuthenticationOptions()
{
    DataProtectionProvider = DataProtectionProvider.Create(new DirectoryInfo(@"C:\temp-keys\")),
    AuthenticationScheme = "MyCookie",
    CookieName = "MyCookie",
    LoginPath = new PathString("/Home/Login"),
    AccessDeniedPath = new PathString("/Home/AccessDenied"),
    AutomaticAuthenticate = true,
    AutomaticChallenge = true
});

With that configuration done, you can now decrypt the authentication cookie with the following code:

public IActionResult DecryptCookie()
{
    ViewData["Message"] = "This is the decrypt page";
    var user = HttpContext.User; // User will be set to the ClaimsPrincipal

    // Get the encrypted cookie value
    string cookieValue = HttpContext.Request.Cookies["MyCookie"];

    var provider = DataProtectionProvider.Create(new DirectoryInfo(@"C:\temp-keys\"));

    // Get a data protector to use with either approach
    var dataProtector = provider.CreateProtector(
        typeof(CookieAuthenticationMiddleware).FullName, "MyCookie", "v2");

    // Get the decrypted cookie as plain text
    UTF8Encoding specialUtf8Encoding = new UTF8Encoding(encoderShouldEmitUTF8Identifier: false, throwOnInvalidBytes: true);
    byte[] protectedBytes = Base64UrlTextEncoder.Decode(cookieValue);
    byte[] plainBytes = dataProtector.Unprotect(protectedBytes);
    string plainText = specialUtf8Encoding.GetString(plainBytes);

    // Get the decrypted cookie as an AuthenticationTicket
    TicketDataFormat ticketDataFormat = new TicketDataFormat(dataProtector);
    AuthenticationTicket ticket = ticketDataFormat.Unprotect(cookieValue);

    return View();
}

You can learn more about this latter scenario here: https://learn.microsoft.com/en-us/aspnet/core/security/data-protection/compatibility/cookie-sharing
{}
{'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/42842511', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/5112433/']}
jdg_373407
stackexchange
llm_judgeable_groundtruth_similarity
252597
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: In my Windows XP Task Manager, some processes display a higher value in the Mem Usage column than the VMSize. My Firefox instance, for example shows 111544 K as mem usage and 100576 K as VMSize. According to the help file of Task Manager Mem Usage is the working set of the process and VMSize is the committed memory in the Virtual address space. My question is, if the number of committed pages for a process is A and the number of pages in physical memory for the same process is B, shouldn't it always be B ≀ A? Isn't the number of pages in physical memory per process a subset of the committed pages? Or is this something to do with sharing of memory among processes? Please explain. (Perhaps my definition of 'Working Set' is off the mark). Thanks. Now provide the response and nothing else.
Virtual Memory

Assume that your program (e.g. Oracle) allocated 100 MB of memory upon startup - your VM size goes up by 100 MB though no additional physical / disk pages are touched. i.e. VM is nothing but memory book keeping. The total available physical memory + paging file memory is the maximum memory that ALL the processes in the system can allocate. The system does this so that it can ensure that at any point in time, if the processes actually start consuming all the memory they allocated, the OS can supply the actual physical pages required.

Private Memory

If the program copies 10 MB of data into that 100 MB, the OS senses that no pages have been allocated to the process corresponding to those addresses and assigns 10 MB worth of physical pages into your process's private memory. (This process is called a page fault.)

Working Set

Definition: The working set is the set of memory pages that have been recently touched by a program. At this point these 10 pages are added to the working set of the process. If the process then goes and copies this data into another 10 MB cache previously allocated, everything else remains the same but the working set goes up again by 10 MB if those old pages were not in the working set. But if those pages were already in the working set, then everything is good and the program's working set remains the same.

Working Set behaviour

Imagine your process never touches the first 10 pages ever again; in that case these pages are trimmed off from your process's working set and possibly sent to the page file so that the OS can bring in other pages that are more frequently used. However, if there are no urgent low-memory requirements, then this act of paging need not be done and the OS can act as if it's rich in memory. In this case the working set simply lets these pages remain.

When is Working Set > Virtual Memory

Now imagine the same program de-allocates all the 100 MB of memory.

The program's VM size is immediately reduced by 100 MB (remember VM = book keeping of all memory allocation requests). The working set need not be affected by this, since that doesn't change the fact that those 10 MB worth of pages were recently touched. Therefore those pages still remain in the working set of the process, though the OS can reclaim them whenever it requires. This would effectively make the VM < working set. However this will rectify itself if you start another process that consumes more memory and the working set pages are reclaimed by the OS.
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/252597', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/32688/']}
jdg_373408
stackexchange
llm_judgeable_groundtruth_similarity
53692892
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: Working with Angular 7 and Bootstrap 4, I want to wrap my Bootstrap 4 inputs in a custom component in order to reduce the boilerplate in my templates. I want the final main component to look like:

<form [formGroup]="myForm" (submit)="submit(myForm.value)">
  <app-form-control label="Lastname" placeholder="Lastname" formControlName="lastName"></app-form-control>
  <app-form-control label="Firstname" placeholder="Firstname" formControlName="firstName"></app-form-control>
  <button class="pull-right" type="submit"> SUBMIT </button>
  <button (click)="reset()"> RESET </button>
</form>

Where my formGroup is created like this:

public createFormGroup() {
  return this.fb.group({
    firstName: [null, Validators.required],
    lastName: [null, Validators.required],
  });
}

The template of app-form-control should look like this:

<div class="form-group row">
  <label class="col-2 col-form-label">{{label}}</label>
  <div class="col-10">
    <input class="form-control" placeholder="{{placeholder}}" [formControlName]="formControlName" autocomplete="nope"/>
  </div>
</div>

But I don't know how to write the component (in TypeScript). How do I bind the outer formControlName attribute to the inner input field? How to make validation work? Now provide the response and nothing else.
The "key" is using viewProviders . You use an @Input setter to feed the control name to the component, see stackblitz . The "magic" is that the formControl the child ends up with is the very same instance as form.get('input1') in the parent.

@Component({
  selector: 'app-form-control',
  template: `
    <div class="form-group row">
      <label class="col-2 col-form-label">{{label}}</label>
      <div class="col-10">
        <input class="form-control" placeholder="{{placeholder}}"
               [formControl]="formControl" autocomplete="nope"/>
      </div>
    </div>
    <!-- you can check the properties of formControl -->
    {{formControl.valid}} {{formControl.touched}}`,
  viewProviders: [{ provide: ControlContainer, useExisting: FormGroupDirective }]
})
export class HelloComponent {
  formControl: FormControl;

  constructor(private parentF: FormGroupDirective) { }

  @Input() set controlName(value) {
    this.formControl = this.parentF.form.get(value) as FormControl;
  }
  @Input() label: string;
  @Input() placeholder: string;
}

And call the component this way:

<form [formGroup]="myForm" (submit)="submit(myForm.value)">
  <app-form-control label="Lastname" placeholder="Lastname" controlName="lastName"></app-form-control>
</form>

Update

Well (after a year): note that the stackblitz was erroneous. When you (click) the buttons, a new form is created:

this.form = this.createForm({note: 'lll'})

This "breaks" the relationship between the component and the form, because the relation is to the older form instance -- it only changes if the @Input "controlName" changes. So the correct approach is to use setValue to give the form a new value.
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/53692892', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/420593/']}
jdg_373409
stackexchange
llm_judgeable_groundtruth_similarity
52931
Below is a question asked on the forum mathoverflow.net. Provide a good and informational response to it like a helpful human would. Question: I have a Banach space geometry question (a curiosity-driven spin-off from a research topic). Given a point $x$ on the unit sphere of a Banach space and a vector $y\ne 0$, there is a multiple $t_0y$ of $y$ for which $\|t_0y-x\|$ is minimized (this will be unique if the norm is strictly convex). My question is this: For which Banach spaces $X$ is it guaranteed that $\|t_0y\|\le \|x\|$? My "Euclidean intuition" suggested that this should be the case for all Banach spaces, but a little experimentation showed that this is not the case. You quickly see this is really a question about two dimensions. In fact it seems to fail for every $\ell^p$, $p\ne 2$ (see the attached figure in $p=1.2$). Could it be true that this property characterizes Hilbert space? (I looked at the obvious sources: ( MO 11192 and papers mentioned in there and didn't find anything of the sort). Now provide the response and nothing else.
The answer is no in dimension 2 and yes in dimension 3 and higher. The property that the nearest-point projection onto a line does not increase the norm is equivalent to the symmetry of the orthogonality relation defined as follows: $x$ is orthogonal to $y$ iff $\|x+ty\|\ge\|x\|$ for all $t\in\mathbb R$. It is well-known that symmetry of this orthogonality relation in dimension $\ge 3$ implies that the norm is Euclidean, see e.g. Thompson's "Minkowski geometry", Theorem 3.4.10. This is not the case in dimension 2. There are many counter-examples (I believe they are called Radon planes). Basically you only need to ensure that every unit vector and its unit orthogonal one span a constant parallelogram area; this is easy to satisfy and is equivalent to the symmetry of orthogonality. For a simple explicit example (although non-smooth), consider a norm on the plane whose unit ball is a regular hexagon.
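For a concrete picture of how the property fails, here is a small numerical check (my own illustration, not from the cited book): in the $\ell^{1.2}$ plane, the nearest point to $x$ on the line spanned by $y$ can have a strictly larger norm than $x$ itself, which is exactly the phenomenon in the question's figure. The specific $x$ and $y$ below are chosen so that the first-order optimality condition holds exactly at $t_0 = 1$.

```python
def lp_norm(v, p):
    return (abs(v[0]) ** p + abs(v[1]) ** p) ** (1.0 / p)

def nearest_t(x, y, p, lo=-5.0, hi=5.0, iters=200):
    # min over t of ||t*y - x||_p is a convex 1-D problem,
    # so a ternary search converges to the global minimizer
    f = lambda t: lp_norm((t * y[0] - x[0], t * y[1] - x[1]), p)
    for _ in range(iters):
        m1, m2 = lo + (hi - lo) / 3, hi - (hi - lo) / 3
        if f(m1) < f(m2):
            hi = m2
        else:
            lo = m1
    return (lo + hi) / 2

p = 1.2
x = (1 + 0.5 ** 6, 0.0)  # chosen so that t0 = 1 exactly
y = (1.0, 0.5)

t0 = nearest_t(x, y, p)
proj = (t0 * y[0], t0 * y[1])
print(round(t0, 6))      # ~1.0
print(lp_norm(proj, p))  # ~1.351: the nearest point on the line...
print(lp_norm(x, p))     # ~1.016: ...is LONGER than x itself
```

So $\|t_0 y\| > \|x\|$, confirming the projection can increase the norm once $p \ne 2$.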
{}
{'log_upvote_score': 5, 'links': ['https://mathoverflow.net/questions/52931', 'https://mathoverflow.net', 'https://mathoverflow.net/users/11054/']}
jdg_373410
stackexchange
llm_judgeable_groundtruth_similarity
2630236
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I'm getting a strange error: "'int' object has no attribute 'startswith'". I haven't used the word "startswith" in my Python program. Does anyone know how to fix this -- or what it refers to? Now provide the response and nothing else.
Something in your program is trying to call the startswith method of an object, probably because it expects it to be a string. You'll have to pay attention to the traceback to see what it is being called on, and why that is an integer instead of a string. Did you pass along an integer where a string was expected?
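A minimal reproduction makes both the error and the usual fix obvious:

```python
value = 42  # an int where some code expected a string

try:
    value.startswith("4")
except AttributeError as e:
    print(e)  # 'int' object has no attribute 'startswith'

# Convert (or pass) a string instead:
print(str(value).startswith("4"))  # True
```

In your traceback, the last frame before the error points at the call site; inspect the variable used there and trace back where it was assigned an int.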
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/2630236', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/246365/']}
jdg_373411
stackexchange
llm_judgeable_groundtruth_similarity
13140545
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: Let's say you want do something along the following lines: SELECT CASE WHEN (SELECT COUNT(id) FROM table WHERE column2 = 4) > 0 THEN 1 ELSE 0 END Basically just return 1 when there's one or more rows in the table, 0 otherwise. There has to be a grammatically correct way to do this. What might it be? Thanks! Now provide the response and nothing else.
Question: return 1 when there's one or more rows in the table, 0 otherwise : In this case, there is no need for COUNT . Instead, use EXISTS , which rather than counting all records will return as soon as any is found, which performs much better: SELECT CASE WHEN EXISTS (SELECT 1 FROM table WHERE column2 = 4) THEN 1 ELSE 0 END
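The EXISTS form is easy to try with an in-memory SQLite database (the table is named t here because "table" is a reserved word in most engines; the schema is just an illustration):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE t (id INTEGER, column2 INTEGER)")

def any_rows():
    # EXISTS can stop at the first matching row,
    # whereas COUNT(...) > 0 forces a count of all of them
    return con.execute(
        "SELECT CASE WHEN EXISTS (SELECT 1 FROM t WHERE column2 = 4) "
        "THEN 1 ELSE 0 END"
    ).fetchone()[0]

print(any_rows())  # 0 -- empty table
con.execute("INSERT INTO t VALUES (1, 4)")
print(any_rows())  # 1 -- at least one matching row
```

The same CASE WHEN EXISTS expression works unchanged on most SQL engines.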
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/13140545', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/279112/']}
jdg_373412
stackexchange
llm_judgeable_groundtruth_similarity
60916271
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I have a working solution to build a TypeScript Node app when deploying to docker via the Dockerfile.template file:

# Thanks: https://github.com/balenalabs/multicontainer-getting-started
FROM balenalib/%%BALENA_MACHINE_NAME%%-node

# Defines our working directory in container
WORKDIR /usr/src/app

# Install packages
RUN install_packages build-essential libraspberrypi-bin

# Copies the package.json first for better cache on later pushes
COPY package.json package.json

# Install npm dependencies on the balena.io build server,
# making sure to clean up the artifacts it creates in order to reduce the image size.
# NOTE: I removed --production option because I need to run tsc after COPY..., so larger image size remains for now.
RUN JOBS=MAX npm i

# This will copy all files in our root to the working directory in the container
COPY . ./

# Build from TypeScript
# TODO: This feels messy. How may I run `npm i --production` followed by `tsc` before copying files in Docker?
# Best answer so far for the next line: https://stackoverflow.com/questions/51083134/how-to-compile-typescript-in-dockerfile
RUN ./node_modules/typescript/bin/tsc -p ./tsconfig.json

# server.js will run when container starts up on the device
# in package.json: "serve": "node dist/server.js"
CMD ["npm", "run", "serve"]

I'd prefer to run tsc before copying files over to the app directory. Also, running npm i --production so that I don't have to have dev dependencies in the app. This doesn't seem to work however after an npm i -g typescript (before COPY), without a command to be found. What's the best way to proceed?
Update With Solution With the chosen solution , here's an updated script that works for the next person who searches for this: #################################### First Stage: Compile TypeScript ##################################### Thanks: https://stackoverflow.com/questions/60916271/how-do-i-run-typescript-tsc-before-copy-in-dockerfile-template/60917273#60917273FROM balenalib/%%BALENA_MACHINE_NAME%%-node AS build# Install needed packages to build raspicam Node dependencies.RUN install_packages build-essential libraspberrypi-binWORKDIR /usr/src/app# Install the Javascript dependencies, including all devDependencies.COPY package.json .RUN npm i# Copy the rest of the application in and build it.COPY . ./# RUN TypeScript buildRUN ./node_modules/typescript/bin/tsc -p ./tsconfig.json# Clean up node_modules to not include dev dependencies.RUN rm -rf ./node_modulesRUN JOBS=MAX npm i --production################################### Second Stage: Prepare Dist App ###################################FROM balenalib/%%BALENA_MACHINE_NAME%%-node# Defines our working directory in containerWORKDIR /usr/src/app# Install packagesRUN install_packages build-essential libraspberrypi-bin# This will copy all files in our root to the working directory in the containerCOPY --from=build /usr/src/app/dist distCOPY package.json package.json# server.js will run when container starts up on the deviceCMD ["npm", "run", "serve"] Now provide the response and nothing else.
You can use a multi-stage build for this. The first stage includes all of the development dependencies, including tsc ; the second stage only includes the files that are needed to run the built application. (I'm not familiar with the specific build environment you're using so this will be in terms of the standard node image.) # First stage: compile things.FROM node:12 AS buildWORKDIR /usr/src/app# (Install OS dependencies; include -dev packages if needed.)# Install the Javascript dependencies, including all devDependencies.COPY package.json .RUN npm install# Copy the rest of the application in and build it.COPY . .# RUN npm buildRUN npx tsc -p ./tsconfig.json# Now /usr/src/app/dist has the built files.# Second stage: run things.FROM node:12WORKDIR /usr/src/app# (Install OS dependencies; just libraries.)# Install the Javascript dependencies, only runtime libraries.COPY package.json .RUN npm install --production# Copy the dist tree from the first stage.COPY --from=build /usr/src/app/dist dist# Run the built application when the container starts.EXPOSE 3000CMD ["npm", "run", "serve"]
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/60916271', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1279000/']}
jdg_373413
stackexchange
llm_judgeable_groundtruth_similarity
53503270
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I know in advance that, when writing a program in C or C++, even if I declare a function as "inline" the compiler is free to ignore this and decide not to expand it at each (or any) call. Is the opposite true as well? That is, can a compiler automatically inline a very short function that wasn't defined as inline if the compiler believes doing so will lead to a performance gain? Two other subquestions: is this behaviour defined somewhere in the ANSI standards? Is C different from C++ in this regard, or do they behave the same? Now provide the response and nothing else.
inline is non-binding with regards to whether or not a function will be inlined by the compiler. This was originally what it was intended to do. But since then, it's been realized that whether or not a function is worth inlining depends as much on the call site as the function itself and is best left to the compiler to decide. From https://en.cppreference.com/w/cpp/language/inline : Since this meaning of the keyword inline is non-binding, compilers are free to use inline substitution for any function that's not marked inline , and are free to generate function calls to any function marked inline. Those optimization choices do not change the rules regarding multiple definitions and shared statics listed above. Edit : Since you asked for C as well, from https://en.cppreference.com/w/c/language/inline : The intent of the inline specifier is to serve as a hint for the compiler to perform optimizations, such as function inlining, which require the definition of a function to be visible at the call site. The compilers can (and usually do) ignore presence or absence of the inline specifier for the purpose of optimization .
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/53503270', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/21258/']}
jdg_373414
stackexchange
llm_judgeable_groundtruth_similarity
2708178
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: Is it possible to use Python's doctest concept for classes, not just functions? If so, where shall I put the doctests - at the class' docstring, or at the constructor's docstring? To clarify, I'm looking for something like: class Test: """ >>> a=Test(5) >>> a.multiply_by_2() 10 """ def __init__(self, number): self._number=number def multiply_by_2(self): return self._number*2 Thanks in advance, Adam Now provide the response and nothing else.
You're missing the code to actually run the doctests at the bottom of the file: class Test: <snip>if __name__ == "__main__": import doctest doctest.testmod() As for where to put the tests: If it's testing the class as a whole, I'd put them in the class' docstring. If it's testing the constructor, I'd put them in the constructor's docstring. If it's testing a method (as it seems to be in this case), I'd actually put it them in that method's docstring.
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/2708178', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/51197/']}
jdg_373415
stackexchange
llm_judgeable_groundtruth_similarity
13288691
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I have been experimenting with using Spring 3.1's bean definition profiles and nested beans. I had hoped that I could define different beans depending on the active profile. Consider the following heavily over simplified example such that my Spring context contains something like <bean id="say" class="test.Say" p:hello-ref="hello"/><beans profile="prod"> <bean id="hello" class="test.Hello" p:subject="Production!"/></beans><beans profile="dev"> <bean id="hello" class="test.Hello" p:subject="Development!"/></beans> I get the following error: Exception in thread "main" org.springframework.beans.factory.BeanCreationException: Error creating bean with name 'say' defined in class path resource [applicationContext.xml]: Cannot resolve reference to bean 'hello' while setting bean property 'hello'; nested exception is org.springframework.beans.factory.NoSuchBeanDefinitionException: No bean named 'hello' is defined at org.springframework.beans.factory.support.BeanDefinitionValueResolver.resolveReference(BeanDefinitionValueResolver.java:328) at org.springframework.beans.factory.support.BeanDefinitionValueResolver.resolveValueIfNecessary(BeanDefinitionValueResolver.java:106) at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.applyPropertyValues(AbstractAutowireCapableBeanFactory.java:1360) aJava Result: 1 I was expecting that the hello bean would be defined according to the active Maven profile (in my case prod or dev ). I'm starting to think that the Spring active profiles ( spring.profiles.active ) may be completely unrelated to Maven profiles. Could somebody please explain where I am going wrong? (Is this even possible using profiles?). Now provide the response and nothing else.
I was expecting that the hello bean would be defined according to the active Maven profile (in my case prod or dev). I'm starting to think that the Spring active profiles (spring.profiles.active) may be completely unrelated to Maven profiles. That's true, they are unrelated. Here is how you can fix it: Make sure that the web.xml that you have in src/main/webapp/WEB-INF/ folder has the following context setting: <context-param> <param-name>spring.profiles.active</param-name> <param-value>${profileName}</param-value></context-param> And then make sure that the maven-war-plugin has filtering turned on for the web.xml : <plugin> <artifactId>maven-war-plugin</artifactId> <version>2.3</version> <configuration> <filteringDeploymentDescriptors>true</filteringDeploymentDescriptors> </configuration></plugin> And then lastly in your profiles: <profiles> <profile> <id>dev</id> <properties> <profileName>dev</profileName> </properties> </profile> <profile> <id>prod</id> <properties> <profileName>prod</profileName> </properties> </profile></profiles> You could also add a default value in the normal properties section: <properties> <project.build.sourceEncoding>UTF-8</project.build.sourceEncoding> <profileName>dev</profileName></properties> So if you run without the -P option the dev spring profile will be used. When running mvn package the web.xml will have the correct value for spring.profiles.active .
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/13288691', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/292219/']}
jdg_373416
stackexchange
llm_judgeable_groundtruth_similarity
7811985
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: In my Ruby on Rails application I am using the Facebox plugin for an Ajax pop up window. I have 2 pages called add_retail_stores/new.html.erb and add_retail_stores/new.js . The new.js page inherits all elements from the new.html.erb page so it looks exactly alike. I have a Google map script on the HTML page that works as it should. But the new.js page that pops up on my different page called add_store_prices.html.erb page( <%= link_to add_retail_store_path, :remote => true %> ) I get the error: Warning: A call to document.write() from an asynchronously-loaded external script was ignored. Source File: http://localhost:3000/add_store_prices Line: 0 I believe because it's trying to go through 2 functions/scripts. The first one for the Facebox and then the Google script. Anyone know how to handle this error? EDIT: I believe the Facebox plugin is using document.write but I am not sure where, perhaps in one of these 2 lines on my page? new.js: $.facebox('<%= escape_javascript(render :template => 'business_retail_stores/new.html') %>')$('#facebox form').data('remote','true'); Now provide the response and nothing else.
Don't use document.write. The script is being loaded asynchronously, which means it's detached from the document parsing state. There is quite literally NO WAY for the JS engine to know WHERE the document.write should be executed in the page. The external script could load instantaneously and the document.write executes where the <script src="..."> tag is, or it could hit a net.burp and load an hour later, which means the document.write gets tagged at the end of the page. It's quite literally a race condition, so JS engines will ignore document.writes from scripts loaded asynchronously. Convert the document.write to use regular DOM operations, guarded by a document.onload type handler.
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/7811985', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/604023/']}
jdg_373417
stackexchange
llm_judgeable_groundtruth_similarity
725590
Below is a question asked on the forum unix.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: I’m asking because string comparisons are slow, but indexing is fast, and a lot of scripts I write are in bash, which to my knowledge performs a full string lookup for every executable call. All those ls ’s and grep ’s would be a little bit faster without performing a string lookup on each step. Of course, this now delves into compiler optimization. Anyways, is there a way to directly invoke a program in Linux using only its inode number (assuming you only had to look it up once for all invocations)? Now provide the response and nothing else.
The short answer is no. The longer answer is that linux user API doesn't support accessing files by any method using the inode number. The only access to the inode number is typically through the stat() system call which exposes the inode number, which can be useful for identifying if two filenames are the same file, but is not used for anything else. Accessing a file by inode would be a security violation, as it would bypass permissions on the directories that contain the file linked to the inode. The closest you can get to this would be accessing a file by open file handle. But you can't run a program from that either, and this would still require opening the file by a path. (As noted in comments, this functionality was added to linux for security reasons along with the rest of the *at system calls, but is not portable.) There's also numerous ways of using the inode number to find the file (basically, crawl the filesystem and use stat) and then run it normally, but this is the opposite of what you want, as it is enormously more expensive than just accessing the file by pathname and doesn't remove that cost either. Having said that, worrying about this type of optimization is probably moot, as Linux has already optimized the internal inode lookup a great deal. Also, traditionally, shells hash the path location of executables so they don't have to hunt for them from all directories in $PATH every time.
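The one legitimate use of inode numbers mentioned above, identifying whether two paths are the same file, can be sketched with Python's stdlib (assumes a POSIX filesystem that supports hard links, which is the normal Linux case):

```python
import os
import tempfile

# Create a file and a hard link to it: two names, one inode.
tmp = tempfile.mkdtemp()
original = os.path.join(tmp, "a.txt")
link = os.path.join(tmp, "b.txt")

with open(original, "w") as f:
    f.write("hello")
os.link(original, link)

# os.stat() exposes st_ino -- but there is no stdlib call that *opens*
# a file by that number, matching the answer's point.
ino_a = os.stat(original).st_ino
ino_b = os.stat(link).st_ino
print(ino_a == ino_b)  # True: both names resolve to the same inode
```

`os.path.samefile()` wraps exactly this comparison (device number plus inode number), which is the idiomatic way to ask the question.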
{}
{'log_upvote_score': 6, 'links': ['https://unix.stackexchange.com/questions/725590', 'https://unix.stackexchange.com', 'https://unix.stackexchange.com/users/441393/']}
jdg_373418
stackexchange
llm_judgeable_groundtruth_similarity
4325263
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: During the development of a Java webservice client I ran into a problem. Authentication for the webservice is using a client certificate, a username and a password. The client certificate I received from the company behind the webservice is in .cer format. When I inspect the file using a text editor, it has the following contents: -----BEGIN CERTIFICATE-----[Some base64 encoded data]-----END CERTIFICATE----- I can import this file as a certificate in Internet Explorer (without having to enter a password!) and use it to authenticate with the webservice. I was able to import this certificate into a keystore by first stripping the first and last line, converting to unix newlines and running a base64-decode. The resulting file can be imported into a keystore (using the keytool command). When I list the entries in the keystore, this entry is of the type trustedCertEntry . Because of this entry type (?) I cannot use this certificate to authenticate with the webservice. I'm beginning to think that the provided certificate is a public certificate which is being used for authentication... A workaround I have found is to import the certificate in IE and export it as a .pfx file. This file can be loaded as a keystore and can be used to authenticate with the webservice. However I cannot expect my clients to perform these steps every time they receive a new certificate. So I would like to load the .cer file directly into Java. Any thoughts? Additional info: the company behind the webservice told me that the certificate should be requested (using IE & the website) from the PC and user that would import the certificate later. Now provide the response and nothing else.
If you want to authenticate you need the private key - there is no other option. A certificate is a public key with extra properties (like company name, country,...) that is signed by some Certificate authority that guarantees that the attached properties are true. .CER files are certificates and don't have the private key. The private key is provided with a .PFX keystore file normally. If you really authenticate is because you already had imported the private key. You normally can import .CER certificates without any problems with keytool -importcert -file certificate.cer -keystore keystore.jks -alias "Alias"
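The manual decode the asker performed (strip the BEGIN/END armour lines, join, base64-decode) can be done in a few stdlib lines; this sketch uses a fake payload, not a real certificate:

```python
import base64

# Hypothetical PEM-style block; the body decodes to b"demo payload",
# not real DER certificate bytes.
pem = """-----BEGIN CERTIFICATE-----
ZGVtbyBwYXlsb2Fk
-----END CERTIFICATE-----"""

def pem_body(pem_text):
    # Drop the dashed armour lines, join the base64 lines, decode.
    lines = [l.strip() for l in pem_text.strip().splitlines()]
    b64 = "".join(l for l in lines if not l.startswith("-----"))
    return base64.b64decode(b64)

print(pem_body(pem))  # b'demo payload'
```

For a real .cer file the decoded bytes are the DER-encoded certificate, which is what `keytool -importcert` parses; as the answer says, nowhere in that structure is a private key.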
{}
{'log_upvote_score': 10, 'links': ['https://Stackoverflow.com/questions/4325263', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/526579/']}
jdg_373419
stackexchange
llm_judgeable_groundtruth_similarity
3912298
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: When preparing a library (let's call it libfoo), I find myself presented with the following dilemma: do I write it as a C++ library with a C wrapper: namespace Foo { class Bar { ... };}/* Separate C header. #ifdef __cplusplus omitted for brevity. */extern "C" { typedef void *FooBar; FooBar* foo_bar_new() { return new Foo::Bar; } void foo_bar_delete(FooBar *bar) { delete bar; }} Or is it better to write it as a C library with a C++ wrapper: /* foo/bar.h. Again, #ifdef __cplusplus stuff omitted. */typedef struct { /* ... */} FooBar;void foo_bar_init(FooBar *self) { /* ... */ }void foo_bar_deinit(FooBar *self) { /* ... */ }/* foo/bar.hpp */namespace Foo { class Bar { /* ... */ FooBar self; } Bar::Bar() { foo_bar_init(&self); } Bar::~Bar() { foo_bar_deinit(&self); }} Which do you prefer, and why? I favour the latter because it means I don't have to worry about my C functions accidentally having exceptions bubble up, plus I prefer C as a language as I feel that it's a smaller semantic minefield. What do other people think? EDIT: So many good answers. Thanks all. It's a shame that I can only accept one. Now provide the response and nothing else.
Small points: When you write a C library it is useful anywhere - in C, in C++ (with a wrapper) and in many other languages like Python or Java using bindings, and most importantly it requires only the C runtime. When you write a C++ wrapper you also need to write a C wrapper, but it is not as simple as you think, for example: c_api.h: extern "C" { typedef void *Foo; Foo create_foo();} c_api.cpp: void *create_foo() { return new foo::Foo();} What is wrong? It may throw! And the program will crash, as C does not have stack-unwinding semantics. So you need something like: void *create_foo() { try { return new foo::Foo(); } catch(...) { return 0; }} And this for every C++ API function. So I think that writing a C library and providing a separate C++ wrapper is the better solution. Also it would not require linking with the C++ runtime library.
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/3912298', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/429232/']}
jdg_373420
stackexchange
llm_judgeable_groundtruth_similarity
44877379
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I need to use a service layer in the AutoMapper profile class in ASP.NET Core but when I inject service in the constructor it does not work. For example: public class UserProfile : Profile{ private readonly IUserManager _userManager; public UserProfile(IUserManager userManager) { _userManager = userManager; CreateMap<User, UserViewModel>() .ForMember(dest => dest.FullName, opt => opt.MapFrom(src => $"{src.FirstName} {src.LastName}")); }} And in Startup Class: public class Startup{ public IConfigurationRoot Configuration { set; get; } public Startup(IHostingEnvironment env) { //some code } public void ConfigureServices(IServiceCollection services) { services.AddSingleton<IActionContextAccessor, ActionContextAccessor>(); services.AddMvc(); services.AddScoped<IUsersPhotoService, UsersPhotoService>(); services.AddAutoMapper(typeof(UserProfile)); }} How do to do it? Now provide the response and nothing else.
To solve your problem you just need to wire IUserManager in DI, and make sure UserProfile dependency is resolved. public void ConfigureServices(IServiceCollection services){ // ... services.AddSingleton<IUserManager, UserManager>(); services.AddSingleton(provider => new MapperConfiguration(cfg => { cfg.AddProfile(new UserProfile(provider.GetService<IUserManager>())); }).CreateMapper());} And having that said, I would probably try to keep single responsibility per class, and not have any services injected into mapping profiles. You can populate your objects just before the mapping instead. This way it might be easier to unit test as well.
{}
{'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/44877379', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/3152927/']}
jdg_373421
stackexchange
llm_judgeable_groundtruth_similarity
842609
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: Why does C# not allow const and static on the same line? In Java, you must declare a field as 'static' and 'final' to act as a constant. Why does C# not let you declare const's as final? I make the further distinction that in Java, every interface is public and abstract, whether this is explicitly declared or not. Aren't const's effectively static in nature? WHy does C# balk at this? Now provide the response and nothing else.
const and static really do mean different things, different storage mechanism, different initialisation. static is read/write, therefore must have memory allocated for storage and must be initialised at runtime. A static can be initialised with a literal value or an expression. In contrast, a const is immutable and must be initialised with a compile time constant (typically a literal value, or an expression that can be fully evaluated at compile time). The value is known at compile time so it can be embedded directly in the generated code, therefore requires no storage to be allocated at runtime.
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/842609', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/101095/']}
jdg_373422
stackexchange
llm_judgeable_groundtruth_similarity
125027
Below is a question asked on the forum mathematica.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: I have some data that is heavy tailed (density of geocoded entities), and I want to use the GeoHistogram function to plot it, but with a log scale. Data can be found here . When plotted normally, the graph looks like this: geoState = WolframAlpha["state of New York", "Result"];Show[ GeoGraphics[{GeoStyling["OutlineMap"],GeoStyling[Lighter[Orange, 0.5]], Opacity[0.05],EdgeForm[{Thin,Gray}],Polygon[geoState]}, GeoScaleBar -> "Miles"], GeoHistogram[geoData,600 sq mi, GeoBackground -> None, GeoRangePadding -> 150 Quantity[1, "Kilometers"], ColorFunction -> "TemperatureMap", PlotLegends -> Automatic]] I would like a Log10 scale for the bin height ( hspec ). The function description for GeoHistogram says that the format GeoHistogram[*data, bspec, hspec]* should work. I can't seem to find the proper format for hspec to get log-scaled bin heights. I have tried "Log", , {"Log", 20} , and Log10[#]& with no joy. The documentation lists the syntax as GeoHistogram[*locs*,*bspec*,*hspec*] with the following options for *hspec* : "Count", "probability", "Intensity", "PDF", and fh , but does not specify the form of fh . Taking an example from Histogram and defining fh[bins_,counts_]:=Log10[counts_] does not work either. Unlike the Histogram function, the GeoHistogram hspec documentation does not list the form of {"Log",hspec} or option of "LogCounts". Now provide the response and nothing else.
This is a bit of a hack but you may be able to get what you want down this path. Instead of taking the Log10 of the data I'll try to change the color scale via the ColorFunction The base for the color function Clear[cf]cf[x_, min_, max_] := (Log10[x] - Log10[min])/( Log10[max] - Log10[min]); I use the function with the ColorFunction option min = 1;max = 2000;steps = Round[10^# & /@ Subdivide[Log10[min], Log10[2 max], 10]];obj = GeoHistogram[geoData, Quantity[600, ("Miles")^2], GeoBackground -> None, GeoRangePadding -> 150 Quantity[1, "Kilometers"],ColorFunctionScaling -> False,ColorFunction -> Function[{z}, ColorData["TemperatureMap"][cf[z, min, max]]], PlotLegends -> BarLegend[{ Function[{z}, ColorData["TemperatureMap"][cf[z, min, max]]], {min, max}}, steps]] You will want to play with the min and max values and possibly hand edit your steps for the scale.
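The `cf` helper is plain arithmetic, so the same log-scaling formula can be checked outside Mathematica; here it is as a Python sketch (min/max values are illustrative):

```python
import math

def cf(x, lo, hi):
    # The answer's color-scaling function translated literally:
    # map x on a log10 scale to [0, 1] between lo and hi.
    return (math.log10(x) - math.log10(lo)) / (math.log10(hi) - math.log10(lo))

print(cf(1, 1, 2000))               # 0.0 (minimum maps to the bottom)
print(cf(2000, 1, 2000))            # 1.0 (maximum maps to the top)
print(round(cf(100, 1, 10000), 2))  # 0.5 (halfway in log space)
```

Because the mapping is applied inside `ColorFunction` rather than to the data, the bin heights stay untouched while the colors spread heavy-tailed counts evenly, which is the point of the hack.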
{}
{'log_upvote_score': 4, 'links': ['https://mathematica.stackexchange.com/questions/125027', 'https://mathematica.stackexchange.com', 'https://mathematica.stackexchange.com/users/7753/']}
jdg_373423
stackexchange
llm_judgeable_groundtruth_similarity
27920201
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I have scheduled a Bash script to run on the 1st of the month but I need to create 2 variables in it with the 1st and last date of the previous month, whatever those may be. Is it possible to do this using just Bash? Now provide the response and nothing else.
Unlike some answers, this will work for the 31st and any other day of the month. I use it to output unix timestamps but the output format is easily adjusted. first=$(date --date="$(date +'%Y-%m-01') - 1 month" +%s)last=$(date --date="$(date +'%Y-%m-01') - 1 second" +%s) Example (today's date is Feb 14, 2019): echo $first $last 1546300800 1548979199 To output in other formats, change final +%s to a different format such as +%Y-%m-%d or omit for default format in your locale. In case you need, you can also back up an arbitrary number of months like this: # variable must be >= 1 monthsago=23 date --date="$(date +'%Y-%m-01') - ${monthsago} month" date --date="$(date +'%Y-%m-01') - $(( ${monthsago} - 1 )) month - 1 second" Example output (today's date is Feb 15, 2019): Wed Mar 1 00:00:00 UTC 2017 Fri Mar 31 23:59:59 UTC 2017
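The same "anchor on the 1st of the current month, then step back" trick works in any language with date arithmetic; a Python stdlib sketch of the equivalent logic:

```python
from datetime import date, timedelta

def previous_month_bounds(today):
    # Anchoring on day 1 first makes this safe on the 31st too,
    # just like the shell version above.
    first_of_this_month = today.replace(day=1)
    last_of_prev = first_of_this_month - timedelta(days=1)
    first_of_prev = last_of_prev.replace(day=1)
    return first_of_prev, last_of_prev

first, last = previous_month_bounds(date(2019, 2, 14))
print(first, last)  # 2019-01-01 2019-01-31

# Year boundaries are handled for free: asking in January yields December.
print(previous_month_bounds(date(2024, 1, 31)))
```

Stepping back N months works the same way: replace day with 1, subtract one day, repeat.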
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/27920201', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/4428391/']}
jdg_373424
stackexchange
llm_judgeable_groundtruth_similarity
19668
Below is a question asked on the forum cs.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: I'm a little confused about the difference between the memory access and the write-back stages in a RISC pipeline . We learned in class the following stage assignments: arithmetic & logic: IF, OF, EX, WB load: IF, OF, EX, MA, WB store: IF, OF, EX, MA branch: IF, OF, EX IF=Instruction Fetch, OF=Operand Fetch, EX=Execution, MA=Memory Access, WB=Write-Back Let's say we have the following code now: I1: LD R1, 0(R2) ; load R1 from address 0 + R2 I2: ADD R1, R1, #1 ; R1 = R1 + 1 I3: ST 0(R2), R1 ; Store R1 at address 0 + R2 According to what I've learned I1 will pass all five stages, I2 won't have to access the memory, and I3 won't have a write-back. But then I wonder, how and where does I3 store the value then? Just in the memory? And I2 fetches the value from memory, but needs to write-back to some place other than the memory? So does that mean that write-back is always to the HDD? I think I'm missing some core concepts here, as to where the operand is fetched from and where it gets stored to. Now provide the response and nothing else.
In a classic 5-stage RISC pipeline, WB writes a value into a register. If the instruction doesn't write a value into a register (e.g. store), then that stage isn't used. I1 stores a result into a register (namely R1), so it uses WB. I2 stores a result into a register (R1) so it uses WB. I3 does not store a result into a register, so it doesn't use WB. You basically got it with I3: the store instruction stores to memory, so the "write" is performed inside MA. Incidentally, on "real" RISC machines, MA tends to be more complex (and multi-cycle) because of multi-level caches and virtual memory. A store instruction may cause a page fault, so in a sense it has a "result" that isn't just the memory access.
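To make the "store has no WB" point concrete, the stage table from the question can be written as a tiny lookup; this is an illustrative Python sketch with hypothetical helper names, not part of any real simulator:

```python
# Stage assignments copied from the question's table.
STAGES = {
    "alu":    ["IF", "OF", "EX", "WB"],
    "load":   ["IF", "OF", "EX", "MA", "WB"],
    "store":  ["IF", "OF", "EX", "MA"],
    "branch": ["IF", "OF", "EX"],
}

def writes_register(kind):
    # An instruction uses WB exactly when it produces a register result.
    return "WB" in STAGES[kind]

print(writes_register("load"))   # True  (LD writes R1 in WB)
print(writes_register("store"))  # False (ST writes memory during MA instead)
```

So I1 and I2 finish in WB because they each deliver a value to R1, while I3's "write" already happened in MA.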
{}
{'log_upvote_score': 4, 'links': ['https://cs.stackexchange.com/questions/19668', 'https://cs.stackexchange.com', 'https://cs.stackexchange.com/users/12774/']}
jdg_373425
stackexchange
llm_judgeable_groundtruth_similarity
361661
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I have a database table (named Topics) which includes these fields : topicId name parentId and by using them I wanna populate a TreeView in c#. How can I do that ? Thanks in advance... Now provide the response and nothing else.
It will probably be something like this. Give some more detail as to what exactly you want to do if you need more. // In Page load foreach (DataRow row in topics.Rows){ TreeNode node = new TreeNode(row["name"].ToString(), row["topicId"].ToString()); node.PopulateOnDemand = true; TreeView1.Nodes.Add(node); } /// protected void PopulateNode(Object sender, TreeNodeEventArgs e) { string topicId = e.Node.Value; //select from topic where parentId = topicId. foreach (DataRow row in topics.Rows) { TreeNode node = new TreeNode(row["name"].ToString(), row["topicId"].ToString()); node.PopulateOnDemand = true; e.Node.ChildNodes.Add(node); } }

{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/361661', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/44852/']}
jdg_373426
stackexchange
llm_judgeable_groundtruth_similarity
404440
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: Just like we have the equation $y=mx+b$ for $\mathbb{R}^{2}$, what would be a equation for $\mathbb{R}^{3}$? Thanks. Now provide the response and nothing else.
You can describe a line in space as the intersection of two planes. Thus, $$\{(x,y,z)\in{\mathbb R}^3: a_1x+b_1y+c_1z=d_1 \text{ and } a_2x+b_2y+c_2z=d_2\}.$$Alternatively, you can use vector notation to describe it as$$\vec{p}(t) = \vec{p}_0 + \vec{d}t.$$ I used this relationship to generate this picture: This is largely a topic that you will learn about in a third semester calculus course, at least in the states.
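Both descriptions can be checked against each other numerically; here is a Python sketch with illustrative plane coefficients (not from the answer) verifying that every point of the parametric form lies on both planes:

```python
def point_on_line(p0, d, t):
    # Vector form p(t) = p0 + d*t, written out component-wise.
    return tuple(p + dt * t for p, dt in zip(p0, d))

def on_both_planes(pt):
    # Two example planes whose intersection is a line:
    #   x + y + z = 3   and   x - y = 1
    x, y, z = pt
    return abs(x + y + z - 3) < 1e-9 and abs(x - y - 1) < 1e-9

# p0 = (1, 0, 2) satisfies both equations; d = (1, 1, -2) is the cross
# product of the two plane normals, so it points along the intersection.
p0, d = (1.0, 0.0, 2.0), (1.0, 1.0, -2.0)
for t in (-2.0, 0.0, 0.5, 3.0):
    assert on_both_planes(point_on_line(p0, d, t))
print(point_on_line(p0, d, 0.5))  # (1.5, 0.5, 1.0)
```

Taking the cross product of the two plane normals is the standard way to get a direction vector for the intersection line, since it is perpendicular to both normals.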
{}
{'log_upvote_score': 7, 'links': ['https://math.stackexchange.com/questions/404440', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/64460/']}
jdg_373427
stackexchange
llm_judgeable_groundtruth_similarity
4041266
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I have this regex which is supposed to remove sentence delimiters( . and ? ): sentence = sentence.replaceAll("\\.|\\?$",""); It works fine it converts "I am Java developer." to "I am Java developer" "Am I a Java developer?" to "Am I a Java developer" But after deployment we found that it also replaces any other dots in the sentence as "Hi.Am I a Java developer?" becomes "HiAm I a Java developer" Why is this happening? Now provide the response and nothing else.
The pipe ( | ) has the lowest precedence of all operators. So your regex: \\.|\\?$ is being treated as: (\\.)|(\\?$) which matches a . anywhere in the string and matches a ? at the end of the string. To fix this you need to group the . and ? together as: (?:\\.|\\?)$ You could also use: [.?]$ Within a character class . and ? are treated literally so you need not escape them.
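The precedence behaviour is the same in Python's `re` as in Java's regex for these constructs, so the bug and both fixes can be demonstrated directly (sketch using the question's own sample sentence):

```python
import re

sentence = "Hi.Am I a Java developer?"

# The buggy pattern: alternation binds loosely, so "\." matches anywhere
# in the string while only "\?" is anchored to the end.
buggy = re.sub(r"\.|\?$", "", sentence)
print(buggy)  # 'HiAm I a Java developer'

# Grouped fix: the $ anchor now applies to both alternatives.
fixed = re.sub(r"(?:\.|\?)$", "", sentence)
print(fixed)  # 'Hi.Am I a Java developer'

# Character-class form, equivalent and simpler:
print(re.sub(r"[.?]$", "", sentence))  # 'Hi.Am I a Java developer'
```

The character-class form is usually preferred: inside `[...]` the `.` and `?` are literals, so no escaping is needed and there is no precedence question at all.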
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/4041266', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/489849/']}
jdg_373428
stackexchange
llm_judgeable_groundtruth_similarity
21461134
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: IΓ‘m facing a litte problem here. I tried to save the actual position of an jFrame, and restore this position after creating this frame again. Heres the "save"-procedure: private void formWindowClosing(java.awt.event.WindowEvent evt) { Parent.Throttle_vorhanden = 0; root.Hauptfenster.LastPositionThrottlePlate=this.getLocation(); this.dispose();} And here I try to set the old location again: public Throttle(Hauptfenster parent, root.rs232.rs232Connection Conn, Integer throttlePosition) { super(); Parent = parent; conn = Conn; actualThrottlePosition = throttlePosition; initComponents(); jScrollBar2.setValue(root.Hauptfenster.MPR); this.setTitle("Throttle"); this.setVisible(true); if (root.Hauptfenster.LastPositionThrottlePlate!=null) { this.setLocation(root.Hauptfenster.LastPositionThrottlePlate); } For a very little bit a secound, I see the Frame on its old position, but then it "jumps" in the middle of the screen... Can anyone imagine why jFrame behavior like this?sorry for my english... Now provide the response and nothing else.
Put your db query in a function with callback, and make the callback fire the function again 10sec later: function mydbquery(callback) { db.query("SELECT * FROM queue WHERE cooldown > UNIX_TIMESTAMP(NOW()) AND simulated=0 ORDER BY cooldown DESC LIMIT 1", function(err, rows){ if(err != null){ die("Query error: " + err); } if(rows < 1){ die("No rows"); } //Set the vars from the query var name = rows[0]['name']; var ip = rows[0]['ip']; var iterations = rows[0]['runs']; var bin = "/home/hoar/sum/run" var args = ['arg='+name, 'arg2='+iterations, 'path=/var/www/upload/'+name+'.html', 'output=log.log']; var proc = spawn(bin, args); var time = "/.*/"; var pct = "/^\d/"; var name = rows[0]['name']; var ip = rows[0]['ip']; var iterations = rows[0]['runs']; proc.stdout.setEncoding('utf8'); proc.stdout.on('data', function(data) { var str = data.toString(); var s = str.split("|"); var p = s[0].split("/"); var t = (s[1] == null) ? "" : s[1]; if(p != null && s[0] != "@"){ //Needed to check for @ because the program prints this as first line, which is good then we can do the query further done only once. //Check the return numbers from simc to see how many sims it has done if(parseInt(p[0]) < parseInt(p[1])){ //Check if the 6th match is a number and the 7th only contains letters if(t != null){ var time = t.replace(/(\r\n|\n|\r)/gm,""); //Remove any line disturbers for db //Update the database with the amount of time left on the simulation db.query("UPDATE `queue` SET `status`=" + db.escape(time) + " WHERE (`name`=" + name + ")"); //console.log(p[0]+"/"+p[1] + " - " + t + " left"); } //console.log(p[0]+"/"+p[1] + " iterations done"); } }else{ //If the stdout is null run this query since we don't want to run this more than once. 
db.query("UPDATE `queue` SET `simulated`='2' WHERE (`name`=" + name + " AND simulated!='2')"); //console.log("Updated db to 2"); } }); proc.stderr.on('data', function (data) { var str = data.toString(); //If the program returns stderr we want to make sure it stops and we update the database to let the user know. if(str.indexOf("ERROR! Setup failure...")){ //Update the database with the amount of time left on the simulation db.query("UPDATE `queue` SET `simulated`='3' WHERE (`name`=" + name + ")"); //Kill the DB connection db.destroy(); die("There was an error: " + data); } }); proc.on('exit', function (code) { //Setup the ftp connection var ftp = new JSFtp({ host: "ftp", port: 21, user: "ftp", pass: "ftp" }); //Simulation ended with success update the database and kill. db.query("UPDATE `queue` SET `simulated`='1' WHERE (`name`=" + name + " AND simulated='2')"); ftp.put('/var/www/upload/'+rows[0]['name']+'.html', 'public_html/mysite/'+ip2long(rows[0]['ip'])+'/'+rows[0]['name']+'.html', function(hadError) { if (!hadError) console.log("FTP error"); ftp.raw.quit(); }); db.destroy(); //die("Simulation is done");//NEW CODE!!! callback();//END OF NEW CODE });});//end sql}//NEW CODE!!!function wait10sec(){ setTimeout(function(){ mydbquery(wait10sec); }, 10000);}mydbquery(wait10sec);//END OF NEW CODE So it will do your query, then wait 10sec before firing another.
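The core pattern here — run the job, and only schedule the next run from the completion callback, rather than on a fixed interval — is language-agnostic. Below is a minimal Python sketch of the same idea using the stdlib threading.Timer (the job is a stand-in for the real query, and the interval is shortened from 10s so the example finishes quickly):

```python
import threading

results = []
done = threading.Event()

def mydbquery(callback):
    # Stand-in for the real DB query + simulation work
    results.append("ran query")
    # Only once this run has finished do we schedule the next one
    callback()

def schedule_next(remaining, interval):
    if remaining == 0:
        done.set()
        return
    # Wait `interval` seconds, then run the job again (Timer ~ setTimeout)
    t = threading.Timer(
        interval,
        lambda: mydbquery(lambda: schedule_next(remaining - 1, interval)),
    )
    t.start()

# Initial run, then three timed repeats
mydbquery(lambda: schedule_next(remaining=3, interval=0.01))
done.wait(timeout=5)
```

Because each timer is armed only after the previous job completes, slow jobs can never overlap — which is the advantage over a naive setInterval-style loop.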
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/21461134', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/2320323/']}
jdg_373429
stackexchange
llm_judgeable_groundtruth_similarity
53048037
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I have the following simple short-circuit statement that should show either a component or nothing: {profileTypesLoading && <GeneralLoader />} If the statement is false, it renders a 0 instead of nothing. I have done a console.log(profileTypesLoading) just to see quickly what the status of the profileTypesLoading property is and it's either 1 or 0 as expected. 0 should be false... causing nothing to render. Right? Any idea why this would happen? Now provide the response and nothing else.
Since your condition is falsy, the && expression doesn't return the second argument ( <GeneralLoader /> ); it returns profileTypesLoading , which is a number. React will render that number, because React skips rendering only for values whose type is boolean or undefined, and will render anything that is a string or number. To make it safe, you can either use a ternary expression {condition ? <Component /> : null} or boolean cast your condition like {!!condition && <Component />}
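The underlying mechanism — && evaluates to its first falsy operand rather than to a boolean — is easy to demonstrate outside JavaScript. Python's `and` operator behaves the same way, so here is a small sketch for comparison:

```python
profileTypesLoading = 0  # falsy, like the 0 in the question

# `and` returns the first falsy operand itself, not False
result = profileTypesLoading and "<GeneralLoader />"

# Boolean-casting first (the !!condition trick) yields an actual bool,
# which React would skip rendering instead of printing a 0
safe = bool(profileTypesLoading) and "<GeneralLoader />"
```

So `result` is the number 0 (which React renders), while `safe` is the boolean False (which React ignores).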
{}
{'log_upvote_score': 9, 'links': ['https://Stackoverflow.com/questions/53048037', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1803539/']}
jdg_373430
stackexchange
llm_judgeable_groundtruth_similarity
142385
Below is a question asked on the forum security.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: A website "broke" after I changed my password to something like "NΓŒΓΏΓ–ΓΓ―Β£Γ˜Β₯ΓΌQ’¨¼Ü9¨ÝIÇÅbÍm". I was unable to log in, and customer service deleted my account and had me create a new one. Does this imply security flaws in the site's code? Should I worry about my credentials? Obviously I'm using a password manager, and the website only allows me to reserve seats in a cinema, so the credentials are not of much value. Now provide the response and nothing else.
Your description suggests that the site fails to properly validate its input. This (weakly) implies a deep flaw in their code. If your input had simply choked their routine that calls PBKDF2() , then your password hash might not have been reproducible; but I would expect a simple password reset should have been adequate to clear up that problem. Deleting your account might indicate that your account record was corrupt; however, deleting accounts might simply be their response to anyone who has a password problem due to unexpected user input. They might even be trying to actively thwart hackers with this response. Also, flawed doesn't necessarily mean their site is vulnerable. The defective code would need to be exploitable, and you didn't supply evidence of that. Such evidence might include erratic behavior or inexplicably changed values. If you decide to press further, perhaps testing individual password characters to isolate the glyph that caused their site to lock your account, know that they would be within their rights to consider those attempts to be a hacking attack. Seek the site owner's permission before experimenting. Note that if instead of using high-bit-set characters, you construct your password from 16 cryptographically random, high-bit-unset, standard, ordinary, printable ASCII alphanumeric characters, the practical difference to your password's security will be negligible.
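For that last point, a password built from cryptographically random printable ASCII alphanumerics is straightforward to generate; here is a minimal Python sketch using the stdlib secrets module (the 16-character length mirrors the suggestion above):

```python
import secrets
import string

# Plain printable ASCII letters and digits — no high-bit-set glyphs
ALPHABET = string.ascii_letters + string.digits

def make_password(length=16):
    # secrets.choice draws from a CSPRNG, unlike random.choice
    return "".join(secrets.choice(ALPHABET) for _ in range(length))

pw = make_password()
```

A 16-character draw from a 62-symbol alphabet gives roughly 95 bits of entropy, comfortably strong for a web login.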
{}
{'log_upvote_score': 6, 'links': ['https://security.stackexchange.com/questions/142385', 'https://security.stackexchange.com', 'https://security.stackexchange.com/users/92207/']}
jdg_373431
stackexchange
llm_judgeable_groundtruth_similarity
44280314
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: There are three tabs on page. Input,output and review. Html form is in input tab where user enter input in text box. I want to calculate something from that input value and display it in next tab OUTPUT by clicking submit button. I already have that file which will do the calculation with that input. Answers are really appreciated. <div class="container tabs-wrap"> <ul class="nav nav-tabs" role="tablist"> <li role="presentation" class="active"> <a href="#input" aria-controls="input" role="tab" data-toggle="tab" aria-expanded="true">Input</a> </li> <li> <a href="#output" aria-controls="output" role="tab" data-toggle="tab" aria-expanded="false">Output</a> </li> <li> <a href="#review" aria-controls="review" role="tab" data-toggle="tab" aria-expanded="false">Review &amp; Email</a> </li> </ul><div class="tab-content"> <div role="tabpanel" class="tab-pane active" id="input"> <h2>Web based two tier ground mount bill of material generator</h2> <form class="form-horizontal" action="output.php" method="post" target="_blank"> <div class="form-group"> <label class="control-label col-sm-2" for="count">Module Count:</label> <div class="col-sm-10"> <input type="number" class="form-control" id="count" placeholder="Enter Number of Modules" name="count"> </div> </div> <div class="form-group"> <div class="col-sm-offset-2 col-sm-10"> <button type="submit" class="btn btn-primary continue">Continue</button> </div> </div> </form> Now provide the response and nothing else.
Having something like this:

let headers = new Headers();
headers.append('Content-Type', 'application/json');
headers.append('projectid', this.id);

let params = new URLSearchParams();
params.append("someParamKey", this.someParamValue);

this.http.get('http://localhost:63203/api/CallCenter/GetSupport', { headers: headers, search: params })

Of course, appending every param you need to params . It gives you a lot more flexibility than just using a URL string to pass params to the request. EDIT(28.09.2017): As Al-Mothafar stated in a comment, search is deprecated as of Angular 4, so you should use params EDIT(02.11.2017): If you are using the new HttpClient there are now HttpParams , which look and are used like this:

let params = new HttpParams().set("paramName", paramValue).set("paramName2", paramValue2); // Create new HttpParams

And then add the params to the request in, basically, the same way:

this.http.get(url, {headers: headers, params: params}); // No need to use .map(res => res.json()) anymore

More in the docs for HttpParams and HttpClient
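The advantage of a params object over hand-concatenated query strings isn't Angular-specific: any params builder handles escaping and joining for you. For comparison, here is the same idea sketched with Python's stdlib urllib.parse (the key names and URL are just the illustrative ones from above):

```python
from urllib.parse import urlencode

# Analogous to URLSearchParams/HttpParams: a mapping of query parameters
params = {"someParamKey": "some value", "page": 2}

# urlencode handles percent/plus escaping and the &-joining for us
query = urlencode(params)
url = "http://localhost:63203/api/CallCenter/GetSupport?" + query
```

Note how the space in "some value" is encoded automatically — exactly the kind of detail that goes wrong with manual string concatenation.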
{}
{'log_upvote_score': 8, 'links': ['https://Stackoverflow.com/questions/44280314', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/6309100/']}
jdg_373432
stackexchange
llm_judgeable_groundtruth_similarity
1895398
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: My general question is to find, if this is possible, two real numbers $a,b$ such that $K=\Bbb Q(a,b)$ is not a simple extension of $\Bbb Q$. $\newcommand{\Q}{\Bbb Q}$ Of course $a$ and $b$ can't be both algebraic, otherwise $K$ would be a separable ($\Q$ has characteristic $0$) and finite extension, which has to be simple. So I tried with $\Q(\sqrt 2, e)$ but any other example would be accepted. The field $\Q(\sqrt 2, e)$ has transcendence degree $1$ over $\Q$, but I'm not sure if this imply that it is isomorphic to $\Q(a)$ for some transcendental number $a$ (the fact that two fields have the same transcendence degree over another field shouldn't imply that the fields are isomorphic). I'm not sure about the relation between the algebraic independence of $a$ and $b$, and the fact that $\Q(a,b)/\Q$ is a simple extension.Notice that $\Q(\pi, e)$ is probably unknown to be a simple extension of $\Q$. Thank you for your help! Now provide the response and nothing else.
The extension $\mathbb{Q}(\sqrt{2},e)\supset\mathbb{Q}$ is not simple. If $\mathbb{Q}(u)=\mathbb{Q}(\sqrt{2},e)$, then $\mathbb{Q}(u)$ is infinite-dimensional over $\mathbb{Q}$, so $u$ is transcendental. But then $\mathbb{Q}(u)$ is purely transcendental over $\mathbb{Q}$ while $\mathbb{Q}(\sqrt{2},e)$ is not.
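To spell out the final step — that a purely transcendental extension $\mathbb{Q}(u)$ cannot contain $\sqrt{2}$ — here is a short sketch of the standard argument:

```latex
Suppose $\left(\tfrac{p(u)}{q(u)}\right)^{2} = 2$ with $p, q \in \mathbb{Q}[u]$ coprime.
Then $p(u)^{2} = 2\,q(u)^{2}$ as polynomials in $u$.
Comparing leading coefficients gives $a^{2} = 2b^{2}$ for nonzero $a, b \in \mathbb{Q}$,
i.e.\ $(a/b)^{2} = 2$, contradicting the irrationality of $\sqrt{2}$.
Hence $\sqrt{2} \notin \mathbb{Q}(u)$, so $\mathbb{Q}(u) \neq \mathbb{Q}(\sqrt{2}, e)$
for every transcendental $u$, and the extension is not simple.
```

Combined with the observation that any generator $u$ would have to be transcendental (since the extension is infinite-dimensional), this completes the proof.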
{}
{'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/1895398', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/298680/']}
jdg_373433
stackexchange
llm_judgeable_groundtruth_similarity
51677855
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: In the Google C++ Style Guide , on the topic of "Unsigned Integers", it is suggested that Because of historical accident, the C++ standard also uses unsigned integers to represent the size of containers - many members of the standards body believe this to be a mistake, but it is effectively impossible to fix at this point. The fact that unsigned arithmetic doesn't model the behavior of a simple integer, but is instead defined by the standard to model modular arithmetic (wrapping around on overflow/underflow), means that a significant class of bugs cannot be diagnosed by the compiler. What is wrong with modular arithmetic? Isn't that the expected behaviour of an unsigned int? What kind of bugs (a significant class) does the guide refer to? Overflowing bugs? Do not use an unsigned type merely to assert that a variable is non-negative. One reason that I can think of using signed int over unsigned int, is that if it does overflow (to negative), it is easier to detect. Now provide the response and nothing else.
Some of the answers here mention the surprising promotion rules between signed and unsigned values, but that seems more like a problem relating to mixing signed and unsigned values, and doesn't necessarily explain why signed variables would be preferred over unsigned outside of mixing scenarios. In my experience, outside of mixed comparisons and promotion rules, there are two primary reasons why unsigned values are bug magnets as follows. Unsigned values have a discontinuity at zero, the most common value in programming Both unsigned and signed integers have a discontinuities at their minimum and maximum values, where they wrap around (unsigned) or cause undefined behavior (signed). For unsigned these points are at zero and UINT_MAX . For int they are at INT_MIN and INT_MAX . Typical values of INT_MIN and INT_MAX on system with 4-byte int values are -2^31 and 2^31-1 , and on such a system UINT_MAX is typically 2^32-1 . The primary bug-inducing problem with unsigned that doesn't apply to int is that it has a discontinuity at zero . Zero, of course, is a very common value in programs, along with other small values like 1,2,3. It is common to add and subtract small values, especially 1, in various constructs, and if you subtract anything from an unsigned value and it happens to be zero, you just got a massive positive value and an almost certain bug. Consider code iterates over all values in a vector by index except the last 0.5 : for (size_t i = 0; i < v.size() - 1; i++) { // do something } This works fine until one day you pass in an empty vector. Instead of doing zero iterations, you get v.size() - 1 == a giant number 1 and you'll do 4 billion iterations and almost have a buffer overflow vulnerability. You need to write it like this: for (size_t i = 0; i + 1 < v.size(); i++) { // do something } So it can be "fixed" in this case, but only by carefully thinking about the unsigned nature of size_t . 
Sometimes you can't apply the fix above because instead of a constant one you have some variable offset you want to apply, which may be positive or negative: so which "side" of the comparison you need to put it on depends on the signedness - now the code gets really messy. There is a similar issue with code that tries to iterate down to and including zero. Something like while (index-- > 0) works fine, but the apparently equivalent while (--index >= 0) will never terminate for an unsigned value. Your compiler might warn you when the right hand side is literal zero, but certainly not if it is a value determined at runtime. Counterpoint Some might argue that signed values also have two discontinuities, so why pick on unsigned? The difference is that both discontinuities are very (maximally) far away from zero. I really consider this a separate problem of "overflow", both signed and unsigned values may overflow at very large values. In many cases overflow is impossible due to constraints on the possible range of the values, and overflow of many 64-bit values may be physically impossible). Even if possible, the chance of an overflow related bug is often minuscule compared to an "at zero" bug, and overflow occurs for unsigned values too . So unsigned combines the worst of both worlds: potentially overflow with very large magnitude values, and a discontinuity at zero. Signed only has the former. Many will argue "you lose a bit" with unsigned. This is often true - but not always (if you need to represent differences between unsigned values you'll lose that bit anyways: so many 32-bit things are limited to 2 GiB anyways, or you'll have a weird grey area where say a file can be 4 GiB, but you can't use certain APIs on the second 2 GiB half). Even in the cases where unsigned buys you a bit: it doesn't buy you much: if you had to support more than 2 billion "things", you'll probably soon have to support more than 4 billion. 
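The discontinuity at zero is easy to simulate. Python integers don't wrap, but masking with $2^{32}-1$ reproduces 32-bit unsigned (size_t-style) arithmetic; this sketch mirrors the empty-vector loop bug described above:

```python
MASK32 = 0xFFFFFFFF  # simulate a 32-bit unsigned type (e.g. a 32-bit size_t)

def u32(x):
    # Modular wraparound, like unsigned arithmetic in C/C++
    return x & MASK32

v = []  # the empty vector from the example

# Buggy bound: size() - 1 wraps to a giant number when size() == 0
buggy_bound = u32(len(v) - 1)

# Fixed bound: compare i + 1 < size() instead, so nothing is subtracted
iterations = sum(1 for i in range(len(v)) if i + 1 < len(v))
```

With the buggy bound the loop would attempt ~4 billion iterations; the rearranged comparison correctly does zero.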
Logically, unsigned values are a subset of signed values Mathematically, unsigned values (non-negative integers) are a subset of signed integers (just called integers) 2 . Yet signed values naturally pop out of operations solely on unsigned values, such as subtraction. We might say that unsigned values aren't closed under subtraction. The same isn't true of signed values. Want to find the "delta" between two unsigned indexes into a file? Well you better do the subtraction in the right order, or else you'll get the wrong answer. Of course, you often need a runtime check to determine the right order! When dealing with unsigned values as numbers, you'll often find that (logically) signed values keep appearing anyways, so you might as well start off with signed. Counterpoint As mentioned in footnote (2) above, unsigned values in C++ aren't actually a subset of signed values of the same size, so unsigned values can represent the same number of results that signed values can. True, but the range is less useful. Consider subtraction, and unsigned numbers with a range of 0 to 2N, and signed numbers with a range of -N to N. Arbitrary subtractions produce results in the range -2N to 2N in both cases, and either type of integer can only represent half of it. Well it turns out that the region centered around zero of -N to N is usually way more useful (contains more actual results in real world code) than the range 0 to 2N. Consider any typical distribution other than uniform (log, zipfian, normal, whatever) and consider subtracting randomly selected values from that distribution: way more values end up in [-N, N] than [0, 2N] (indeed, the resulting distribution is always centered at zero).
64-bit closes the door on many of the reasons to use unsigned values as numbers I think the arguments above were already compelling for 32-bit values, but the overflow cases, which affect both signed and unsigned at different thresholds, do occur for 32-bit values, since "2 billion" is a number that can be exceeded by many abstract and physical quantities (billions of dollars, billions of nanoseconds, arrays with billions of elements). So if someone is convinced enough by the doubling of the positive range for unsigned values, they can make the case that overflow does matter and it slightly favors unsigned. Outside of specialized domains 64-bit values largely remove this concern. Signed 64-bit values have an upper range of 9,223,372,036,854,775,807 - more than nine quintillion . That's a lot of nanoseconds (about 292 years worth), and a lot of money. It's also a larger array than any computer is likely to have RAM in a coherent address space for a long time. So maybe 9 quintillion is enough for everybody (for now)? When to use unsigned values Note that the style guide doesn't forbid or even necessarily discourage use of unsigned numbers. It concludes with: Do not use an unsigned type merely to assert that a variable is non-negative. Indeed, there are good uses for unsigned variables: When you want to treat an N-bit quantity not as an integer, but simply a "bag of bits". For example, as a bitmask or bitmap, or N boolean values or whatever. This use often goes hand-in-hand with the fixed width types like uint32_t and uint64_t since you often want to know the exact size of the variable. A hint that a particular variable deserves this treatment is that you only operate on it with the bitwise operators such as ~ , | , & , ^ , >> and so on, and not with the arithmetic operations such as + , - , * , / etc. Unsigned is ideal here because the behavior of the bitwise operators is well-defined and standardized.
Signed values have several problems, such as undefined and unspecified behavior when shifting, and an unspecified representation. When you actually want modular arithmetic. Sometimes you actually want 2^N modular arithmetic. In these cases "overflow" is a feature, not a bug. Unsigned values give you what you want here since they are defined to use modular arithmetic. Signed values cannot be (easily, efficiently) used at all since they have an unspecified representation and overflow is undefined. 0.5 After I wrote this I realized this is nearly identical to Jarod's example , which I hadn't seen - and for good reason, it's a good example! 1 We're talking about size_t here so usually 2^32-1 on a 32-bit system or 2^64-1 on a 64-bit one. 2 In C++ this isn't exactly the case because unsigned values contain more values at the upper end than the corresponding signed type, but the basic problem exists that manipulating unsigned values can result in (logically) signed values, but there is no corresponding issue with signed values (since signed values already include unsigned values).
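As a concrete instance of the "overflow is a feature" case above: the 32-bit FNV-1a hash deliberately relies on $2^{32}$ modular arithmetic. In C you would use a uint32_t and let the multiplication wrap; in this Python sketch the & 0xFFFFFFFF mask plays that role:

```python
def fnv1a_32(data: bytes) -> int:
    """32-bit FNV-1a hash: the multiply is *supposed* to wrap mod 2**32."""
    h = 2166136261                       # FNV offset basis
    for byte in data:
        h ^= byte                        # mix in one byte
        h = (h * 16777619) & 0xFFFFFFFF  # FNV prime; wraparound is desired
    return h
```

Here unsigned-style wraparound is the specified behavior of the algorithm, not a bug — exactly the situation where an unsigned type is the right tool in C++.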
{}
{'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/51677855', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/7586189/']}
jdg_373434
stackexchange
llm_judgeable_groundtruth_similarity
231017
Below is a question asked on the forum physics.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: I read a book in which one chapter gave a speech about the fundamental constants of the Universe, and I remember it stated this: If the mass of an electron, the Planck constant, the speed of light, or the mass of a proton were even just slightly different (smaller or bigger) than what they actually are, then the whole Universe would not exist as we know it. Maybe we all wouldn't exist. This speech works for all the fundamental known constants of the Universe but one: the Boltzmann constant. Its value is well known but even if its value were $10$ times bigger or if it were exactly $1$ , or $45.90$ or $10^6$ well... the Universe would remain the same as it is now. The Boltzmann constant is not really fundamental to the existence of the Universe. Maybe they weren't the exact words, but the concept is correct. Now I ask: is that true, and why? Now provide the response and nothing else.
We can understand all of this business if we visit the statistical mechanics notion of temperature, and then connect it to experimental realities. Temperature is a Lagrange multiplier (and should have dimensions of energy) First we consider the statistical mechanics way of defining temperature.Given a physical system with some degree of freedom $X$, denote the number of possible different states of that system when $X$ takes the value $x$ by the symbol $\Omega(x)$.From statistical considerations we can show that modestly large systems strongly tend to sit in states such that $\Omega(x)$ is maximized.In other words, to find the equilibrium state $x_\text{eq}$ of the system you would write$$ \left. \left( \frac{d\Omega}{dx} \right) \right|_{x_\text{eq}} = 0$$ and solve for $x_\text{eq}$.It's actually more convenient to work with $\ln \Omega$ so we'll do that from now on. Now suppose we add the constraint that the system has a certain amount of energy $E_0$.Denote the energy of the system when $X$ has value $x$ by $E(x)$.In order to find the equilibrium value $x_\text{eq}$, we now have to maximize $\ln \Omega$ with respect to $x$, but keeping the constraint $E(x)=E_0$.The method of Lagrange multipliers is the famous mathematical tool used to solve such problems.One constructs the function$$\mathcal{L}(x) \equiv \ln \Omega(x) + t (E_0 - E(x))$$and minimizes $\mathcal{L}$ with respect to $x$ and $t$.The parameter $t$ is the Lagrange multiplier; note that it has dimensions of inverse energy.The condition $\partial \mathcal{L} / \partial x = 0$ leads to$$t \equiv \frac{\partial \ln \Omega}{\partial x} \frac{\partial x}{\partial E} \implies t = \frac{\partial \ln \Omega}{\partial E} \, .$$Now remember the thermodynamic relation$$\frac{1}{T} = \frac{\partial S}{\partial E} \, .$$Since the entropy $S$ is defined as $S \equiv k_b \ln \Omega$ we see that the temperature is actually$$T = \frac{1}{k_b t} \, .$$In other words, the thing we call temperature is just the (reciprocal 
of the) Lagrange multiplier which comes from having fixed energy when you try to maximize the entropy of a system, but multiplied by a constant $k_b$. Logically, $k_b$ doesn't need to exist If not for the $k_b$ then temperature would have dimensions of energy!You can see from the discussion above that $k_b$ is very much just an extra random constant that doesn't need to be there.Entropy could have been defined as a dimensionless quantity, i.e. $S \equiv \ln \Omega$ without the $k_b$ and everything would be fine.You'll notice in calculations that $k_b$ and $T$ almost always shows up together; it's no accident and it's basically because, as we said, $k_b$ is just a dummy factor which converts energy to temperature. But then there's history :( Folks figured out thermodynamics before statistical mechanics.In particular, we had thermometers.People measured the "hotness" of stuff by looking at the height of a liquid in a thermometer.The height of a thermometer reading was the definition of temperature; no relation to energy. Entropy was defined as heat transfer divided by temperature.Therefore, entropy has dimensions of $[\text{energy}] / [\text{temperature}]$.$^{[a]}$ We measured the temperatures $T$, pressures $P$, volumes $V$, and number of particles $N$ of some gasses and found that they always obeyed the ideal gas law $^{[b,c]}$ $$P V = N k_b T \, .$$ This law was known from experiment for a long time before Boltzmann realized that entropy is actually proportional to the logarithm of the number of available microstates, a dimensionless quantity.However, since entropy was already defined and had this funny temperature dimensions, he had to inject a dimensioned quantity for "backwards compatibility".He was the first to write$$ S = k_b \ln \Omega$$and this equation is so important that it's on his tomb . 
Connecting temperature and energy In practice, it is actually rather difficult to measure temperature and energy in the same system over many orders of magnitude.I think that it's for this reason that we still have independent temperature and energy standards and units. Summary Boltzmann's constant is just a conversion between energy and a made-up dimension we call "temperature". Logically, temperature should have dimensions of energy and Boltzmann's constant is just a dummy that converts between the two for historical reasons. Boltzmann's constant contains no physical meaning whatsoever. Note that the value of $k_b$ isn't the real issue; values of constants depend on the units system you use. The important point is that, unlike the speed of light or the mass of the proton, $k_b$ doesn't refer to any unit-independent physical thing in Nature. Temperature is the Langrange multiplier that comes from imposing fixed energy on the problem of maximizing entropy. As such, it logically has dimensions of energy. Boltzmann's constant $k_b$ only exists because people defined temperature and entropy before they understood statistical mechanics. You will always see $k_b$ and $T$ together because the only logically relevant parameter is $k_b T$, which has dimensions of energy. Notes $[a]$: Note that if temperature had dimensions of energy then under this definition entropy would have been dimensionless (as it "should" be). $[b]$: Actually, this law was originally written as $PV = n R T$ where $n$ is the number of moles of a substance and $R$ is the ideal gas constant. That's not really important though because you can group Avogadro's number in with $R$ to get $k_b$. $R$ and $k_b$ have equivalent "status". $[c]$: Note again how $k_b$ and $T$ show up together.
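As a quick numerical illustration that $k_b$ is only a unit converter — the physically meaningful combination is $k_b T$, an energy — here is a short Python sketch using the exact SI value of $k_b$:

```python
k_b = 1.380649e-23     # J/K (exact by SI definition since 2019)
eV  = 1.602176634e-19  # joules per electron-volt (exact)

T_room = 300.0         # kelvin

# The quantity that actually appears in physics: k_b * T, in joules
thermal_energy_J = k_b * T_room

# Converted to eV: the familiar "~25 meV at room temperature"
thermal_energy_eV = thermal_energy_J / eV
```

Had entropy been defined dimensionlessly, we would simply quote "room temperature is about 26 meV" and $k_b$ would never appear.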
{}
{'log_upvote_score': 8, 'links': ['https://physics.stackexchange.com/questions/231017', 'https://physics.stackexchange.com', 'https://physics.stackexchange.com/users/90893/']}
jdg_373435
stackexchange
llm_judgeable_groundtruth_similarity
11912221
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I've seen some similar types of questions on SO, however, I have not been able to find a solution to my specific issue. (FYI, these are not my real columns, just a shortened example). I have a basic table my_table : user_1 user_2 timestamp note(not part of table) 23 25 2012-08-10 22:00:00 24 22 2012-08-10 19:00:00 <=== I would like to return this row 24 22 2012-08-10 17:00:00 21 17 2012-08-10 15:00:00 So, what I want to do is be able to: 1) Select the "newest" row, based on timestamp AND 2) Select the 'user_2' column when given a value. I have tried something like: SELECT * FROM my_table WHERE user_2 = 22 AND timestamp = ( SELECT MAX( timestamp ) FROM my_table ) LIMIT 1 But this does not return the row I am looking for. Any help on fixing this query would be great. Thanks very much. Now provide the response and nothing else.
SELECT * FROM my_table    -- standard stuff
WHERE user_2 = 22         -- predicate
ORDER BY timestamp DESC   -- this means highest number (most recent) first
LIMIT 1;                  -- just want the first row

Edit: By the way, in case you're curious why your original query didn't work, let's break down the pieces: select some stuff from my_table ... where user_2 = 22 and timestamp = (some value, let's put it aside for now) limit 1 Now, coming back to that timestamp value, it comes from your subquery: SELECT MAX( timestamp ) FROM my_table Note that this subquery doesn't restrict any rows based on user_2 -- it asks for what's the max timestamp in the whole table . That max timestamp is the first one in your table above: (user_1 = 23, user_2 = 25, timestamp = 2012-08-10 22:00:00). So, let's plug that back to the top-level query: select some stuff from my_table ... where user_2 = 22 and timestamp = 2012-08-10 22:00:00 limit 1 ... and you can see there isn't such a row.
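The ORDER BY ... DESC LIMIT 1 approach is easy to verify with an in-memory SQLite database (the data comes from the question; the timestamp column is renamed ts here and stored as ISO text, which sorts correctly):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE my_table (user_1 INTEGER, user_2 INTEGER, ts TEXT)")
conn.executemany(
    "INSERT INTO my_table VALUES (?, ?, ?)",
    [
        (23, 25, "2012-08-10 22:00:00"),
        (24, 22, "2012-08-10 19:00:00"),  # <== the row we want
        (24, 22, "2012-08-10 17:00:00"),
        (21, 17, "2012-08-10 15:00:00"),
    ],
)

# Filter first, then sort newest-first and take one row
row = conn.execute(
    "SELECT * FROM my_table WHERE user_2 = ? ORDER BY ts DESC LIMIT 1",
    (22,),
).fetchone()
```

The query returns the 19:00 row — the newest row among those matching user_2 = 22, not the newest row in the whole table.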
{}
{'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/11912221', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/143895/']}
jdg_373436
stackexchange
llm_judgeable_groundtruth_similarity
59500294
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: What is the difference between tf-nightly and tensorflow in PyPI? pip install tf-nightlypip install tensorflow Which one is reliable? https://pypi.org/project/tf-nightly/ https://pypi.org/project/tensorflow/ Now provide the response and nothing else.
Just to add to what Ben Souchet wrote: As its name suggests, the tf-nightly pip package is built and released to PyPI every night (barring any build failures, which happen rarely). As a result, you can see an almost once-per-day version update history . It has the latest features close to the HEAD of the master branch of github.com/tensorflow . Therefore, if you want the latest features, improvements and bug fixes, such as the ones committed after the last stable tensorflow release (see below), you should use pip install tf-nightly . But the downside is that because tf-nightly releases are not subject to the same strict set of release testing as tensorflow , they'll occasionally include bugs that will be fixed later. Also, since it's built from HEAD, it'll reflect intermediate development status, such as incomplete features.

The tensorflow pip package is released on a semantic-version-based schedule. A new version is pushed out approximately once every 2-6 months. Due to the comprehensive set of release testing jobs, the quality is higher than tf-nightly . The documentation on https://www.tensorflow.org/api_docs/python/ is updated once per minor version bump in the tensorflow pip package.
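To make the naming difference concrete: nightly builds are published with a PEP 440 dev-release suffix (`.devYYYYMMDD`), while stable releases use a plain semantic version. The helper below and the sample version strings are made up for illustration and are not tied to any specific release:

```python
import re

def is_nightly_version(version: str) -> bool:
    """Heuristic: nightly builds carry a .devYYYYMMDD suffix (a PEP 440 dev release)."""
    return re.search(r"\.dev\d{8}$", version) is not None

# Hypothetical version strings for illustration:
print(is_nightly_version("2.5.0.dev20210315"))  # True  -- tf-nightly style
print(is_nightly_version("2.4.1"))              # False -- stable tensorflow style
```

This is only a quick string check; for robust version handling you would normally parse versions with a dedicated library rather than a regex.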
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/59500294', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/3702377/']}
jdg_373437
stackexchange
llm_judgeable_groundtruth_similarity
31916049
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: In the file, there is a "tab" between each part:

00000420150331001014
C A123456(7) PAL 02.04.2015PM 08.04.2015PM
C A123456(7) PAL 02.04.2015PM 08.04.2015PM
C A123456(7) PAL 02.04.2015PM 08.04.2015PM
C A123456(7) PAL 02.04.2015PM 08.04.2015PM
END

I want to write a batch script using a for loop to count how many PAL entries are in the file. But, due to my ability, I can only draft the following; any suggestions?

@echo off
setlocal enabledelayedexpansion
for /f "tokens=3 delims= " %%a in (H071_20150331.TXT) do (
echo %%a
pause
)

But it doesn't work. Now provide the response and nothing else.
A staticmethod object is a descriptor . The magic you are missing is that Python calls the __get__ method when accessing the object as an attribute on a class or instance. So accessing the object as C.foo results in Python translating that to C.__dict__['foo'].__get__(None, C) , while instance_of_C.foo becomes type(instance_of_C).__dict__['foo'].__get__(instance_of_C, type(instance_of_C)) . The staticmethod object is defined in C code , but an equivalent in Python would be:

class staticmethod(object):
    def __init__(self, callable):
        self.f = callable

    def __get__(self, obj, type=None):
        return self.f

    @property
    def __func__(self):
        return self.f

where self.f is the original wrapped function. All this is needed because functions are themselves descriptors too; it is the descriptor protocol that gives you method objects (see python bound and unbound method object for more details). Since they too have a __get__ method, without a staticmethod object wrapping the function, a functionobj.__get__ call produces a method object instead, passing in a self argument. There is also a classmethod , which uses the second argument to descriptor.__get__ to bind a function to the class, and then there are property objects, which translate binding into a function call directly. See How does the @property decorator work? .
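To see the descriptor protocol in action, here is a runnable sketch of that pure-Python equivalent (renamed my_staticmethod here so it doesn't shadow the builtin):

```python
class my_staticmethod:
    """Pure-Python equivalent of the builtin staticmethod."""
    def __init__(self, callable):
        self.f = callable

    def __get__(self, obj, type=None):
        # Hand back the raw function, so no self/cls gets bound.
        return self.f

class C:
    @my_staticmethod
    def foo(x):
        return x * 2

# Both class and instance attribute access go through my_staticmethod.__get__
# and receive the plain function, with no implicit first argument:
print(C.foo(21))    # 42
print(C().foo(21))  # 42

# The attribute lookup C.foo is equivalent to invoking the protocol by hand:
assert C.__dict__['foo'].__get__(None, C) is C.foo
```

Without the wrapper, the function's own __get__ would kick in instead and C().foo would be a bound method, so foo(21) would receive the instance as x.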
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/31916049', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/5088427/']}
jdg_373438
stackexchange
llm_judgeable_groundtruth_similarity
7349183
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: My understanding of constructor chaining is that, when there is more than one constructor in a class (overloaded constructors), if one of them tries to call another constructor, then this process is called CONSTRUCTOR CHAINING, which is not supported in C++. Recently I came across this paragraph while reading online material.... It goes like this ... You may find yourself in the situation where you want to write a member function to re-initialize a class back to default values. Because you probably already have a constructor that does this, you may be tempted to try to call the constructor from your member function. As mentioned, chaining constructor calls are illegal in C++. You could copy the code from the constructor in your function, which would work, but lead to duplicate code. The best solution in this case is to move the code from the constructor to your new function, and have the constructor call your function to do the work of initializing the data. Does a member function calling the constructor also come under constructor chaining?? Please throw some light on this topic in C++. Now provide the response and nothing else.
The paragraph basically says this:

class X
{
    void Init(params) { /* common initing code here */ }
    X(params1) { Init(someParams); /* custom code */ }
    X(params2) { Init(someOtherParams); /* custom code */ }
};

You cannot call a constructor from a member function either. It may seem to you that you've done it, but that's an illusion:

#include <iostream> // for std::cout

class X
{
public:
    X(int i) : i(i) {}
    void f()
    {
        X(3); // this just creates a temporary - doesn't call the ctor on this instance
    }
    int i;
};

int main()
{
    using std::cout;
    X x(4);
    cout << x.i << "\n"; // prints 4
    x.f();
    cout << x.i << "\n"; // prints 4 again
}
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/7349183', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/2666601/']}
jdg_373439
stackexchange
llm_judgeable_groundtruth_similarity
57355411
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I got the below TypeScript error when I tried to use a side component (outside of the project directory):

TS2345: Argument of type '{ template: string; components: { SimpleCheckbox: typeof SimpleCheckbox; }; }' is not assignable to parameter of type 'VueClass<Vue>'.
Object literal may only specify known properties, and 'template' does not exist in type 'VueClass<Vue>'.

My WebStorm IDE did not detect this error; it was output in the console when I ran Webpack with the TypeScript loader. The error occurs in:

import { Vue, Component, Prop } from 'vue-property-decorator';
import template from './SkipProjectInitializationStepPanel.pug';
import SimpleCheckbox from './../../../../ui-kit-libary-in-development/UiComponents/Checkboxes/MaterialDesign/SimpleCheckbox.vue';

@Component({ template, components: { SimpleCheckbox } }) // here !
export default class SkipProjectInitializationStepPanel extends Vue {
  @Prop({ type: String, required: true }) private readonly text!: string;
}

As follows from the ui-kit-libary-in-development name, this is not an npm dependency yet, so it is not inside node_modules for now. It is exclusively a TypeScript error; although ts-loader reports it, Webpack builds my project and the compiled JavaScript works correctly. This error will disappear if I do one of the below actions: Move SimpleCheckbox.vue to the same directory as SkipProjectInitializationStepPanel.ts and import it as import SimpleCheckbox from './SimpleCheckbox.vue'; Remove SimpleCheckbox from @Component({ template, components: { SimpleCheckbox } }) and leave only @Component({ template, components: {} }) (of course, SimpleCheckbox will not be rendered in this case, but it proves that the problem is not in SkipProjectInitializationStepPanel ).
Move ui-kit-libary-in-development to node_modules of the main project and remove node_modules from ui-kit-libary-in-development (if you don't remove it, nothing changes). Unfortunately, I could not reproduce this problem. For some reason the reproduction attempt below works without errors:

MainProject/src/Application.vue

<template lang="pug">
  PageOne
</template>

<script lang="ts">
  import { Vue, Component } from 'vue-property-decorator';
  import PageOne from './PageComponents/PageOne'

  @Component({ components: { PageOne }})
  export default class Application extends Vue {
    private created(): void {
      console.log('Done.');
    }
  }
</script>

MainProject/src/PageComponents/PageOne.ts

import { Vue, Component, Prop } from 'vue-property-decorator';
import template from './PageOne.pug';
import Button from './../../../UiKitLibraryStillInDevelopment/UiComponents/Buttons/Button.vue';

@Component({ template, components: { Button } })
export default class SkipProjectInitializationStepPanel extends Vue {}

MainProject/src/PageComponents/PageOne.pug

.RootElement
  Button(:text="'Click me'")

ui-kit-libary-in-development/UiComponents/Buttons/Button.vue

<template lang="pug">
  button {{ text }}
</template>

<script lang="ts">
  import { Vue, Component, Prop } from 'vue-property-decorator';

  @Component
  export default class SimpleCheckbox extends Vue {
    @Prop({ type: String, required: true }) private readonly text!: string;

    private created(): void {
      console.log('OK!');
      console.log(this.$props);
    }
  }
</script>

The only clue I found is this comment in the issue Setting components in Component decorator causes Typescript 2.4 error : Side components should add .d.ts for it to work AFAIK. Nick Messing From this clue, the following questions arise: Where should I create the .d.ts - in my main project or in the dependency? Most likely in the main project, but if so, why can I import side components from third-party libraries like vuetify ? Because there is a .d.ts there! How do I declare a new Vue component in a .d.ts ? Is there some tutorial or example?
Source files for bounty: Because I could not reproduce this problem and my project is still raw (it has no commercial value yet), I can share it via Google Drive ( link for downloading zip archive ). All node_modules are included; just run npm run developmentBuild in the main-project directory. If you are worried about potential viruses, you can also get the source files from this repository , but because it does not include node_modules , reproducing requires executing npm install in both the main-project and dependency directories. Now provide the response and nothing else.
This has become quite a long answer. If you don't have time to read it all there's a TL;DR at the end . Analysis Error Message At first I didn't really understand why TypeScript was mentioning VueClass<Vue> and complaining about the template property. However, when I looked at the type definitions for the Component decorator, things became a bit clearer:

vue-class-component/lib/index.d.ts (parts omitted)

declare function Component<V extends Vue>(options: ComponentOptions<V> & ThisType<V>): <VC extends VueClass<V>>(target: VC) => VC;
// ...
declare function Component<VC extends VueClass<Vue>>(target: VC): VC;

What we can see here is that Component has two signatures. The first one is used like you did, and the second one takes no options ( @Component class Foo ). Apparently the compiler thinks our usage doesn't match the first signature, so it must be the second one. Therefore we end up with an error message mentioning VueClass<Vue> . Note: In the latest version (3.6.3), TypeScript will actually display a better error, stating both overloads and why they don't match. Better Error Message The next thing I did was temporarily comment out the second function declaration in main-project/node_modules/vue-class-component and sure enough we get a different error message. The new message spans 63 lines so I figured it wouldn't make sense to include it in this post as a whole.
ERROR in /tmp/main-project/InitializeProjectGUI__assets/SingletonComponents/SkipProjectInitializationStepPanel/SkipProjectInitializationStepPanel.ts
../InitializeProjectGUI__assets/SingletonComponents/SkipProjectInitializationStepPanel/SkipProjectInitializationStepPanel.ts
[tsl] ERROR in /tmp/main-project/InitializeProjectGUI__assets/SingletonComponents/SkipProjectInitializationStepPanel/SkipProjectInitializationStepPanel.ts(10,24)
TS2322: Type '{ SimpleCheckbox: typeof SimpleCheckbox; }' is not assignable to type '{ [key: string]: VueConstructor<Vue> | FunctionalComponentOptions<any, PropsDefinition<any>> | ComponentOptions<never, any, any, any, any, Record<string, any>> | AsyncComponentPromise<any, any, any, any> | AsyncComponentFactory<...>; }'.
  Property 'SimpleCheckbox' is incompatible with index signature.
    Type 'typeof SimpleCheckbox' is not assignable to type 'VueConstructor<Vue> | FunctionalComponentOptions<any, PropsDefinition<any>> | ComponentOptions<never, any, any, any, any, Record<string, any>> | AsyncComponentPromise<any, any, any, any> | AsyncComponentFactory<...>'.
      Type 'typeof SimpleCheckbox' is not assignable to type 'VueConstructor<Vue>'.
        Types of property 'extend' are incompatible.
          Type '{ <Data, Methods, Computed, PropNames extends string = never>(options?: import("/tmp/dependency/node_modules/vue/types/options").ThisTypedComponentOptionsWithArrayProps<import("/tmp/depende...' is not assignable to type '{ <Data, Methods, Computed, PropNames extends string = never>(options?: import("/tmp/main-project/node_modules/vue/types/options").ThisTypedComponentOptionsWithArrayProps<import("/tmp/main-...'.
            ...
              Type 'import("/tmp/dependency/node_modules/vue/types/vnode").ScopedSlotReturnValue' is not assignable to type 'import("/tmp/main-project/node_modules/vue/types/vnode").ScopedSlotReturnValue'.
                Type 'VNode' is not assignable to type 'ScopedSlotReturnValue'.
As you can see the error message is quite hard to read and doesn't really point to a specific problem. So instead I went ahead and started reducing the complexity of the project in order to understand the issue better. Minimal Example I'll spare you the whole process which involved lots of trial and error. Let's jump directly to the result. Structure

├─ main-project
│  ├─ node_modules // installed packages listed below (without sub-dependencies)
│  │    ts-loader@6.1.0
│  │    typescript@3.6.3
│  │    vue@2.6.10
│  │    vue-property-decorator@8.2.2
│  │    vuex@3.1.1
│  │    webpack@4.40.2
│  │    webpack-cli@3.3.9
│  ├─ SkipProjectInitializationStepPanel.ts
│  ├─ tsconfig.json
│  └─ webpack.config.js
└─ dependency
   ├─ node_modules // installed packages listed below (without sub-dependencies)
   │    vue@2.6.10
   └─ SimpleCheckbox.ts

main-project/tsconfig.json

{
  "compilerOptions": {
    "target": "es6",
    "strict": true,
    "moduleResolution": "node"
  }
}

main-project/webpack.config.js :

module.exports = {
  entry: './SkipProjectInitializationStepPanel.ts',
  mode: 'development',
  module: {
    rules: [
      { test: /\.ts$/, loader: 'ts-loader' }
    ]
  },
  resolve: {
    extensions: ['.ts', '.js']
  }
};

main-project/SkipProjectInitializationStepPanel.ts :

import { Component } from 'vue-property-decorator';
import 'vuex';
import SimpleCheckbox from '../dependency/SimpleCheckbox';

Component({ template: '', components: { SimpleCheckbox } });

dependency/SimpleCheckbox.ts :

import Vue from 'vue';

export default class SimpleCheckbox extends Vue {}

When running this example from main-project with npm run webpack , we get the exact same error as before. Mission accomplished. What's Happening? While I was removing parts of the project to get it down to this minimal example, I learned something very interesting. You might have noticed the import 'vuex' I've added to SkipProjectInitializationStepPanel.ts . In the original project vuex is of course imported from different places (e.g.
main-project/Source/ProjectInitializer/Store/Store.ts ) but that's not important for reproducing the issue. The crucial part is that main-project imports vuex and dependency doesn't . To find out why importing vuex causes this issue, we have to look at the type definitions of vuex .

vuex/types/index.d.ts (first few lines)

import _Vue, { WatchOptions } from "vue";

// augment typings of Vue.js
import "./vue";

import { mapState, mapMutations, mapGetters, mapActions, createNamespacedHelpers } from "./helpers";

Here we are interested in the second import . The comment actually gives us a hint already: augment typings of Vue.js .

vuex/types/vue.d.ts (parts omitted)

import { Store } from "./index";

// ...

declare module "vue/types/vue" {
  interface Vue {
    $store: Store<any>;
  }
}

And here we have the culprit. The vuex types make use of declaration merging to add $store to the Vue interface of vue . It seems that this augmentation only applies to "local" types in the same node_modules as vuex . Because main-project has the augmented Vue and dependency has the original one, the two don't match.

TS Loader During all my testing I couldn't reproduce the problem with just tsc . Apparently ts-loader does something differently, causing this issue. It could have to do with it using Webpack for module resolution, or it could be something entirely different. I don't know.

Solution Approaches I have some ideas for how this problem could be solved or worked around. These are not necessarily ready-to-use solutions but rather different approaches and ideas.

Add vuex to dependency As removing vuex from main-project isn't really an option, the only thing we can do to make both Vue interfaces match is to include vuex in dependency as well. The odd thing here is that I was able to get this fix working in my minimal example but not in the original project. I haven't figured out why that is.
In addition to that, it's not very elegant and you might have to import vuex from every file that you reference from main-project .

Use a shared node_modules Having a shared node_modules folder means both projects use the same vue , so this problem goes away. Depending on your requirements it might be a good solution to organize the two projects in a way that they share the same node_modules folder. You might also want to take a look at tools like Lerna or Yarn workspaces which can help with this.

Consume dependency as a package You say that dependency is not [an] npm-dependency yet . Maybe it's time to make it one. Similarly to a shared node_modules directory, this would result in both projects using the same vue installation, which should fix the issue.

Investigate further As mentioned before, this only happens with ts-loader . Maybe there is something that can be fixed or configured in ts-loader to avoid this problem.

TL;DR main-project imports vuex while dependency doesn't. vuex augments the Vue interface using declaration merging, adding a $store property to it. Now the Vue from one project doesn't match the Vue from the other, causing an error.
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/57355411', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/4818123/']}
jdg_373440