{"idx": 578, "commit_message": "Initial provider tests (#1794) * Initial provider tests * Added VRC and wrapt (dependency) libs, added basic search test, initial use of cassettes instead of html, updated yaml test structure * Rebased, fixed tests, added tests for zooqle and thepiratebay * Update test_search * Updated vcrpy to v1.10.5 * Added docstring * rm extratorrent * Add pubdate to tests * Fix post-processing ignoring files in subdirectories * Improve logging for post_processor * Add anidex & fix pubdate tests * Add vcrpy to requirements.txt * Added torrentz2 * Fix elitetorrent & add tests * Added tests for horriblesubs * Added limetorrents test & removed dos * Rewrite newptc and add tests, some other fixes to providers * Fix newpct for guessit * Added shanaproject * Set newpct as public * Added tokyotoshokan * flake * Fix torrent9 search and add tests * Added rarbg tests * Added nyaa tests", "target": 0}
{"idx": 3849, "commit_message": "improve ColorFilter performance", "target": 1}
{"idx": 1318, "commit_message": "Lots of networking fixes, especially with predictions and the like. At least interpolation goes the correct direction!", "target": 0}
{"idx": 2371, "commit_message": "Drop the use of Splay many places - should improve performance!", "target": 1}
{"idx": 2793, "commit_message": "Performance and debug improvements", "target": 1}
{"idx": 146, "commit_message": "improved Rbind, added Bugs and Tricks", "target": 0}
{"idx": 563, "commit_message": "Worked on logging improvements", "target": 0}
{"idx": 4086, "commit_message": "[PATCH] allows requesting of multiple moments. restructures blocks\n for performance improvements", "target": 1}
{"idx": 207, "commit_message": "New appended files are now searched with taglib. Improved file manager dialog. New default location for music collection.", "target": 0}
{"idx": 2725, "commit_message": "Motivation `MetricsBucketedHistogram` have a larger memory footprint than necessary. Solution Store the data for the previous window's snapshot in a mutable struct that can be used to create an immutable `Snapshot` as needed. Result Memory footprint drops to 7.4 Kb from 14.5 Kb per instance. Due to the change in synchronization, `add` improves in performance. RB_ID=733126", "target": 1}
{"idx": 2943, "commit_message": "Merge branch 'newtypeUID2' into performanceSpdFix", "target": 0}
{"idx": 3890, "commit_message": "Optimize the .travis.yml 1) Enable caching for node_modules (see [URL]/2013-12-05-speed-up-your-builds-cache-your-dependencies/) 2) Run 'npm install' only once, to save a few milliseconds of node startup time", "target": 1}
{"idx": 3538, "commit_message": "#26 cache the value for better performance.", "target": 1}
{"idx": 2280, "commit_message": "Implement a kickass vectorized nearest neighbor search to improve analogy performance at the expensive of heavier memory usage. But does it make toast?", "target": 1}
{"idx": 1912, "commit_message": "Revert of Enabled gpu_times metrics. (patchset #1 id:1 of [URL]/937073003/) Reason for revert: Causing not redness. Original issue's description: > Enabled gpu_times metrics. > > The issue where gpu_times would use too much memory has been fixed: > [URL]/920523002 > > In order to prepare for future changes where additional metrics > will be added to the TBM module, I have also preemptively added a > filter function which filters out values relating to gpu_times. > > [URL] > BUG=455292, 453131 > TEST=local runs measuring memory usage shows ~700mb > > Committed: [URL]/ed5b27a7459e35248992fd5549b72b5731a8baa4 >", "target": 0}
{"idx": 946, "commit_message": "*** EFM32 branch *** 1. Remove \"RT_USING_NETUTILS\" defined in \"rtconfig.h\" to avoid compiling error - Warning: Due to the memory limitation, running lwIP on Gecko devices (EFM32G) is not properly tested. - Warning: To test lwIP, please revert the file, \"components etlwipsrcarchsys_arch.c\", to revision 1620. There is a runtime error when working with the latest revision of that file. This error is currently under evaluation.", "target": 0}
{"idx": 2700, "commit_message": "Don't use HTMLPurifier in list views to improve performance refs #9036", "target": 1}
{"idx": 3744, "commit_message": "rwsem: check counter to avoid cmpxchg calls This patch tries to reduce the amount of cmpxchg calls in the writer failed path by checking the counter value first before issuing the instruction. If ->count is not set to RWSEM_WAITING_BIAS then there is no point wasting a cmpxchg call. Furthermore, Michel states \"I suppose it helps due to the case where someone else steals the lock while we're trying to acquire sem->wait_lock.\" Two very different workloads and machines were used to see how this patch improves throughput: pgbench on a quad-core laptop and aim7 on a large 8 socket box with 80 cores. Some results comparing Michel's fast-path write lock stealing (tps-rwsem) on a quad-core laptop running pgbench: | db_size | clients | tps-rwsem | tps-patch | +---------+----------+----------------+--------------+ | 160 MB | 1 | 6906 | 9153 | + 32.5 | 160 MB | 2 | 15931 | 22487 | + 41.1% | 160 MB | 4 | 33021 | 32503 | | 160 MB | 8 | 34626 | 34695 | | 160 MB | 16 | 33098 | 34003 | | 160 MB | 20 | 31343 | 31440 | | 160 MB | 30 | 28961 | 28987 | | 160 MB | 40 | 26902 | 26970 | | 160 MB | 50 | 25760 | 25810 | ------------------------------------------------------ | 1.6 GB | 1 | 7729 | 7537 | | 1.6 GB | 2 | 19009 | 23508 | + 23.7% | 1.6 GB | 4 | 33185 | 32666 | | 1.6 GB | 8 | 34550 | 34318 | | 1.6 GB | 16 | 33079 | 32689 | | 1.6 GB | 20 | 31494 | 31702 | | 1.6 GB | 30 | 28535 | 28755 | | 1.6 GB | 40 | 27054 | 27017 | | 1.6 GB | 50 | 25591 | 25560 | ------------------------------------------------------ | 7.6 GB | 1 | 6224 | 7469 | + 20.0% | 7.6 GB | 2 | 13611 | 12778 | | 7.6 GB | 4 | 33108 | 32927 | | 7.6 GB | 8 | 34712 | 34878 | | 7.6 GB | 16 | 32895 | 33003 | | 7.6 GB | 20 | 31689 | 31974 | | 7.6 GB | 30 | 29003 | 28806 | | 7.6 GB | 40 | 26683 | 26976 | | 7.6 GB | 50 | 25925 | 25652 | ------------------------------------------------------ For the aim7 worloads, they overall improved on top of Michel's patchset. For full graphs on how the rwsem series plus this patch behaves on a large 8 socket machine against a vanilla kernel: [URL]/rwsem-aim7-results.tar.gz", "target": 1}
{"idx": 574, "commit_message": "Added description to StructuredCoalescentTree and corrected times to be positive. Improved example figure (in beast-graphics) to draw 6 random structured coalescent trees on the same individuals and the same time scale.", "target": 0}
{"idx": 314, "commit_message": "improve logging in pcsd", "target": 0}
{"idx": 2736, "commit_message": "sched/fair: Add irq load awareness to the tick CPU selection logic IRQ load is not taken into account when determining whether a task should be migrated to a different CPU. A task that runs for a long time could get stuck on CPU with high IRQ load causing degraded performance. Add irq load awareness to the tick CPU selection logic. CRs-fixed: 809119", "target": 0}
{"idx": 2065, "commit_message": "New VIsual C# Version Better Performance , Real Random Interval , Fix Bug , User can now edit any message they want. More Realistic Icon From Iconarchive", "target": 1}
{"idx": 2807, "commit_message": "Optimize performance of API asset lists improving endpoint response time by roughly 10x. Fixes #1422", "target": 1}
{"idx": 2278, "commit_message": "- Patch #100563 by m3avrck: improved phpDoc of drupal_add_css.", "target": 0}
{"idx": 1076, "commit_message": ":lipstick: improve comments", "target": 0}
{"idx": 304, "commit_message": "file_block_map_memory: turn the fileBlockMap into an interface A future commit will provide a disk-cache-backed implementation of this interface. Issue: KBFS-3852", "target": 0}
{"idx": 1241, "commit_message": "Actually handle GetDevices in networkserver", "target": 0}
{"idx": 3499, "commit_message": "Make reading maildirs more efficient.", "target": 1}
{"idx": 2025, "commit_message": "optimize update_performance", "target": 1}
{"idx": 2474, "commit_message": "Make sure the dictionary buffers are NIO-based for better performance under load.", "target": 1}
{"idx": 1682, "commit_message": "making opencv cam networked", "target": 0}
{"idx": 1143, "commit_message": "improve method_missing", "target": 0}
{"idx": 1646, "commit_message": "Improved Tuning doctests.", "target": 0}
{"idx": 3580, "commit_message": "Avoid round trip to NSString to decrease excessive memory consumption for performance tests with a factor of 100x (the string leaked, could perhaps be fixed differently, but this fix seems good). Benchmark results when running Eager run only, 1M iterations Before optimization, peak RAM usage 650MB at end of test After optimization, peak RAM usage 6,3MB at end of test Performance before: ================================= 23545 ms encode 3222 ms decode 1183 ms use 1642 ms dealloc 6047 ms decode+use+dealloc ================================= Performance after: ================================= 20589 ms encode 2681 ms decode 1002 ms use 1446 ms dealloc 5128 ms decode+use+dealloc =================================", "target": 1}
{"idx": 1649, "commit_message": "ImageDownload rewritten using QNetworkAccessManager. Images cache moved from Core to ImageDownload.", "target": 0}
{"idx": 3650, "commit_message": "Small performance improvement for solvers", "target": 1}
{"idx": 4080, "commit_message": "[PATCH] Adds sfence and lfence options if user builds with assembly\n to enable more efficient use of out of order engines", "target": 1}
{"idx": 3216, "commit_message": "Implement SimpleLimit which is more efficient than regular Limit.", "target": 1}
{"idx": 3721, "commit_message": "I have renamed 'NewCoalescentLikelihood' as 'CoalescentLikelihood' to clarify things a bit. NewCoalescentLikelihood was more efficient and clean implementation and it's parser was installed as the default 'coalescentLikelihood' anyway since BEAST v1.4.x. I have renamed 'CoalescentLikelihood' to AbstractCoalescentLikelihood and removed the parser (made it abstract). This class is used as a base by a number of other classes (i.e., BayesianSkylineLikelihood). I guess that ideally, we see what of AbstractCoalescentLikelihood the descendents actually use and strip out some of it.", "target": 1}
{"idx": 2237, "commit_message": "Merge branch '2.8' into 3.0 * 2.8: Fix merge [Process] Fix running tests on HHVM>=3.8 [Form] Improved performance of ChoiceType and its subtypes Removed an object as route generator argument Conflicts: src/Symfony/Bridge/Doctrine/Tests/Form/Type/EntityTypeTest.php src/Symfony/Bundle/FrameworkBundle/Resources/config/form.xml", "target": 1}
{"idx": 1462, "commit_message": "Don't use inefficient strlen to get number of bytes read", "target": 1}
{"idx": 3706, "commit_message": "SELinux: reduce overhead of mls_level_isvalid() function call Date\tWed, 10 Apr 2013 14:26:22 -0400 While running the high_systime workload of the AIM7 benchmark on a 2-socket 12-core Westmere x86-64 machine running 3.8.2 kernel, it was found that a pretty sizable amount of time was spent in the SELinux code. Below was the perf trace of the \"perf record -a -s\" of a test run at 1500 users: 3.96% ls [kernel.kallsyms] [k] ebitmap_get_bit 1.44% ls [kernel.kallsyms] [k] mls_level_isvalid 1.33% ls [kernel.kallsyms] [k] find_next_bit The ebitmap_get_bit() was the hottest function in the perf-report output. Both the ebitmap_get_bit() and find_next_bit() functions were, in fact, called by mls_level_isvalid(). As a result, the mls_level_isvalid() call consumed 6.73% of the total CPU time of all the 24 virtual CPUs which is quite a lot. Looking at the mls_level_isvalid() function, it is checking to see if all the bits set in one of the ebitmap structure are also set in another one as well as the highest set bit is no bigger than the one specified by the given policydb data structure. It is doing it in a bit-by-bit manner. So if the ebitmap structure has many bits set, the iteration loop will be done many times. The current code can be rewritten to use a similar algorithm as the ebitmap_contains() function with an additional check for the highest set bit. With that change, the perf trace showed that the used CPU time drop down to just 0.09% of the total which is about 100X less than before. 0.04% ls [kernel.kallsyms] [k] ebitmap_get_bit 0.04% ls [kernel.kallsyms] [k] mls_level_isvalid 0.01% ls [kernel.kallsyms] [k] find_next_bit Actually, the remaining ebitmap_get_bit() and find_next_bit() function calls are made by other kernel routines as the new mls_level_isvalid() function will not call them anymore. This patch also improves the high_systime AIM7 benchmark result, though the improvement is not as impressive as is suggested by the reduction in CPU time. The table below shows the performance change on the 2-socket x86-64 system mentioned above. +--------------+---------------+----------------+-----------------+ | Workload | mean % change | mean % change | mean % change | | | 10-100 users | 200-1000 users | 1100-2000 users | +--------------+---------------+----------------+-----------------+ | high_systime | +0.2% | +1.1% | +2.4% | +--------------+---------------+----------------+-----------------+", "target": 1}
{"idx": 3290, "commit_message": "Bug#17400029 - UNDERCOUNTED FREE OPERATIONS IN PERFORMANCE SCHEMA MEMORY INSTRUMENTATION In the performance schema memory instrumentation, statistics collected about FREE operations could be under evaluated, leading to a perceived memory leak. The problem was that when a memory block was counted as allocated, it was not always counted as freed, because the logic in memory_free_v1() and memory_realloc_v1() tested too many flags. During a call to memory_free_v1(), the value of: - the global_instrumentation consumer - the thread_instrumentation consumer - the thread.INSTRUMENTED column - the presence of thread instrumentation may affect *how* free statistics are collected, for example accounted per thread or globally, but it should not affect *if* free statistics are collected. With this fix, there is no logic affecting *if* free statistics are collected. Every memory block counted as allocated is then counted as freed, when invoking the instrumentation for the free (or realloc) operation. This fix also renamed the memory instrumentation apis.", "target": 0}
{"idx": 4064, "commit_message": "[PATCH] replaced std::endl with \\n in all file IO and stringstreams. \n std::endl forces a flush, which kills performance on some machines\n\ngit-svn-id: file:///Users/petejw/Documents/libmesh_svn_bak@834 434f946d-2f3d-0410-ba4c-cb9f52fb0dbf", "target": 1}
{"idx": 3267, "commit_message": "rawlog: Buffer writing to rawlog files to improve performance.", "target": 1}
{"idx": 2590, "commit_message": "net: mvneta: set real interrupt per packet for tx_done [ Upstream commit 06708f81528725148473c0869d6af5f809c6824b ] Commit aebea2ba0f74 (\"net: mvneta: fix Tx interrupt delay\") intended to set coalescing threshold to a value guaranteeing interrupt generation per each sent packet, so that buffers can be released with no delay. In fact setting threshold to '1' was wrong, because it causes interrupt every two packets. According to the documentation a reason behind it is following - interrupt occurs once sent buffers counter reaches a value, which is higher than one specified in MVNETA_TXQ_SIZE_REG(q). This behavior was confirmed during tests. Also when testing the SoC working as a NAS device, better performance was observed with int-per-packet, as it strongly depends on the fact that all transmitted packets are released immediately. This commit enables NETA controller work in interrupt per sent packet mode by setting coalescing threshold to 0.", "target": 1}
{"idx": 3965, "commit_message": "Cleanup of testForward. Updated pendulum test so that it has a real comparison with tight tolerance. Added simple test cases for the application of external loads by the ForwardTool. Removed all the generated results from the repository and renamed standard solution with prefix std_. Simulations updated to use recommended integration settings that take advantage of the variable step integration for better performance. Earlier settings were forcing the integration to artificially small time steps and were taking excruciatingly long to run, even in release. Ground reaction force file was also trimmed since padding and splining 9009 rows was taking a bulk of simulation time.", "target": 1}
{"idx": 1718, "commit_message": "Myaso::PiTable is representing the dynamic programming table", "target": 0}
{"idx": 2294, "commit_message": "[Java] Use Agrona DirectBuffer.getStringWithoutLengthAscii to Appendable for more efficient reading of var data and return length copied.", "target": 1}
{"idx": 3125, "commit_message": "denoiseprofile: variance computation This implementation is not parallel, is recursive, and is only for CPU. This is not ideal as far as performance are concerned, but it runs already fast enough, and we should keep this implementation as simple as possible because it will be used to check that profiles are correct, thus we need this implementation to be as reliable as possible. The variance is computed in 2 steps: the mean is estimated first, then the variance. As both mean and variance are estimated from the data, the denominator for variance is the number of pixel minus 1 (unbiased variance estimator). Both of them are computed using a divide-and-conqueer approach. The goal is to sum quantities that are of the same order, to avoid problems due to float limited precision.", "target": 0}
{"idx": 742, "commit_message": "New API - improve SimpleUser.hangup (and fix a bug)", "target": 0}
{"idx": 2995, "commit_message": "proto_sched: minor performance improvements", "target": 1}
{"idx": 692, "commit_message": "Improve Circularize Action", "target": 0}
{"idx": 2268, "commit_message": "improve shutdown performance of ui", "target": 1}
{"idx": 838, "commit_message": "nfs: fix pnfs direct write memory leak commit 8c393f9a721c30a030049a680e1bf896669bb279 upstream. For pNFS direct writes, layout driver may dynamically allocate ds_cinfo.buckets. So we need to take care to free them when freeing dreq. Ideally this needs to be done inside layout driver where ds_cinfo.buckets are allocated. But buckets are attached to dreq and reused across LD IO iterations. So I feel it's OK to free them in the generic layer.", "target": 0}
{"idx": 636, "commit_message": "kmsvp8parse: fix GstCaps memory leaks", "target": 0}
{"idx": 1332, "commit_message": "TComPicYuv: remove getLumaFilterBlock() methods, generate on demand in subpel TEncSearch::xPatternRefinement() is only used for bidir refinement, and is on the short-list to be removed once bidir is optimized.", "target": 0}
{"idx": 2607, "commit_message": "EasiOS v0.3.2 Eelphant: * Add window controlling keys * Add date * Add terminal window * Add kshell * Add VBoxVideo driver * Add EHCI driver stub * Kernel now sets up virtual text-mode buffer when the system is in graphical mode * Add DMA stub * Improve stdlibs * Bug fixes and performance improvements * Removed Herobrine", "target": 1}
{"idx": 1283, "commit_message": "Trainers: improve python bindings (#151)", "target": 0}
{"idx": 1594, "commit_message": "Figured out how to dynamically adjust memory width", "target": 0}
{"idx": 3157, "commit_message": "Added Debian support. This code is another roar-ng port, but convert_package_list is much more efficient.", "target": 1}
{"idx": 631, "commit_message": "Minor improvements in asciidoc for version 5.2.0", "target": 0}
{"idx": 3447, "commit_message": "More efficient init script.", "target": 1}
{"idx": 1931, "commit_message": "Reactivate \"get spell check list when we right click\" Work just with aspell for the moment. API must be improved", "target": 0}
{"idx": 4078, "commit_message": "[PATCH] Replacing dynamic_cast with libmesh_cast where appropriate -\n depending on the error checking replaced this will either lead to slightly\n more efficient NDEBUG runs or slightly more run-time checking in debug mode\n runs.", "target": 1}
{"idx": 1763, "commit_message": "libvortex-1.1: * [fix] More fixings (memory leaks fixes) due to vortex sequencer rewrite.", "target": 0}
{"idx": 505, "commit_message": "Use stdin reflist passing in parse-remote Use the new stdin reflist passing mechanism for the call to fetch--tool expand-refs-wildcard, allowing passing of more than ~128K of reflist data.", "target": 0}
{"idx": 1336, "commit_message": "Linux sandbox: Restrict sched_* syscalls on the GPU and ppapi processes. BUG=399473,413855 Review URL: [URL]/598203004", "target": 0}
{"idx": 2847, "commit_message": "More work on the change events feature for AtomHashMap and Ref [URL]/louthy/language-ext/issues/923 A more optimised version for the STM, which continues to use TrieMap internally, but is able to publish the changes once the transaction has successfully completed. More refinement of the TrackingHashMap, which is primarily used by Swap in the AtomHashMap. - New Snapshot method that creates a 'zero changes' state, which is the start of a new snapshot of changes - Improved documentation Created a new HashMapPatch which carries the details of a change to an AtomHashMap. It works from the PoV that some of the values won't ever be accessed, and so doesn't do the work of wrapping them up until needed. This will have some marginal performance gains. Refactored the Change union, to only have four cases: - EntryAdded - EntryMapped - EntryRemoved - NoChange Added extra Rx observable extensions: - OnChange for Ref, it just streams the latest value - OnChange for AtomHashMap, it streams the (last HashMap snapshot, the current HashMap snapshot, and the HashMap of changes) - OnMapChange for AtomHashMap, it just exposes the latest HashMap snapshot - OnEntryChange for AtomHashMap, it just exposes the (key, value) changes", "target": 1}
{"idx": 2979, "commit_message": "Improve multiply performance The main idea here is to do as much as possible with slices, instead of allocating new BigUints (= heap allocations). Current performance: multiply_0:\t10,507 ns/iter (+/- 987) multiply_1:\t2,788,734 ns/iter (+/- 100,079) multiply_2:\t69,923,515 ns/iter (+/- 4,550,902) After this patch, we get: multiply_0:\t364 ns/iter (+/- 62) multiply_1:\t34,085 ns/iter (+/- 1,179) multiply_2:\t3,753,883 ns/iter (+/- 46,876)", "target": 1}
{"idx": 3362, "commit_message": "major performance optimizations", "target": 1}
{"idx": 469, "commit_message": "Merge pull request #100 from vistadataproject/refactorGMVParam Optimize code", "target": 0}
{"idx": 3147, "commit_message": "Implemented more efficient Buchberger algorithm The new algorithm returns the same type of basis as the old algorithm ie. reduced, unique and with monic generators; however the internal design is completely different. First of all new algorithm uses sets, rather than lists. Secondly, before the main loop and after each reduction to zero of an s-poly, the basis is reduced and the critical pairs set is regenerated. The algorithm could be improved by using sugar flavor pair selection strategy rather than minimum LCM strategy. Here is an initeresting example taken from \"One sugar cube please\": (caching: on) In [1]: u,t = symbols('ut') In [2]: f = 35*y**4 - 30*x*y**2 - 210*y**2*z + 3*x**2 + 30*x*z - 105*z**2 + 140*y*t - 21*u In [3]: g = 5*x*y**3 - 140*y**3*z + 45*x*y*z - 420*y*z**2 + 210*y**2*t - 25*x*t + 70*z*t + 126*y*u (first pass) In [4]: time G = poly_groebner((f, g), x,y,z,t,u, order='grevlex') CPU times: user 8.36 s, sys: 0.08 s, total: 8.45 s Wall time: 8.57 s In [6]: time GG = groebner([f, g], [x,y,z,t,u], order='grevlex') CPU times: user 36.63 s, sys: 0.34 s, total: 36.96 s Wall time: 37.54 s (second pass) In [8]: time G = poly_groebner((f, g), x,y,z,t,u, order='grevlex') CPU times: user 0.92 s, sys: 0.01 s, total: 0.93 s Wall time: 0.94 s In [10]: time GG = groebner([f, g], [x,y,z,t,u], order='grevlex') CPU times: user 6.29 s, sys: 0.04 s, total: 6.33 s Wall time: 6.41 s (caching: off) In [1]: u,t = symbols('ut') In [2]: f = 35*y**4 - 30*x*y**2 - 210*y**2*z + 3*x**2 + 30*x*z - 105*z**2 + 140*y*t - 21*u In [3]: g = 5*x*y**3 - 140*y**3*z + 45*x*y*z - 420*y*z**2 + 210*y**2*t - 25*x*t + 70*z*t + 126*y*u In [4]: time G = poly_groebner((f, g), x,y,z,t,u, order='grevlex') CPU times: user 9.88 s, sys: 0.12 s, total: 10.01 s Wall time: 10.19 s", "target": 1}
{"idx": 1495, "commit_message": "Merge branch 'android-omap-3.0' into p-android-omap-3.0 * android-omap-3.0: (434 commits) net: wireless: bcmdhd: Fix private command output netfitler: xt_qtaguid: add another missing spin_unlock. Linux 3.0-rc6 RDMA: Check for NULL mode in .devnode methods AT91: Change nand buswidth logic to match hardware default configuration vesafb: fix memory leak [SCSI] isci: fix checkpatch errors hwmon: (k10temp) Update documentation for Fam12h hwmon-vid: Fix typo in VIA CPU name hwmon: (f71882fg) Add support for the F71869A hwmon: Use <> rather than () around my e-mail address hwmon: (emc6w201) Properly handle all errors isci: Device reset should request sas_phy_reset(phy, true) isci: pare back error messsages isci: cleanup silicon revision detection isci: merge scu_unsolicited_frame.h into unsolicited_frame_control.h isci: merge sata.[ch] into request.c isci: kill 'get/set' macros isci: retire scic_sds_ and scic_ prefixes isci: unify isci_host and scic_sds_controller ...", "target": 0}
{"idx": 3865, "commit_message": "Merge pull request #265 from manuyavuz/improvement/searchPerformance Changes search index format to fasten `pod search --full` command.", "target": 1}
{"idx": 823, "commit_message": "Fixed a couple memory leaks. Yay build and analyze.", "target": 0}
{"idx": 3016, "commit_message": "Improve performance of factoriser on large numbers. Monotone-Parent: 5375b0fd2bc8cbb4add15df94aa9d66d730d03cf Monotone-Revision: 64ebd6a94f36fb9decfb3b24e521d60b90cb01de", "target": 1}
{"idx": 4112, "commit_message": "[PATCH] Sparse refactored readers: disable filtered buffer tile\n cache. (#2702)\n\nFrom tests, it's been found that writing the cache for the filter\npipeline takes a significant amount of time for the tile unfiltering\noperation. For example, 2.25 seconds with and 1.88 seconds without in\nsome cases. The cache improved performance before multi-range subarrays\nwere implemented, so dropping it is fine at least for the refactored\nreaders.", "target": 1}
{"idx": 528, "commit_message": "New topologic algorithm.", "target": 0}
{"idx": 1707, "commit_message": "chore(Errors): Slighly improve descriptions of errors shown to the users (#2293)", "target": 0}
{"idx": 3224, "commit_message": "Merge branch 'seqbatch' I've tested this, and it gets the same accuracy as the \"thorough\" randomization, but with better performance on EC2 due to fewer random accesses (that is, through the memmap)", "target": 1}
{"idx": 1864, "commit_message": "Make VectorConstantCoefficient::GetVec const (const-correctness)", "target": 0}
{"idx": 2282, "commit_message": "mm: disable zone_reclaim_mode by default When it was introduced, zone_reclaim_mode made sense as NUMA distances punished and workloads were generally partitioned to fit into a NUMA node. NUMA machines are now common but few of the workloads are NUMA-aware and it's routine to see major performance degradation due to zone_reclaim_mode being enabled but relatively few can identify the problem. Those that require zone_reclaim_mode are likely to be able to detect when it needs to be enabled and tune appropriately so lets have a sensible default for the bulk of users. This patch (of 2): zone_reclaim_mode causes processes to prefer reclaiming memory from local node instead of spilling over to other nodes. This made sense initially when NUMA machines were almost exclusively HPC and the workload was partitioned into nodes. The NUMA penalties were sufficiently high to justify reclaiming the memory. On current machines and workloads it is often the case that zone_reclaim_mode destroys performance but not all users know how to detect this. Favour the common case and disable it by default. Users that are sophisticated enough to know they need zone_reclaim_mode will detect it.", "target": 0}
{"idx": 600, "commit_message": "Improve appearance of markers and paths 1. Markers are now yellow points 2. Trajectories are blue lines 3. Scaled models of bodies are as they were 4. Classes have been named more descriptively 5. Much of the code has been split out into .cpp files 5. The colors are set in style.hpp for convenient adjustment The display looks much more elegant now. The earlier markers, which were circles with a small non-zero size, looked clunky. Making them points made busy displays, such as the asteroids scenario, look really nice. The problem with the points was that we no longer got the nice markers gliding over the paths when we advanced time. Making the markers yellow and the paths blue makes the points stand out and now the animation looks quite nice. In a busy scene like the asteroids we get a nicely visible cloud of yellow points over a blue background.", "target": 0}
{"idx": 808, "commit_message": "[CrOS cellular] Update Cellular Network setup UI - Romoved duplicated string - Updated base page UI Screenshots: - [URL]/OZPqyfj - [URL]/OGFU3fI - [URL]/jDLcIQu - [URL]/Lndkx7C - [URL]/ckyhXWunKco7LHP.png Bug: 1093185", "target": 0}
{"idx": 465, "commit_message": "use ephemeral memory for buffers in receive functions Link: [URL]/GeneAssembly/biosal/issues/374", "target": 0}
{"idx": 1229, "commit_message": "main/k8s-pool-management/check-pool-size-after-hours build 64794 adding unclaimed: biweekly-memory", "target": 0}
{"idx": 1483, "commit_message": "8016446: Improve forEach/replaceAll for Map, HashMap, Hashtable, IdentityHashMap, WeakHashMap, TreeMap, ConcurrentMap", "target": 0}
{"idx": 1281, "commit_message": "lib/meta: add a *goret* approach to get the name of random types. may be used to improve deserialization?", "target": 0}
{"idx": 3460, "commit_message": "Merge pull request #2 from luzfcb/patch-1 small improvement in documentation", "target": 0}
{"idx": 3208, "commit_message": "Enable multiprocessor build Project now utilizes the /MP compiler switch to perform parallelized builds. The number of parallel builds performed is determined on a per-machine basis based on available logical CPUs. Long term this will provide the best performance output to code maintainability ratio compared to just enabling precompiled headers. Using my personal machine (8 cores), I got the following timings (Debug configuration): * Normal build : 89 seconds * Multi-processor build : 28 seconds * PCH enabled : 27 seconds Note that the multi-processor build timings can be further reduced with proper dependency management and removal of existing precompiled header file (precompiledHeaders.h). Specific Changes: * Precompiled header support disabled (not compatible with /MP flag). * precompiledHeader.cpp deleted. * Solution File added. * Minimal Rebuild (/Gm) disabled (ignored when /MP is on). precompiledHeaders.h still exists because it contains a ton of inclusions required by lots of files. A second and less trivial cleanup will involve removing the precompiledHeaders.h file and individually correcting and satisfying dependencies in each source file in the code base.", "target": 1}
{"idx": 2781, "commit_message": "* Performance optimization.", "target": 1}
{"idx": 965, "commit_message": "[Linter] Improved detection of xcodebuild errors.", "target": 0}
{"idx": 1265, "commit_message": "P4 to Git Change 1979444 by on 2019/08/07 10:43:18 \tSWDEV-197488 - [Navi10][Navi14][19.30][CopySurfaceRegion] CopySurfaceRegion Failing Multiple Tests \t- Try to optimize the condition for image buffer workaround. The new logic will attempt to validate the custom pitch with the HW requirement and use the backing store only if it doesn't match Affected files ... ... //depot/stg/opencl/drivers/opencl/runtime/device/pal/palmemory.cpp#28 edit ... //depot/stg/opencl/drivers/opencl/runtime/device/pal/palmemory.hpp#14 edit ... //depot/stg/opencl/drivers/opencl/runtime/device/pal/palresource.cpp#76 edit ... //depot/stg/opencl/drivers/opencl/runtime/device/pal/palresource.hpp#29 edit", "target": 0}
{"idx": 1457, "commit_message": "ENH: Add initial cmake build script (#51) CMake allows for more robust introspection of the compilation envirionment, and can provide improved build experience across differnt OS and compiler choices. Additionally, many editors can provide enhanced editing capabilities when configured with cmake. CMakeList.txt includes copyright statement Tries to use file introspection rather than explicit lists Reads files and does substitution rather than provideing config files.", "target": 0}
{"idx": 3529, "commit_message": "Merge commit : jesse | 2005-10-05 10:08:39 -0400 : jesse | 2005-10-05 09:37:42 -0400 (orig r3877): alexmv | 2005-09-22 15:09:22 -0400 : chmrr | 2005-09-22 15:08:37 -0400 * Add where the faulty caller was in deprecated warnings (orig r3892): robert | 2005-09-28 12:16:03 -0400 : rspier | 2005-09-28 09:15:08 -0700 Performance Improvement when Sending Email using sendmailpipe - MIME::Entity would bog down in certain cases because of it's use of IO::Scalar during stringification. MIME::Entity will be switching to IO::ScalarArray, which will help... but RT was causing it to store into a temporary string anyway, which was silly. This change has MIME::Entity write directly to the pipe, which is a lot more efficient. Seems to cut out ~33% of user time. (Because we don't need to have a temporary IO::Scalar thingy around.) Also will reduce peak memory usage. (orig r3893): jesse | 2005-09-28 13:27:29 -0400 Switch from ->CustomFields to ->TicketCustomFields to stop using a deprecated API. Thanks to T.J. Maciak (orig r3894): alexmv | 2005-09-30 15:19:46 -0400 : chmrr | 2005-09-30 15:16:47 -0400 * Remove unused and deprecated code path (bugs 6605, 7008) (orig r3895): alexmv | 2005-09-30 15:19:57 -0400 : chmrr | 2005-09-30 15:18:22 -0400 * Link to the *other* end of the link, not the one that is us (orig r3896): alexmv | 2005-09-30 15:56:31 -0400 : chmrr | 2005-09-30 15:56:06 -0400 RT-Ticket: 7029 RT-Status: resolved RT-Update: correspond * Applied missing limit for AdminCcs, from Todd Chapman (orig r3900): alexmv | 2005-10-03 13:32:45 -0400 : chmrr | 2005-10-03 13:28:24 -0400 * Updated spanish translation, thanks to Carlos Velasco (orig r3901): alexmv | 2005-10-03 14:15:35 -0400 : chmrr | 2005-10-03 14:14:49 -0400 * Header fixes in PO files to include correct RT version", "target": 1}
{"idx": 4019, "commit_message": "[PATCH] Improve the performance of the `Record` logger by using\n deques of `std::unique_ptr` instead of plain object.", "target": 1}
{"idx": 4034, "commit_message": "[PATCH] Removed unnecassary flush of trn,xtc,edr. Important for\n performance for very frequent writes (on small systems) Fixed a bug related\n to setting the duty of pp/pme/io", "target": 1}
{"idx": 2708, "commit_message": "[IMP] base/report, web/report: remove pdf merge with single call to wkhtmltopdf The old behavior was to call wkhtmltopdf for each report and to put all of them together using the merge_pdf method. It was a very slow approach because all report needs 5 temporary files, one subprocess and then, a merge. The new behavior is to perform a single call to wkhtmltopdf by putting all the footers/headers html together and so, avoid call to merge_pdf to greatly improve the performances of the reporting.", "target": 1}
{"idx": 1547, "commit_message": "Split ::optimizer into ::intraopt and ::interopt These two classes don't really have anything to do with each other, so let them live in different namespaces.", "target": 0}
{"idx": 2197, "commit_message": "#172840 by Crell: break up book module to improve performance", "target": 1}
{"idx": 1306, "commit_message": "platform: improve pin definitions", "target": 0}
{"idx": 3132, "commit_message": "mmc/core: disable crc to improve performance", "target": 1}
{"idx": 3617, "commit_message": "Use the fact that the AES-NI instructions can be pipelined to improve performance... Use SSE2 instructions for calculating the XTS tweek factor... Let the compiler do more work and handle register allocation by using intrinsics, now only the key schedule is in assembly... Replace .byte hard coded instructions w/ the proper instructions now that both clang and gcc support them... On my machine, pulling the code to userland I saw performance go from ~150MB/sec to 2GB/sec in XTS mode. GELI on GNOP saw a more modest increase of about 3x due to other system overhead (geom and opencrypto)... These changes allow almost full disk io rate w/ geli... Reviewed by:\t-current, -security Thanks to:\tMike Hamburg for the XTS tweek algorithm", "target": 1}
{"idx": 3521, "commit_message": "improve display_ascii() performance and options", "target": 1}
{"idx": 2346, "commit_message": "Lots more updates to Detector - Creating ngrams, saving in several ways - Function for jaccard coefficient calculation - #check_for_duplicates and #is_duplicate - Fs(x) function - Scaffold for calculating the sketches", "target": 0}
{"idx": 393, "commit_message": "Add EfficientNetV2 XL model defs", "target": 0}
{"idx": 2210, "commit_message": "Performance improvements", "target": 1}
{"idx": 2031, "commit_message": "distgit: allow '|' in patches_ignore regex While the change itself is one character, this patch adds comprehensive new-version feature test to ensure patches_ignore keeps working as expected. It also makes sure rdopkg always displays patches_ignore during patches update. With colors o/ Feature tests improvements: * allow creation of sample .spec with magic comments * allow creation of sample patches branch with arbitraty patch names * allow searching for specific strings in .spec * remove obsolete test step * better branch juggling with 'with git_branch(x):' Fixes: [URL]/softwarefactory-project/rdopkg/issues/153", "target": 0}
{"idx": 1476, "commit_message": "Fix and improve Sponge Getter filter support. What I made in the first PR (#488) was quite broken (hopefully it did not make it to stable). No exceptions are involved, just completion contribution to all annotations of the listeners parameters and a bad inspection that wasn't aware of generic types other than Optionals. The \"Wrong parameter type\" quickfix has been improved and now handles generics. Completion has been a bit improved (not perfect but better for sure) and now suggests a parameter name based on the method name the getter targets. I also tried to clean up some places in my previous code, but I'm sure there is more room for improvements.", "target": 0}
{"idx": 613, "commit_message": "initial algorithm for lab5, not working yet, debugging to be done", "target": 0}
{"idx": 1610, "commit_message": "DISTX-539 Fix for OPSAPS-56088 now auto tunes lot of machine dependent configs. Hence there is no need to provide static values to some of these configs in cluster templates. Also value of some of the configs are sub-optimal. Thus removing following configs from 7.2.2, 7.2.6, 7.2.7 DE templates: hive.tez.container.size mapreduce.map.memory.mb mapreduce.reduce.memory.mb", "target": 0}
{"idx": 885, "commit_message": "Fix Network updating, improve docs. (#204)", "target": 0}
{"idx": 3365, "commit_message": "LU-6138 lfsck: NOT hold reference on pre-loaded object To improve the LFSCK performance, the LFSCK main engine will pre-load the object locally or remotely, then generate related LFSCK request that reference the pre-loaded object, and then push the request into related LFSCK pipeline. The LFSCK assistant thread will handle the LFSCK request some later asynchronously. Originally, the LFSCK request holds the pre-loaded object reference, so the assistant thread can handle it directly without locating the object by FID again. But holding the object reference will cause the object cannot be purged out RAM. If some LFSCK request has held the object, and some other unlinked the object before the LFSCK assistant thread handling the LFSCK request, then the unlinked object will be cached in RAM until the last reference released. Because the LFSCK main engine and assistant thread run asynchronously, we do not know when the LFSCK request that holding the object reference will be handled. If the assistant thread needs to locate the object with the same FID before that, it will fall into self-deadlock for ever.", "target": 1}
{"idx": 1436, "commit_message": "Updated Lightest Algorithm to remove need for loading levels because apparently we do not need to use them.", "target": 0}
{"idx": 2019, "commit_message": "dm kcopyd: avoid queue shuffle commit b673c3a8192e28f13e2050a4b82c1986be92cc15 upstream Write throughput to LVM snapshot origin volume is an order of magnitude slower than those to LV without snapshots or snapshot target volumes, especially in the case of sequential writes with O_SYNC on. The following patch originally written by Kevin Jamieson and Jan Blunck and slightly modified for the current RCs by myself tries to improve the performance by modifying the behaviour of kcopyd, so that it pushes back an I/O job to the head of the job queue instead of the tail as process_jobs() currently does when it has to wait for free pages. This way, write requests aren't shuffled to cause extra seeks. I tested the patch against 2.6.27-rc5 and got the following results. The test is a dd command writing to snapshot origin followed by fsync to the file just created/updated. A couple of filesystem benchmarks gave me similar results in case of sequential writes, while random writes didn't suffer much. dd if=/dev/zero of=<somewhere on snapshot origin> bs=4096 count=... [conv=notrunc when updating] 1) linux 2.6.27-rc5 without the patch, write to snapshot origin, average throughput (MB/s) 10M 100M 1000M create,dd 511.46 610.72 11.81 create,dd+fsync 7.10 6.77 8.13 update,dd 431.63 917.41 12.75 update,dd+fsync 7.79 7.43 8.12 compared with write throughput to LV without any snapshots, all dd+fsync and 1000 MiB writes perform very poorly. 10M 100M 1000M create,dd 555.03 608.98 123.29 create,dd+fsync 114.27 72.78 76.65 update,dd 152.34 1267.27 124.04 update,dd+fsync 130.56 77.81 77.84 2) linux 2.6.27-rc5 with the patch, write to snapshot origin, average throughput (MB/s) 10M 100M 1000M create,dd 537.06 589.44 46.21 create,dd+fsync 31.63 29.19 29.23 update,dd 487.59 897.65 37.76 update,dd+fsync 34.12 30.07 26.85 Although still not on par with plain LV performance - cannot be avoided because it's copy on write anyway - this simple patch successfully improves throughtput of dd+fsync while not affecting the rest.", "target": 1}
{"idx": 1037, "commit_message": "toolchain: gcc 4.5: PR tree-optimization/42906 fix", "target": 0}
{"idx": 568, "commit_message": "Revert \"seq_file: Allocate memory using vmalloc when kmalloc failed\" This reverts commit 13bad1e17a66483664eaa338650eaebb9e4347b4.", "target": 0}
{"idx": 3671, "commit_message": "Make VarInt write a byte[] instead of individual bytes. It's often more efficient to write chunks of bytes than to write individual bytes. For instance, this makes encoding geometries to a ByteArrayOutputStream 300% faster.", "target": 1}
{"idx": 1663, "commit_message": "rcu: Remove comment referring to __stop_machine() Although it used to be that CPU_DYING notifiers executed on the outgoing CPU with interrupts disabled and with all other CPUs spinning, this is no longer the case. This commit therefore removes this obsolete comment.", "target": 0}
{"idx": 2551, "commit_message": "improve select performance", "target": 1}
{"idx": 2505, "commit_message": "improved toBytes & writeJSONString performance", "target": 1}
{"idx": 2541, "commit_message": "Made Tree System more efficient, mesh combining and stuff", "target": 1}
{"idx": 3652, "commit_message": "Integration of -ng changes. Changes: - added support for externally built modules, - improved support for in-tree shared modules, - fixed diversion bugs, - configure displays some informative messages, - faster static build (libtool isn't used anymore for compiling non-PIC objects), - dependencies comparable to automake's without requiring GNU make or GCC, - working make clean for non-GNU makes.", "target": 0}
{"idx": 1334, "commit_message": "Fix performance issue with ESLint (#3586)", "target": 1}
{"idx": 4131, "commit_message": "[PATCH] Improved performance of Python State objects", "target": 1}
{"idx": 275, "commit_message": "Small build improvements. Put the extra sample apps under a project-specific subdir, don't set a cache variable that's not needed, and install more of the JSON schemas. Fixes #173.", "target": 0}
{"idx": 3577, "commit_message": "Commited patch \"Generalised improvement name generation\" (PR#1118). Patch by Ben Webb < > with style changes by me.", "target": 0}
{"idx": 1400, "commit_message": "Merge pull request #30 from madflojo/network-facts Adding curl to Dockerfile", "target": 0}
{"idx": 681, "commit_message": "default .nnetwork files are copied to bin", "target": 0}
{"idx": 4030, "commit_message": "[PATCH] more efficient comparison function", "target": 1}
{"idx": 2291, "commit_message": "Multiple changes to Nanite::Serializer, specs, and gitignore. - Ignore automatically created config.yml files. - Ignore TextMate project file. - Added specs for Nanite::Serializer. - Made SERIALIZERS constant private. - Freeze SERIALIZERS constant. - Renamed Nanite::Serializer initialized to more appropiate 'preferred_format' and default it to 'marshal'. - Ignore 'preferred_format' if is not in predefined list. - Ran code through formatter. Also did some benchmarking on 'serializer.send(action, packet)'. Replacing it with simple 'case' statement on action did not yield any significant performance improvements. Left it as is.", "target": 1}
{"idx": 247, "commit_message": "Merge pull request #1 from manwar/minor-improvements Minor improvements", "target": 0}
{"idx": 485, "commit_message": "minor optimizations", "target": 0}
{"idx": 2559, "commit_message": "Rollback commit r523088: (Harmony-3065, TLS inlining to improve performance build, build test pass on win32, RHEL4.0 32-bit w/ gcc4.0.2) DRL VM fail on Linux x86_64 with: java/lang/UnsatisfiedLinkError : Failed loading library \"libhytext.so\": DSO load failed Will reopen JIRA issue svn path=/harmony/; revision=523162", "target": 0}
{"idx": 2820, "commit_message": "Changes for 10.9 support - Detect a 10.9 ESD and make the new ISO out of BaseSystem.dmg by converting to RW, growing it and installing Extras there - It's very possible there is a more efficient method to do this - Xcode CLI tools script uses install-on-demand trigger file and a bunch of shell pipes to install from SUS", "target": 0}
{"idx": 2542, "commit_message": "Meetings iframe and iframe URL (#8096) * Make questions publicable * Introduce polls in meetings model and admin management * Revert \"Make questions publicable\" This reverts commit b8b38c80840aff7282eb6749244663c96ecd406e. * Create specific question/answer models for meetings * Live event view * Manage questions in the live event page * List questions to users * Questions polling * Answer questions * Remove animations for better UI when polling * Show answers results * Export answers in the admin * Refactor and clear Ruby offenses * Normalize translations * JS offenses * ERB offenses * Force factories to create published meetings * Increase capybara wait time for ajax requests * Adjust markup to fix styles * ERB offense * Fix bad merge * Performance: remove N+1 and optimize answer options results calculation * Fix param assignment * Show validation error * Deal with nils * Fix translation", "target": 1}
{"idx": 2797, "commit_message": "special handler for fastpath in CGC transmit syscall to avoid memory issue.", "target": 0}
{"idx": 2958, "commit_message": "WARNING : incompatibility in this revision. Tags now all start with \"@\". You have to manually change them in your projects xml files and in tags.xml. If you don't, you will loose color and you will have to open manually each task to update the tags. This is a huge factorization. The projects object are completely killed. Backend are completely different. All should go through the requester now. The only new feature is now that the localfile backend support backup. (with a number configurable) This factorization also implies a huge performance improvment for the taskbrowser. This factorisation is not complete : - backend should be detected. There is currently a manual construction of localfiles (see main.py) It should be noted that we have a bit less code so it means it was a successful refactorization.", "target": 1}
{"idx": 3551, "commit_message": "SketchView implemented for performance optimization.", "target": 1}
{"idx": 3249, "commit_message": "Extended handling of .rlu files, improved z3 interface, fixed rule-filter bug main.cc: Added patch provided by Florian for improving handling of .rlu files. Previously, if the unit is in the current directory the code in main.cc would not pick up the global user rule (if any). smtlib2-driver.cc: Added handling of warning messages in z3 output rule-filter.cc: Fixed bug introduced when did some minor refactoring on previously working code.", "target": 0}
{"idx": 3197, "commit_message": "- Improved performance by compiling hex regex and by using .at instead of .loc", "target": 1}
{"idx": 3333, "commit_message": "New version of the patch caching styles in a QHash for better performances when loading an ODT file... svn path=/trunk/koffice/; revision=668241", "target": 1}
{"idx": 3248, "commit_message": "added more efficient gridmap based off of particle filters", "target": 1}
{"idx": 3302, "commit_message": "performance optimization: use files mapping when fetching associations", "target": 1}
{"idx": 3010, "commit_message": "sse_pp32: Improved performance for all versions above 1x", "target": 1}
{"idx": 2247, "commit_message": "- Bugfix: renamed the SQL field 'types' to 'nodes' because 'types' is a reserved keyword in MySQL 4. This fixes critical bug #1618. Patch by Marco. ==> This fix requires to run update.php! - Bugfix: made sessions work without warnings when register_globals is turned off. The solution is to use $_SESSION instead of session_register(). This fixes critical bug #1797. Patch by Marco. - Bugfix: sometimes error messages where being discarded when previewing a node. Patch by Craig Courtney. - Bugfix: fixed charset problems. This fixes critical bug #1549. Patch '0023.charset.patch' by Al. - Code improvements: removed some dead code from the comment module. Patch by Marco. - Documentation improvements: polished the node module help texts and form descriptions. Patch '0019.node.module.help.patch' by Al. - CSS improvements all over the map! Patch '0021.more.css.patch' by Al. - GUI improvements: improved the position of Druplicon in the admin menu. Patch '0020.admin.logo.patch' by Al. - GUI improvements: new logos for theme Marvin and theme UnConeD. Logos by Kristjan Jansen. - GUI improvements: small changes to the output emitted by the profile module. Suggestions by Steven Wittens. - GUI improvements: small fixes to Xtemplate. Patch '0022.xtemplate.css.patch' by Al. TODO: - Some modules such as the buddy list module and the annotation module in the contributions repository are also using session_register(). They should be updated. We should setup a task on Drupal. - There is code emitting '<div align=\"right\">' which doesn't validate. - Does our XML feeds validate with the charset changes? - The forum module's SQL doesn't work properly on PostgreSQL.", "target": 0}
{"idx": 2582, "commit_message": "[go/perfbot-sheriff] Make flakiness dashboard link better. Since the most common location for a sheriff to go is to the performance_test_suite, we should link to that.", "target": 0}
{"idx": 3150, "commit_message": "Memory usage and performance improvements on large key bases. Rerefences #10", "target": 1}
{"idx": 1797, "commit_message": "Added network sequence diagrams basics using electron for headless rendering", "target": 0}
{"idx": 1517, "commit_message": "enabled paging; mapped boot.s at same virtual address as physical address; mapped video memory (not working yet)", "target": 0}
{"idx": 822, "commit_message": "gpu: ipu-v3: Add VDI input IDMAC channels Adds the VDIC field input IDMAC channels. These channels transfer fields F(n-1), F(n), and F(N+1) from memory to the VDIC (channels 8, 9, 10 respectively).", "target": 0}
{"idx": 2690, "commit_message": "SOLR-2950: Improve QEC performance by dropping field cache use and keeping a local smaller map", "target": 1}
{"idx": 402, "commit_message": "Workload Consolidation: consolidate RT tasks to fewer capable CPUs Shield the CPUs in the RT scheduler when non-shielded CPUs can handle the workload.", "target": 0}
{"idx": 3732, "commit_message": "Convert SDL_malloc to SDL_calloc if appropriate, slightly faster on operating systems which map the zero page for memory allocations. OpenGL renderer in progress --HG-- extra : convert_revision : svn%3Ac70aab31-4412-0410-b14c-859654838e24/trunk%401965", "target": 1}
{"idx": 3141, "commit_message": "357 fixed performance issue. 4.27s for now", "target": 1}
{"idx": 2845, "commit_message": "make the decorator a little more efficient", "target": 1}
{"idx": 4077, "commit_message": "[PATCH] Better performance (~10-15% better)\n\nBy removing several tests (and use CGAL::max instead), the generated\nassembly is more efficient.", "target": 1}
{"idx": 2263, "commit_message": "use getrandbits instead of randrange in frand, which is much more efficient", "target": 1}
{"idx": 2422, "commit_message": "small performance optimizations 02", "target": 1}
{"idx": 2917, "commit_message": "Fixed a HUGE performance flaw in the sph_model visibility test. Was O(n) instead of O(log n) as it should be. SO much smoother now.", "target": 1}
{"idx": 3512, "commit_message": "Improving UI performance", "target": 1}
{"idx": 3058, "commit_message": "[reclient] Set up 3 comparison builders with remote links Adds 3 comparison builders: - Comparison Linux (reclient vs reclient remote links)(small) - Comparison Linux (reclient vs reclient remote links)(medium) - Comparison Linux (reclient vs reclient remote links)(large) which will each target different pools to compare the performance of chromium builds with remote links enabled against n8, n16, and n32 machines. Pool targeting is controlled using the rbe_link_cfg_file gn arg. Bug: b/259252150", "target": 0}
{"idx": 881, "commit_message": "Optimize mpfr_cos. From 3363 / 21663.99 / 79727 to 3139 / 18920.58 / 69624 (opteron). [From SVN r3152 (trunk)]", "target": 0}
{"idx": 2558, "commit_message": "mingw: do not special-case .exe files anymore Since baaf233 (connect: improve check for plink to reduce false positives, 2015-04-26), t5601 writes out a `plink.exe` for testing that is actually a shell script. So the assumption that the `.exe` extension implies that the file is *not* a shell script is now wrong. The original idea to special-case `.exe` files was probably to help performance, but since we are in a code path that involves spawning a new process (which in and of itself is pretty slow on Windows anyway), we pursue a better idea to improve performance elsewhere: we try to convert scripts into builtins and to reduce the number of spawned processes by adding more internal API calls.", "target": 1}
{"idx": 588, "commit_message": "try not to reoptimize the same images every time", "target": 0}
{"idx": 1186, "commit_message": "Improvements", "target": 0}
{"idx": 3286, "commit_message": "readahead: fix pipeline break caused by block plug commit 3deaa7190a8da38453c4fabd9dec7f66d17fff67 upstream. Herbert Poetzl reported a performance regression since 2.6.39. The test is a simple dd read, but with big block size. The reason is: T1: ra (A, A+128k), (A+128k, A+256k) T2: lock_page for page A, submit the 256k T3: hit page A+128K, ra (A+256k, A+384). the range isn't submitted because of plug and there isn't any lock_page till we hit page A+256k because all pages from A to A+256k is in memory T4: hit page A+256k, ra (A+384, A+ 512). Because of plug, the range isn't submitted again. T5: lock_page A+256k, so (A+256k, A+512k) will be submitted. The task is waitting for (A+256k, A+512k) finish. There is no request to disk in T3 and T4, so readahead pipeline breaks. We really don't need block plug for generic_file_aio_read() for buffered I/O. The readahead already has plug and has fine grained control when I/O should be submitted. Deleting plug for buffered I/O fixes the regression. One side effect is plug makes the request size 256k, the size is 128k without it. This is because default ra size is 128k and not a reason we need plug here. Vivek said: : We submit some readahead IO to device request queue but because of nested : plug, queue never gets unplugged. When read logic reaches a page which is : not in page cache, it waits for page to be read from the disk : (lock_page_killable()) and that time we flush the plug list. : : So effectively read ahead logic is kind of broken in parts because of : nested plugging. Removing top level plug (generic_file_aio_read()) for : buffered reads, will allow unplugging queue earlier for readahead.", "target": 1}
{"idx": 3053, "commit_message": "more efficient PopulateLocaleCols and related functions", "target": 1}
{"idx": 3349, "commit_message": "f2fs: readahead multi pages of directory for performance We have no so such readahead mechanism in ->iterate() path as the one in ->read() path, it cause low performance when we read large directory. This patch add readahead in f2fs_readdir() for better performance.", "target": 1}
{"idx": 380, "commit_message": "Added missing platform dlls. Refractoring and bugfixes. Added SteamVR tab: Supersampling and reprojection settings and restart button. Added Statistics tab: rotation counter, performance statistics and others. Updated Move Playspace tab: Rotation and Seated mode support.", "target": 0}
{"idx": 2285, "commit_message": "crypto: msm: add dynamic engine assignment support This patch provides dynamic engine assignment for better performances. A platform may configure to support multiple crypto engines. Crypto engine assignment to a tranformer(tfm) can be dynamic. Engine assignment is deferred until a request of a tfm is served. In contrary, for static assignment, a crypto engine is statically assigned to a transformer. This patch supports both schemes. A transformer can issue multiple asynchronous requests in parallel. In case of static assignment, requests of the same tfm are served in sequence by the same engine. In case of dynamic assignment, requests can be issued in parallel to different hardware engines. There is a requirement as such, \"for any tfm, ablkcipher, aead, or ahash, they must return results in the order they were given.\" In case of dynamic assignment, the order of completion from different hardware engines may not be in the same order as requests issued. Driver needs to re-sequence the order of response for a tfm to meet the requirement.", "target": 1}
{"idx": 3256, "commit_message": "Tune performance test parameters. BUG=413249 [URL] NOTRY=true Review URL: [URL]/661403003", "target": 0}
{"idx": 3924, "commit_message": "MATH: More efficient matrix multiplication implementation for 3x3 and 4x4 matrix specializations.", "target": 1}
{"idx": 2747, "commit_message": "SceneInspector : Improved performance. Using the LazyMethod decorator on the DiffColumn means that most sections defer their update until they are opened - this is particularly important for the Sets and Sets Membership sections, since they are less frequently used but more expensive than most. Fixes #1050.", "target": 1}
{"idx": 3448, "commit_message": "Merge pull request #58 from 9whirls/nf improve showhistory performance", "target": 1}
{"idx": 3004, "commit_message": "Add back reasoner worse performance but was a requirement :( And add a fix for Jena 2.4", "target": 0}
{"idx": 471, "commit_message": "Test that network_interface is explicitly set on POST/PATCH This patch adds unit tests to ensure that node POST and PATCH requests always end up with the network interface explicitly set, based on the previous patch that changes the node object to always set this attribute.", "target": 0}
{"idx": 2208, "commit_message": "improved performance for tolerant parsing", "target": 1}
{"idx": 2109, "commit_message": "garden-o-matic should not fetch from debug bots if it also knows about the release bots [URL]/show_bug.cgi?id=86916 Reviewed by Adam Barth. This change pushes all of the logic for rebaselining a cluster of failures (a list of tests failing a list of suffixes on a list of bots) onto the server, so there is a single call from the web page; we will then be able to optimize the performance of the rebaselining better. Also remove the 'optimizebaseline' entry point on garden-o-matic (and the client-side call) since we don't need it any more. * [URL]-config/public_html/TestFailures/scripts/checkout.js: * [URL]-config/public_html/TestFailures/scripts/checkout_unittests.js: * Scripts/webkitpy/tool/servers/gardeningserver.py: (GardeningHTTPRequestHandler.rebaselineall): * Scripts/webkitpy/tool/servers/gardeningserver_unittest.py:", "target": 0}
{"idx": 3009, "commit_message": "more performance improvements", "target": 1}
{"idx": 3884, "commit_message": "Removing one more change to EventBinding that proved unnecessary vis-a-vis cross context dispatch improvements", "target": 0}
{"idx": 2567, "commit_message": "JavaScriptCore: Reviewed by Maciej Stachowiak. Fixed [URL]/show_bug.cgi?id=15490 Iteration statements sometimes incorrectly evaluate to the empty value (KDE r670547). [ Broken off from [URL]/show_bug.cgi?id=14868 ] This patch is a merge of KDE r670547, with substantial modification for performance. It fixes do-while statements to evaluate to a value. (They used to evaluate to the empty value in all cases.) It also fixes SourceElementsNode to maintain the value of abnormal completions like \"break\" and \"continue.\" It also re-works the main execution loop in SourceElementsNode so that it (1) makes a little more sense and (2) avoids unnecessary work. This is a .28% speedup on command-line JS iBench. * kjs/nodes.cpp: (DoWhileNode::execute): (SourceElementsNode::execute): LayoutTests: Reviewed by Maciej Stachowiak. Layout tests for [URL]/show_bug.cgi?id=15490 Iteration statements sometimes incorrectly evaluate to the empty value (KDE r670547) * fast/js/do-while-expression-value-expected.txt: Added. * fast/js/do-while-expression-value.html: Added. * fast/js/while-expression-value-expected.txt: Added. * fast/js/while-expression-value.html: Added.", "target": 0}
{"idx": 3055, "commit_message": "Revert \"Add a hack to set shouldPaint to true for force-composited iframes.\" Reason: Exploring as possible cause of compositing/paint time performance regression. Original change description: This works around a FCB chicken-egg problem in which we are unable to properly start painting floating iframes once they stop being composited. [URL]/2009353003 BUG=610906,619710 Review-Url: [URL]/2164813002", "target": 0}
{"idx": 2662, "commit_message": "refactoring work (APIs, build dependency, ... etc) This is a major re-work in rmc with several key changes: To query board-specific data in a rmc database, developers now can take advantage of new defined APIs, either using single-action APIs for a better performance of multiple queries or simply calling a double-action API, which is sufficient to perform a query in one step, to reduce the footprint in code at client side. (Refer to inc/rmc_api.h.) rmc tool itself is modified to use new APIs. Memory map and allocated data can be released after use in user space. In EFI context, rmc doesn't allocate memory for anything returned. In EFI build, necessary data types are defined in rmc in order to get rid of dependency on any EFI implementation. Miscellaneous fixes are also included when we move the pieces around.", "target": 1}
{"idx": 1605, "commit_message": "methods improved", "target": 0}
{"idx": 2183, "commit_message": "a few more tests for the parameter and coefficient clauses and first use of them in a runner. The optimization wrapper is not included yet.", "target": 0}
{"idx": 3506, "commit_message": "quant: remove floating point operations from RDOQ [CHANGES OUTPUTS] The output changes are minor. On modern CPUs the performance benefit of this change is negligable since SSE double operations are similar in performance to int64 operations. As a future optimization, we need to figure out how to multiply lambda2 (FIX8 24bits) by signal cost (FIX15 24bits) using 32-bit integers since 32bit multiply is significantly cheaper than 64bit integer multiply. Similarly, unquantAbsLevel can be larger than 16bits so multiplation is done with int64. Note that we use signed int64 because with psy-rdoq the costs could go negative.", "target": 0}
{"idx": 3730, "commit_message": "[SPARK-5614][SQL] Predicate pushdown through Generate. Now in Catalyst's rules, predicates can not be pushed through \"Generate\" nodes. Further more, partition pruning in HiveTableScan can not be applied on those queries involves \"Generate\". This makes such queries very inefficient. In practice, it finds patterns like ```scala Filter(predicate, Generate(generator, _, _, _, grandChild)) ``` and splits the predicate into 2 parts by referencing the generated column from Generate node or not. And a new Filter will be created for those conjuncts can be pushed beneath Generate node. If nothing left for the original Filter, it will be removed. For example, physical plan for query ```sql select len, bk from s_server lateral view explode(len_arr) len_table as len where len > 5 and day = '20150102'; ``` where 'day' is a partition column in metastore\u0010 is like this in current version of Spark SQL: > Project [len, bk] > > Filter ((len > \"5\") && \"(day = \"20150102\")\") > > Generate explode(len_arr), true, false > > HiveTableScan [bk, len_arr, day], (MetastoreRelation default, s_server, None), None But theoretically the plan should be like this > Project [len, bk] > > Filter (len > \"5\") > > Generate explode(len_arr), true, false > > HiveTableScan [bk, len_arr, day], (MetastoreRelation default, s_server, None), Some(day = \"20150102\") Where partition pruning predicates can be pushed to HiveTableScan nodes. Author: Lu Yan [URL]> Closes #4394 from ianluyan/ppd and squashes the following commits: a67dce9 [Lu Yan] Fix English grammar. 7cea911 [Lu Yan] Revised based on @marmbrus's opinions ffc59fc [Lu Yan] [SPARK-5614][SQL] Predicate pushdown through Generate.", "target": 1}
{"idx": 1148, "commit_message": "Merge branch 'mips64-improvements'", "target": 0}
{"idx": 3998, "commit_message": "[PATCH] Add doSetup parameter to Matrix::init\n\nNot calling doSetup can give you performance gains when using\npreallocation on the matrix.", "target": 1}
{"idx": 3469, "commit_message": "added on-demand bagging to improve performance for resource metadata editing for large resource files", "target": 1}
{"idx": 2741, "commit_message": "Make JavaFX rendering a bit more efficient by providing own NGRegion implementation, and pingponging buffers better", "target": 1}
{"idx": 3081, "commit_message": "Changed object hash list to be a list rather than a tailq. This saves space for the hash list buckets and is a little faster. The features of tailq aren't needed. Increased the size of the object hash table to improve performance. In the future, this will be changed so that the table is sized dynamically.", "target": 1}
{"idx": 3734, "commit_message": "workqueue: reimplement work_on_cpu() using system_wq commit ed48ece27cd3d5ee0354c32bbaec0f3e1d4715c3 upstream. The existing work_on_cpu() implementation is hugely inefficient. It creates a new kthread, execute that single function and then let the kthread die on each invocation. Now that system_wq can handle concurrent executions, there's no advantage of doing this. Reimplement work_on_cpu() using system_wq which makes it simpler and way more efficient. stable: While this isn't a fix in itself, it's needed to fix a workqueue related bug in cpufreq/powernow-k8. AFAICS, this shouldn't break other existing users.", "target": 1}
{"idx": 1358, "commit_message": "Improve tasks queries. - Make get_tasks() always fetch TaskResultSummary directly instead of TaskRequest. Cuts I/O by half and speed up the process. - Make it possible to filter by both tags and state. - Add filters to bot.tasks query. - This is possible due to TaskResultSummary.tags which is a cache of TaskRequest.tags. - Reorder function arguments to make it coherent across a few functions. - Add utility fetch_page() function to take care of cursor encoding. [URL] BUG= Review URL: [URL]/1449893002", "target": 1}
{"idx": 3601, "commit_message": "Performance fixes", "target": 1}
{"idx": 2791, "commit_message": "Add a way to lookup entity IDs by a component ID When an entity is created, adds the component IDs they have to :entities and adds the entity to the :components in the game state. This cleans up the old way of getting entities by a component ID and makes it more intuitive and more efficient (probably). - Fix ecs/mk-component to update the state rather than overwrite it", "target": 1}
{"idx": 3533, "commit_message": "Fix rebuild-chainstate feature and improve its performance Previous refactorings broke the ability to rebuild the chainstate by deleting the chainstate directory, resulting in an incorrect \"Incorrect or no genesis block found\" error message. Fix that. Also, improve the performance of ActivateBestBlockStep by using the skiplist to only discover a few potential blocks to connect at a time, instead of all blocks forever - as we likely bail out after connecting a single one anyway.", "target": 1}
{"idx": 1379, "commit_message": "added SDECCollision.xml - looks really good with timestep 0.003(FIXME - should be inside .xml) and moment law uncommented(FIXME) in SDECDynamicEngine", "target": 0}
{"idx": 2286, "commit_message": "Make replace_masked_values more efficient by using masked_fill (#1651)", "target": 1}
{"idx": 2602, "commit_message": "[PATCH] at76_usb: Improve dump of MAC_ADDR Don't require DEBUG to be defined. Dump each group address separately, next to the status. Rename MIB_MAC_ADD to MIB_MAC_ADDR.", "target": 0}
{"idx": 1344, "commit_message": "further use of sleef to speedup drago", "target": 1}
{"idx": 2519, "commit_message": "Improve the performance to export layers", "target": 1}
{"idx": 372, "commit_message": "memory leak on probabilistic notes (closes #1321)", "target": 0}
{"idx": 656, "commit_message": "improved note output", "target": 0}
{"idx": 2179, "commit_message": "mem: Add a simple adaptive version of the open-page policy This patch adds a basic adaptive version of the open-page policy that guides the decision to keep open or close by looking at the contents of the controller queues. If no row hits are found, and bank conflicts are present, then the row is closed by means of an auto precharge. This is a well-known technique that should improve performance in most use-cases.", "target": 1}
{"idx": 1902, "commit_message": "Merge remote-tracking branch 'origin/EmpDao_improvement' into EmployeeService # Conflicts: #\tsrc/main/java/ru/technoserv/controller/EmployeeController.java #\tsrc/main/java/ru/technoserv/services/EmployeeService.java #\tsrc/main/java/ru/technoserv/services/EmployeeServiceImpl.java", "target": 0}
{"idx": 1054, "commit_message": "fixed minor memory leaks", "target": 0}
{"idx": 2562, "commit_message": "Merge tag 'f2fs-for-5.5' of [URL]/pub/scm/linux/kernel/git/jaegeuk/f2fs Pull f2fs updates from Jaegeuk Kim: \"In this round, we've introduced fairly small number of patches as below. Enhancements: - improve the in-place-update IO flow - allocate segment to guarantee no GC for pinned files Bug fixes: - fix updatetime in lazytime mode - potential memory leak in f2fs_listxattr - record parent inode number in rename2 correctly - fix deadlock in f2fs_gc along with atomic writes - avoid needless data migration in GC\" * tag 'f2fs-for-5.5' of [URL]/pub/scm/linux/kernel/git/jaegeuk/f2fs: f2fs: stop GC when the victim becomes fully valid f2fs: expose main_blkaddr in sysfs f2fs: choose hardlimit when softlimit is larger than hardlimit in f2fs_statfs_project() f2fs: Fix deadlock in f2fs_gc() context during atomic files handling f2fs: show f2fs instance in printk_ratelimited f2fs: fix potential overflow f2fs: fix to update dir's i_pino during cross_rename f2fs: support aligned pinned file f2fs: avoid kernel panic on corruption test f2fs: fix wrong description in document f2fs: cache global IPU bio f2fs: fix to avoid memory leakage in f2fs_listxattr f2fs: check total_segments from devices in raw_super f2fs: update multi-dev metadata in resize_fs f2fs: mark recovery flag correctly in read_raw_super_block() f2fs: fix to update time in lazytime mode", "target": 1}
{"idx": 2846, "commit_message": "[TTS] Delay instrumentation tests at startup for flakes. New startup timing causes flaky tests due to higher performance. This CL introduces a workaround that allows delays the the Contextual Search instrumentation test startup sequence for all CS tests. Contextual Search instrumentation tests now use the legacy delay. Also removes an attempted workaround for TTS that failed. BUG=635661 Review-Url: [URL]/2232383002", "target": 0}
{"idx": 524, "commit_message": "algorithms", "target": 0}
{"idx": 2780, "commit_message": "mt76: mt7915: avoid memcpy in rxv operation Avoid memcpy in Rx hot path to slightly improve performance.", "target": 1}
{"idx": 1537, "commit_message": "improve termination condition of recursive call of breakBlock() of MyblockBreakEventHandler", "target": 0}
{"idx": 594, "commit_message": "net: have ipconfig not wait if no dev is available [ Upstream commit cd7816d14953c8af910af5bb92f488b0b277e29d ] previous commit 3fb72f1e6e6165c5f495e8dc11c5bbd14c73385c makes IP-Config wait for carrier on at least one network device. Before waiting (predefined value 120s), check that at least one device was successfully brought up. Otherwise (e.g. buggy bootloader which does not set the MAC address) there is no point in waiting for carrier. Cc: Micha Nelissen [URL]> Cc: Holger Brunck [URL]>", "target": 0}
{"idx": 244, "commit_message": "Speed-up ScreenOrientationListenerTest. A criteria was always timing out instead of stopping when needed. BUG=None Review URL: [URL]/435193002", "target": 1}
{"idx": 989, "commit_message": "USB: metro-usb: fix tty_flip_buffer_push use commit b7d28e32c93801d60c1a7a817f774a02b7bdde43 upstream. Do not set low_latency flag at open as tty_flip_buffer_push must not be called in IRQ context with low_latency set.", "target": 0}
{"idx": 939, "commit_message": "Fixes some issues with jsonfile write/read This cleans up some of the use of the filepoller which makes reading significantly more robust and gives fewer changes to fallback to the polling based watcher. In a lot of cases, if the file was being rotated while we were adding it to the watcher, it would return an error that the file doesn't exist and would fallback. In some cases this fallback could be triggered multiple times even if we were already on the fallback/poll-based watcher. It also fixes an open file leak caused by not closing files properly on rotate, as well as not closing files that were read via the `tail` function until after the log reader is completed. Prior to the above changes, it was relatively simple to cause the log reader to error out by having quick rotations, for example: ``` $ docker run --name test --log-opt max-size=10b --log-opt max-files=10 -d busybox sh -c 'while true; do usleep 500000; echo hello; done' $ docker logs -f test ``` After these changes I can run this forever without error. Another fix removes 2 `os.Stat` calls when rotating files. The stat calls are not needed since we are just calling `os.Rename` anyway, which will in turn also just produce the same error that `Stat` would. These `Stat` calls were also quite expensive. Removing these stat calls also seemed to resolve an issue causing slow memory growth on the daemon.", "target": 1}
{"idx": 2033, "commit_message": "Update README.md + add appveyor badge for running tests on windows + remove \"dependencies out of date\" badge in favor of `kit` (to review which dependency versions the core team has vetted by hand, in production, and with our automated tests, run [`kit [URL]/mikermcneil/kit). Happy to change, patch, and/or maintain those dependencies as needed to fix bugs, improve performance, reduce overall `npm install` size, or get rid of deprecation messages. To suggest a change like that, please send a pull request to [URL]/mikermcneil/kit/blob/master/constants/trusted-releases.type.js)", "target": 0}
{"idx": 3205, "commit_message": "Address reviews - Some performance improvements - Method renaming", "target": 1}
{"idx": 2770, "commit_message": "mm: reduce the amount of work done when updating min_free_kbytes commit 938929f14cb595f43cd1a4e63e22d36cab1e4a1f upstream. Stable note: Fixes [URL]/show_bug.cgi?id=726210 . Large machines with 1TB or more of RAM take a long time to boot without this patch and may spew out soft lockup warnings. When min_free_kbytes is updated, some pageblocks are marked MIGRATE_RESERVE. Ordinarily, this work is unnoticable as it happens early in boot but on large machines with 1TB of memory, this has been reported to delay boot times, probably due to the NUMA distances involved. The bulk of the work is due to calling calling pageblock_is_reserved() an unnecessary amount of times and accessing far more struct page metadata than is necessary. This patch significantly reduces the amount of work done by setup_zone_migrate_reserve() improving boot times on 1TB machines. [URL]: coding-style fixes]", "target": 1}
{"idx": 3440, "commit_message": "More efficient next/previous links and more readable tests * No longer fetch and instantiate every project just to get next and previous; use 2 small queries instead * Reorganize the tests and utilize better language for readability", "target": 0}
{"idx": 3765, "commit_message": "Define trivially using qhyper thus avoiding some performance problems. 2005-05-31 Morten Welinder [URL]> \t* src/mathfunc.c (random_hypergeometric): Define trivially using \tqhyper thus avoiding some performance problems. \t(qhyper): Move from plugins/fn-r/extra.c \t(pfuncinverter): Improve case where initial guess is outside valid \trange.", "target": 1}
{"idx": 2097, "commit_message": "generator performance improvements etc", "target": 1}
{"idx": 3696, "commit_message": "UPSTREAM: zsmalloc: move pages_allocated to zs_pool (cherry-pick from commit 13de8933c96b4557f667c337676f05274e017f83) Currently, zram has no feature to limit memory so theoretically zram can deplete system memory. Users have asked for a limit several times as even without exhaustion zram makes it hard to control memory usage of the platform. This patchset adds the feature. Patch 1 makes zs_get_total_size_bytes faster because it would be used frequently in later patches for the new feature. Patch 2 changes zs_get_total_size_bytes's return unit from bytes to page so that zsmalloc doesn't need unnecessary operation(ie, << PAGE_SHIFT). Patch 3 adds new feature. I added the feature into zram layer, not zsmalloc because limiation is zram's requirement, not zsmalloc so any other user using zsmalloc(ie, zpool) shouldn't affected by unnecessary branch of zsmalloc. In future, if every users of zsmalloc want the feature, then, we could move the feature from client side to zsmalloc easily but vice versa would be painful. Patch 4 adds news facility to report maximum memory usage of zram so that this avoids user polling frequently via /sys/block/zram0/ mem_used_total and ensures transient max are not missed. This patch (of 4): pages_allocated has counted in size_class structure and when user of zsmalloc want to see total_size_bytes, it should gather all of count from each size_class to report the sum. It's not bad if user don't see the value often but if user start to see the value frequently, it would be not a good deal for performance pov. This patch moves the count from size_class to zs_pool so it could reduce memory footprint (from [255 * 8byte] to [sizeof(atomic_long_t)]). Bug: 25951511", "target": 1}
{"idx": 2583, "commit_message": "ENGR00180099 IPU-[MX6DL]: ipu performance test cause kernel dump 1. under vte test environment, boot up and run ipu_test.sh 7. kernel dump occur kernel BUG at drivers/mxc/ipu3/ipu_device.c:1153! Unable to handle kernel NULL pointer dereference at virtual address 00000000 pgd = 80004000 [00000000] *pgd=00000000 Internal error: Oops: 817 [#1] PREEMPT SMP Modules linked in: ov3640_camera adv7180_tvin ov5640_camera_mipi camera_sensor_clock CPU: 0 Not tainted (3.0.15-daily-01339-gddc0ae9 #1) PC is at __bug+0x1c/0x28 LR is at __bug+0x18/0x28 pc : [<80042210>] lr : [<8004220c>] psr: 60000013 sp : b41bfc80 ip : c0924000 fp : 00000000 r10: 00000000 r9 : 00000000 r8 : 00000000 r7 : 80a9e0f0 r6 : 80a9e10a r5 : 00000001 r4 : b46f0800 r3 : 00000000 r2 : 80a4e57c r1 : 60000093 r0 : 00000038 Flags: nZCv IRQs on FIQs on Mode SVC_32 ISA ARM Segment kernel Control: 10c53c7d Table: 4481404a DAC: 00000015 Process ipu0_task (pid: 400, stack limit = 0xb41be2f0) Stack: (0xb41bfc80 to 0xb41c0000) fc80: 80a9a180 8036fe24 b46f0800 00000000 60000013 b41be000 b46f0800 80373750 fca0: b41bfcd4 80065104 b418aadc 80096598 b418a8ac 60000013 00000000 b418a7e0 fcc0: b41bfcdc b418a7e8 80a4e598 8006a814 b418a7e0 80065268 00000000 00000094 fce0: 00000000 60000013 b41be000 b46f0800 00000000 00000000 00000000 803742e8 fd00: 00000000 b41bfd04 b41bfd04 00000000 00000000 00000000 00000000 00000000 fd20: 00000000 b41be000 00000001 00000000 00000000 b41bfd34 b41bfd34 00000000 fd40: 00000000 00000000 00000001 00000000 00000000 b41bfd54 b41bfd54 00000000 fd60: b41bfd60 b41bfd60 00000000 b4198474 00000000 800f9a34 00000000 800f8f4c fd80: 00000000 b2000170 b41bfe18 b2001088 80a3bcfc b2000194 b2010bfc b2010bb0 fda0: b41bfeb8 800f86d0 00000000 b41bfe24 b41bfe34 b41bfeb8 00000000 b2000194 fdc0: b2010bb0 00000000 b2001088 800f6e00 b400f0a0 b41bfe18 b2010bb0 b2001088 fde0: b41bfe36 b41bff17 ffffffff b41bff24 b41bfe36 b41bff17 ffffffff b41bff24 fe00: 0000002f b41bff24 00000002 00000003 0000000a ffffffff 00000000 00000000 fe20: 00000000 00000000 b2010bb0 b4114700 b400faa0 39316240 ffffff33 b41be000 fe40: b41be000 b41bff80 00000000 800e69fc 80a3bcc0 8005dc90 80a7b8ec ffffffff fe60: b4050000 00000002 00000000 80087e2c 00003e80 00000000 b4189998 8c008f90 fe80: 28345a72 b41bfeac 00003e80 00000000 b4189960 8c008f90 b4189960 8c008f90 fea0: 80a7b8ec ffffffff b4126000 00000002 00000000 80087e2c 80a3bcc0 8005dc90 fec0: 0000091d 00000000 b418a818 8c008f90 299c8c15 b41bfefc b418a818 8c008f90 fee0: b418a820 8005c700 00000000 8c008f90 b418a818 b418a818 b41bff14 8005e1d8 ff00: 00000002 8c008f40 b418a7e0 b41be000 b41bffc4 804ba600 8c008f40 b4040000 ff20: b41bff44 8005f920 b4040000 8c008f40 80037f40 80037f40 800371b4 80037f40 ff40: 80037f40 80037f40 800371b4 80037f40 b4189960 b403fe38 80064f14 00000001 ff60: b403fe44 00000000 00000000 00000003 b41bffa4 8005bec0 ffffffff 00000000 ff80: b4040000 00000000 b418a7e0 80082ebc b41bff90 b41bff90 00000031 803741e5 ffa0: 00000013 b403fe28 80a9e138 803741e4 00000013 00000000 00000000 00000000 ffc0: 00000000 80082898 8003fa08 00000000 80a9e138 00000000 00000000 00000000 ffe0: b41bffe0 b41bffe0 b403fe28 80082818 8003fa08 8003fa08 00000000 00000000 [<80042210>] (__bug+0x1c/0x28) from [<8036fe24>] (_get_vdoa_ipu_res+0x23c/0x25c) [<8036fe24>] (_get_vdoa_ipu_res+0x23c/0x25c) from [<80373750>] (get_res_do_task+0x10/0x458) [<80373750>] (get_res_do_task+0x10/0x458) from [<803742e8>] (ipu_task_thread+0x104/0xa60) [<803742e8>] (ipu_task_thread+0x104/0xa60) from [<80082898>] (kthread+0x80/0x88) 
[<80082898>] (kthread+0x80/0x88) from [<8003fa08>] (kernel_thread_exit+0x0/0x8) Code: e59f0010 e1a01003 eb11d231 e3a03000 (e5833000) ---[ end trace db719a475f81b6f8 ]---", "target": 0}
{"idx": 1130, "commit_message": "Merge pull request #90 from codeforamerica/install-improvements-#89 Install improvements", "target": 0}
{"idx": 3217, "commit_message": "Hint at FIXED_POINT for better (SBR) performance.", "target": 1}
{"idx": 192, "commit_message": "Simulator update. Memory read/write functions implemented. updateCpu function mostly implemented.", "target": 0}
{"idx": 2095, "commit_message": "Removed horizontal grid visualization/tracking and improved performance", "target": 1}
{"idx": 845, "commit_message": "Merge pull request #6063 from ampproject/add/6053-add-wp-cli-wrapper-for-amp-binary Refactor CLI subsystem and add optimize command", "target": 0}
{"idx": 463, "commit_message": "block, bfq: add Early Queue Merge (EQM) to BFQ-v7r8 for 3.10.8+ A set of processes may happen to perform interleaved reads, i.e.,requests whose union would give rise to a sequential read pattern. There are two typical cases: in the first case, processes read fixed-size chunks of data at a fixed distance from each other, while in the second case processes may read variable-size chunks at variable distances. The latter case occurs for example with QEMU, which splits the I/O generated by the guest into multiple chunks, and lets these chunks be served by a pool of cooperating processes, iteratively assigning the next chunk of I/O to the first available process. CFQ uses actual queue merging for the first type of rocesses, whereas it uses preemption to get a sequential read pattern out of the read requests performed by the second type of processes. In the end it uses two different mechanisms to achieve the same goal: boosting the throughput with interleaved I/O. This patch introduces Early Queue Merge (EQM), a unified mechanism to get a sequential read pattern with both types of processes. The main idea is checking newly arrived requests against the next request of the active queue both in case of actual request insert and in case of request merge. By doing so, both the types of processes can be handled by just merging their queues. EQM is then simpler and more compact than the pair of mechanisms used in CFQ. Finally, EQM also preserves the typical low-latency properties of BFQ, by properly restoring the weight-raising state of a queue when it gets back to a non-merged state.", "target": 1}
{"idx": 886, "commit_message": "Improve spacing", "target": 0}
{"idx": 1204, "commit_message": "minor improvements", "target": 0}
{"idx": 3482, "commit_message": "powerpc/oprofile: Handle events that raise an exception without overflowing commit ad5d5292f16c6c1d7d3e257c4c7407594286b97e upstream. Commit 0837e3242c73566fc1c0196b4ec61779c25ffc93 fixes a situation on POWER7 where events can roll back if a specualtive event doesn't actually complete. This can raise a performance monitor exception. We need to catch this to ensure that we reset the PMC. In all cases the PMC will be less than 256 cycles from overflow. This patch lifts Anton's fix for the problem in perf and applies it to oprofile as well.", "target": 0}
{"idx": 1284, "commit_message": "feat(groovy): Improve request and response for @OnRequestContent and @OnResponseContent Closes gravitee-io/issues#692", "target": 0}
{"idx": 193, "commit_message": "Reintroduce layer+network info", "target": 0}
{"idx": 2289, "commit_message": "improve performance of list rendering", "target": 1}
{"idx": 2614, "commit_message": "nvme: implement mq_ops->commit_rqs() hook Split the command submission and the SQ doorbell ring, and add the doorbell ring as our ->commit_rqs() hook. This allows a list of requests to be issued, with nvme only writing the SQ update when it's necessary. This is more efficient if we have lists of requests to issue, particularly on virtualized hardware, where writing the SQ doorbell is more expensive than on real hardware. For those cases, performance increases of 2-3x have been observed. The use case for this is plugged IO, where blk-mq flushes a batch of requests at the time.", "target": 1}
{"idx": 120, "commit_message": "gpu: Remove more glClear calls with gl_clear_broken BUG=612474 CQ_INCLUDE_TRYBOTS=master.tryserver.chromium.linux:linux_optional_gpu_tests_rel;master.tryserver.chromium.mac:mac_optional_gpu_tests_rel;master.tryserver.chromium.win:win_optional_gpu_tests_rel Review-Url: [URL]/2180963003", "target": 0}
{"idx": 1230, "commit_message": "Changed: Add attachments directory to .gitignore. Changed: More attachment improvements using FineUploader.js. Lots and lots of changes here. Changed: PM Export now uses built-in PHP zip support. Probably needs PHP 5.3.x.", "target": 0}
{"idx": 1978, "commit_message": "This is a stable version of plugin_cepa. This version is cpu-only, so all references to cuda and gpus are removed - I've gone back to the standard Makefile (this one is grabbed from plugin_mp2). I removed some extraneous headers as well. Former-commit-id: 880f705504bd847f8aa191490f0f13ddf8d5f36b", "target": 0}
{"idx": 1064, "commit_message": "Make it possible to disable animation rendering in background Now, when the cache is recalculated before every playback start, we can let the user to disable background rendering of the animation. That will save him cpu cycles and battery life. See the option in Preferences->General->Recalculate animation cache in background [URL]", "target": 0}
{"idx": 758, "commit_message": "optimize build rules", "target": 1}
{"idx": 2082, "commit_message": "Merge branch 'develop' into feature/tests * develop: Chg: Upgrade dependencies to latest versions Chg: Extract access_token into a separate variable (JMeter) Fix: Invalidate cached project when changing contents Chg: Improve performance by using fewer SQL queries (resolves #52) Fix: Fix alignment of input labels in Safari/Firefox (resolves #51) New: Add validation to model update New: Add more info to DTOs New: Add permission checks to message and locale API Chg: Simplify Keycloak configuration Chg: Remove duplicate API methods in web UI (resolves #48) Chg: Use keycloak and github as auth providers in dev mode New: Add missing messages New: Allow using the API without access_token (resolves #46) Fix: Fix merging users (resolves #47) Fix: Prevent NPE when changing list of available auth providers Chg: Display nicer messages when no content is available New: Integrate Keycloak authorization # Conflicts: #\ttranslatr/app/models/User.java #\ttranslatr/app/views/login.scala.html", "target": 1}
{"idx": 3702, "commit_message": "Try reading/writing more than one byte at a time, to improve overall performance. Note that many consumers still read one byte at a time though. This patch has once been submitted to me by Bernd Walter < >, minor tweak by me (mainly to get it running under Linux, too).", "target": 1}
{"idx": 3657, "commit_message": "Added some timers so performance can be measured. Also removed some debug statements.", "target": 0}
{"idx": 942, "commit_message": "add back decryption check to cas shell encrypt function (#3907) * add back decryption check to cas shell encrypt function * add unit test to validate all good algorithms, warn if bad alg used * Don't allow blacklisted jsypt algorirthms * Don't test decrypt in shell b/c attempt to use blacklist alg will fail * fix string format * add method to allow setting bad algorithm (for unit tests)", "target": 0}
{"idx": 2215, "commit_message": "fnic: NoFIP solicitation frame in NONFIP mode and changed IO Throttle count This patch contains following three minor fixes. 1) During Probe, fnic was sending FIP solicitation in Non FIP mode which is not expected, setting the internal fip state to Non FIP mode explicitly, avoids sending FIP frame. 2) When target goes offline, all outstanding IOs belong to the target will be terminated by driver, If the termination count is high, then it influences firmware responsiveness. To improve the responsiveness, default IO throttle count is reduced to 256. 3) Accessing Virtual Fabric Id (vfid) and fc_map of Fibre-Channel Forwarder(FCF) is invalid in fnic driver when Clear Virtual Link(CVL) is received prior to receiving flogi reject from switch. As CVL clears all FCFs.", "target": 1}
{"idx": 127, "commit_message": "Fix crash in app_voicemail during close_mailbox In r354890, a memory leak in app_voicemail was fixed by properly disposing of the allocated heard/deleted pointers. However, there are situations, particularly when no messages are found in a folder, where these pointers are not allocated and not NULL. In that case, an invalid free would be attempted, which could crash app_voicemail. As there are a number of code paths where this could occur, this patch uses the number of messages detected in the folder before it attempts to free the pointers. This resolves the crash detected in the Asterisk Test Suite's check_voicemail_nominal test. ........ Merged revisions 356797 from [URL]/svn/asterisk/branches/1.8 ........ Merged revisions 356798 from [URL]/svn/asterisk/branches/10", "target": 0}
{"idx": 3090, "commit_message": "block: Reserve only one queue tag for sync IO if only 3 tags are available In case a device has three tags available we still reserve two of them for sync IO. That leaves only a single tag for async IO such as writeback from flusher thread which results in poor performance. Allow async IO to consume two tags in case queue has three tag availabe to get a decent async write performance. This patch improves streaming write performance on a machine with such disk from ~21 MB/s to ~52 MB/s. Also postmark throughput in presence of streaming writer improves from 8 to 12 transactions per second so sync IO doesn't seem to be harmed in presence of heavy async writer.", "target": 1}
{"idx": 4085, "commit_message": "[PATCH] Enforced rotation: Minor performance optimization in\n do_flex[1,2]_lowlevel", "target": 1}
{"idx": 1617, "commit_message": "Improve keywords in package.json", "target": 0}
{"idx": 1501, "commit_message": "Some design improvements and code cleanup", "target": 0}
{"idx": 519, "commit_message": "sched/fair: Introduce C-state aware task placement for small tasks Small tasks execute for small durations. This means that the power cost of taking CPUs out of a low power mode outweigh any performance advantage of using an idle core or power advantage of using the most power efficient CPU. Introduce C-state aware task placement for small tasks. This requires a two pass approach where we first determine the most power effecient CPU and establish a band of CPUs offering a similar power cost for the task. The order of preference then is as follows: 1) Any mostly idle CPU in active C-state in the same power band. 2) A CPU with the shallowest C-state in the same power band. 3) A CPU with the least load in the same power band. 4) Lowest power CPU in a higher power band. The patch also modifies the definition of a small task. Small tasks are now determined relative to minimum capacity CPUs in the system and not the task CPU. ]", "target": 0}
{"idx": 958, "commit_message": "Merge branch 'master' of [URL]/palladin/LinqOptimizer", "target": 0}
{"idx": 3250, "commit_message": "Merge James patch with default location notification improvements.", "target": 0}
{"idx": 2275, "commit_message": "More efficient and *faster* way of reading the whole file. Previous owner's code was too slow when reading files. This was especially noticable when reading in bulk. With this optimization, reading files is much faster.", "target": 1}
{"idx": 658, "commit_message": "midltests: improve NDR64 downgrade metze", "target": 0}
{"idx": 3112, "commit_message": "Notebookfixes (#166) * MAINT make performance table flexible, show only what is needed * FIX automl-signature + logo * ADD tabs for budgets * FIX cmd-line toggle of cfp-timeslider * ADD explanation to missing plot (alt text), add budgets to docs * RM irrelevant info from overview-table * ADD axis labels to bohb plot * RM Performance Table tag * ADD color-selection in bohb-plot * FIX adjust bohb-plot-widget-sizes * FIX verbose OFF filters everything", "target": 0}
{"idx": 2368, "commit_message": "small performance optimizations when ajax submitting forms", "target": 1}
{"idx": 1159, "commit_message": "refs #2885 added [URL] as search engine", "target": 0}
{"idx": 2141, "commit_message": "Added NFS and CORE settings to Vagrantfile to improve performance (closes #79)", "target": 1}
{"idx": 3288, "commit_message": "Turn on the kUsePathBoundsForClip_RecordingFlag in bench, gm and tools that use class PictureRenderer Chrome uses this flag for recording to skpicture in order to improve performance. Therefore, skai benchmarks should run with this flag enabled, and we need gm and render_pictures test coverage to validate it. In gm, the vanilla SkPicture test step will still run without the flag to ensure that case still gets test coverage, while the SkPicture test steps that use rtree and tileGrid will now run with the flag enabled. Review URL: [URL]/7111043", "target": 1}
{"idx": 2389, "commit_message": "Performance improvement in ApplicationBundler", "target": 1}
{"idx": 630, "commit_message": "-Added Captain's Backpack & Satchel: [URL]/u/831776/comdoms.png -A few minor improvements to my own sprites -Map changes to the Kitchen so the chef is more visible to his patrons -Arrivals airlocks are no longer airless when shuttles dock", "target": 0}
{"idx": 66, "commit_message": "Add some parentheses to silence the warnings: cpuinfo.c:293: warning: suggest parentheses around && within || cpuinfo.c:297: warning: suggest parentheses around && within ||", "target": 0}
{"idx": 3193, "commit_message": "Performance optimization", "target": 1}
{"idx": 2224, "commit_message": "Dump xdebug filter to improve code coverage build performance", "target": 0}
{"idx": 1810, "commit_message": "MIPS: Fix sched_getaffinity with MT FPAFF enabled commit 1d62d737555e1378eb62a8bba26644f7d97139d2 upstream. p->thread.user_cpus_allowed is zero-initialized and is only filled on the first sched_setaffinity call. To avoid adding overhead in the task initialization codepath, simply OR the returned mask in sched_getaffinity with p->cpus_allowed.", "target": 1}
{"idx": 3106, "commit_message": "[4.0] Improve newsfeed list performance when filtering by tags (#26495)", "target": 1}
{"idx": 951, "commit_message": "Vulkan: Optimize texture upload barriers When flushing staged uploads to an image, a 64-wide bitfield is used to track subresources that are uploaded since the last barrier. If a collision is detected, a barrier is inserted and the bitfield is reset. If the image has more than 64 subresources, some subresources would map to the same bit and cause a few unnecessary barriers. Texture upload benchmarks show 5% to 10% improvement both in CPU and GPU time. Bug: angleproject:3347", "target": 1}
{"idx": 2576, "commit_message": "Merge trunk [ Albert Astals Cid ] * Focus the shutdown dialog (LP: #1417991) * Make url-dispatching scope activation when the dash is not on the main scopes (LP: #1364306) * We use autopilot3 now [ CI Train Bot ] * New rebuild forced. * Resync trunk. added: po/or.po [ Daniel d'Andrada ] * Make tst_PreviewListView and tst_GenericScopeView work in out-of- source builds * MouseArea that shows the indicators panel should cover the indicators bar only (LP: #1439318) * Surface gets active focus also with mouse clicks [ Josh Arenson ] * Add a mode option to unity8 for selecting greeter mode in the future * Remove PkgConfig include from Launcher plugin to fix cross-compile (LP: #1437702) [ Leo Arias ] * Fixed the check for string in the lock screen test. (LP: #1434518) [ Michael Terry ] * Make sure the edge tutorial is destroyed when we receive a call during the wizard. (LP: #1436349) * Skip parts of the edge tutorial that require a touch device (like spread and bottom edge). (LP: #1434712) [ Michael Zanetti ] * Don't hide stages just because they're empty (LP: #1439615) * [Launcher] fix bug where an item would disappear even though the app is running (LP: #1438172) [ Nick Dedekind ] * Fixed autopilot test failures related to udev input failure for power button. * Made improvements for laggy indicator backends (lp#1390136). (LP: #1390136) [ Pete Woods ] * GPS only goes active when the Dash is in the foreground (LP: #1434379) [ [URL]> ] * Modified the wrong time format in ScreenGrabber (LP: #1436006) [ Albert Astals Cid ] * Fix regression making pan not possible in Zoomable Image (LP: #1433506) * Make sure m_firstVisibleIndex is correctly set after processing changeSet.removes (LP: #1433056) * make pot_file [ Daniel d'Andrada ] * Don't show the rotating rect in \"make tryFoo\". [ Michael Zanetti ] * make pinlockscreen adjust better to larger displays", "target": 0}
{"idx": 2181, "commit_message": "Better orientation change performance Also fix the way things are calculated to determine the height of the shadow when there are just enough notes to not quite fill the screen. [#85 state:resolved]", "target": 0}
{"idx": 2635, "commit_message": "Rollup merge of #66913 - VirrageS:help-self, r=varkor,Centril Suggest calling method when first argument is `self` Closes: #66782 I've explored different approaches for this MR but I think the most straightforward is the best one. I've tried to find out if the methods for given type exist (to maybe have a better suggestion), but we don't collect them anywhere and collecting them is quite problematic. Moreover, collecting all the methods would require rewriting big part of the code and also could potentially include performance degradation, which I don't think is necessary for this simple case.", "target": 0}
{"idx": 1487, "commit_message": "canevas translation - first pass to improve later", "target": 0}
{"idx": 1891, "commit_message": "kernel/smp.c: free related resources when failure occurs in hotplug_cfd() When failure occurs in hotplug_cfd(), need release related resources, or will cause memory leak.", "target": 0}
{"idx": 3910, "commit_message": "Moved ScParameterizedType cache to global cache. Better performance, less memory.", "target": 1}
{"idx": 1456, "commit_message": ":ok_hand: IMPROVE: Added 'aromas' to the CSV export file data", "target": 0}
{"idx": 2529, "commit_message": "Refactor 2: move misc preprocessing funcs to Helper, make MILP conversion more efficient", "target": 1}
{"idx": 3486, "commit_message": "Performance improvement to if_contains and modify_if. Works with pointers rather than constructing iterators. A test I conducted which heavily uses these functions a ~10% improvement in runtime.", "target": 1}
{"idx": 1035, "commit_message": "Adding documentation concerning local memory allocation issues that may happen in the future.", "target": 0}
{"idx": 2299, "commit_message": "Tiny performance optimization added", "target": 1}
{"idx": 1808, "commit_message": "Improve docs for Hopper Device example.", "target": 0}
{"idx": 2733, "commit_message": "MDL-46147 modinfo: performance improvement for course page (check filterall)", "target": 1}
{"idx": 4119, "commit_message": "[PATCH] Improved GB performance in mixed precision", "target": 1}
{"idx": 252, "commit_message": "improve bash function chr", "target": 0}
{"idx": 2177, "commit_message": "- Patch #7967 by matthias: small patch to improve the robustness of the tablesorting code.", "target": 0}
{"idx": 1944, "commit_message": "Improve spec tests.", "target": 0}
{"idx": 3815, "commit_message": "chore(deps): Bump IPython from 7.31.0 to 8.4.0 Major version includes multiple bug, security & performance fixes changelog: https://ipython.readthedocs.io/en/stable/whatsnew/version8.html", "target": 1}
{"idx": 2207, "commit_message": "Merge pull request #8570 from qlyoung/revert-ringbuf-readv Revert \"bgpd: improve socket read performance", "target": 1}
{"idx": 4132, "commit_message": "[PATCH] On behalf of Ryan Olson: Checking in the changes for server\n side registration to improve performance", "target": 1}
{"idx": 2396, "commit_message": "improvement for [URL]/activemq/browse/AMQ-1936 - improve vm transport stopping performances", "target": 1}
{"idx": 1784, "commit_message": "optimize luo-meng", "target": 1}
{"idx": 1868, "commit_message": "Gave up repeatable deserialization Adding support for repeatable to castBuffer seems hard without performance impact. For now, here removes repeatable read support.", "target": 0}
{"idx": 3613, "commit_message": "SELinux: Reduce overhead of mls_level_isvalid() function call Date\tMon, 10 Jun 2013 13:55:08 -0400 v4->v5: - Fix scripts/checkpatch.pl warning. v3->v4: - Merge the 2 separate while loops in ebitmap_contains() into a single one. v2->v3: - Remove unused local variables i, node from mls_level_isvalid(). v1->v2: - Move the new ebitmap comparison logic from mls_level_isvalid() into the ebitmap_contains() helper function. - Rerun perf and performance tests on the latest v3.10-rc4 kernel. While running the high_systime workload of the AIM7 benchmark on a 2-socket 12-core Westmere x86-64 machine running 3.10-rc4 kernel (with HT on), it was found that a pretty sizable amount of time was spent in the SELinux code. Below was the perf trace of the \"perf record -a -s\" of a test run at 1500 users: 5.04% ls [kernel.kallsyms] [k] ebitmap_get_bit 1.96% ls [kernel.kallsyms] [k] mls_level_isvalid 1.95% ls [kernel.kallsyms] [k] find_next_bit The ebitmap_get_bit() was the hottest function in the perf-report output. Both the ebitmap_get_bit() and find_next_bit() functions were, in fact, called by mls_level_isvalid(). As a result, the mls_level_isvalid() call consumed 8.95% of the total CPU time of all the 24 virtual CPUs which is quite a lot. The majority of the mls_level_isvalid() function invocations come from the socket creation system call. Looking at the mls_level_isvalid() function, it is checking to see if all the bits set in one of the ebitmap structure are also set in another one as well as the highest set bit is no bigger than the one specified by the given policydb data structure. It is doing it in a bit-by-bit manner. So if the ebitmap structure has many bits set, the iteration loop will be done many times. The current code can be rewritten to use a similar algorithm as the ebitmap_contains() function with an additional check for the highest set bit. The ebitmap_contains() function was extended to cover an optional additional check for the highest set bit, and the mls_level_isvalid() function was modified to call ebitmap_contains(). With that change, the perf trace showed that the used CPU time drop down to just 0.08% (ebitmap_contains + mls_level_isvalid) of the total which is about 100X less than before. 0.07% ls [kernel.kallsyms] [k] ebitmap_contains 0.05% ls [kernel.kallsyms] [k] ebitmap_get_bit 0.01% ls [kernel.kallsyms] [k] mls_level_isvalid 0.01% ls [kernel.kallsyms] [k] find_next_bit The remaining ebitmap_get_bit() and find_next_bit() functions calls are made by other kernel routines as the new mls_level_isvalid() function will not call them anymore. This patch also improves the high_systime AIM7 benchmark result, though the improvement is not as impressive as is suggested by the reduction in CPU time spent in the ebitmap functions. The table below shows the performance change on the 2-socket x86-64 system (with HT on) mentioned above. +--------------+---------------+----------------+-----------------+ | Workload | mean % change | mean % change | mean % change | | | 10-100 users | 200-1000 users | 1100-2000 users | +--------------+---------------+----------------+-----------------+ | high_systime | +0.1% | +0.9% | +2.6% | +--------------+---------------+----------------+-----------------+", "target": 1}
{"idx": 605, "commit_message": "ia64/pv_ops: move down __kernel_syscall_via_epc. Move down __kernel_syscall_via_epc to the end of the page. We want to paravirtualize only __kernel_syscall_via_epc because it includes privileged instructions. Its paravirtualization increases its symbols size. On the other hand, each paravirtualized gate must have e symbols of same value and size to native's because the page is mapped to GATE_ADDR and GATE_ADDR + PERCPU_PAGE_SIZE and vmlinux is linked to those symbols. Later to have the same symbol size, we pads NOPs at the end of __kernel_syscall_via_epc. Move it after other functions to keep symbols of other functions have same values and sizes.", "target": 0}
{"idx": 516, "commit_message": "Merge pull request #222 from nsob1c12/improve_ca_error_message Fix support for areaDetector drivers without 'Multiple' image mode", "target": 0}
{"idx": 3294, "commit_message": "chore(reference): improve lookup for Di element make lookup of referencing DI element more efficient by only reverse traversing references which originate from a DI element. related to #CAM-1835", "target": 1}
{"idx": 730, "commit_message": "mini-os: Fix memory leaks in xs_read() and xs_write() xenbus_read() and xenbus_write() will allocate memory for error message if any error occurs, this memory should be freed.", "target": 0}
{"idx": 1556, "commit_message": "Manual update for dynamical matrix output", "target": 0}
{"idx": 2106, "commit_message": "[mac] ImageBuffer should create accelerated buffers for small canvases, but we shouldn't force them to create compositing layers [URL]/show_bug.cgi?id=107804 <rdar://problem/11752381> Reviewed by Simon Fraser. Make all canvases IOSurface-backed if requested, instead of having a size threshold under which we won't use accelerated canvas. Make requiresCompositingForCanvas take the size of the canvas into account, using the threshold which was previously in ImageBuffer to determine whether or not a canvas should be forced into a compositing layer. This improves canvas performance on some benchmarks [URL]/html5/javascript/QuadTree/examples/collision.html, for example) significantly, in cases where canvases which fall below the size limit (and thus are unaccelerated) are being drawn rapidly into either accelerated tiles or another accelerated canvas, by preventing excessive copying to/from the GPU. * platform/graphics/cg/ImageBufferCG.cpp: (WebCore): (WebCore::ImageBuffer::ImageBuffer): * rendering/RenderLayerCompositor.cpp: (WebCore::RenderLayerCompositor::requiresCompositingForCanvas):", "target": 1}
{"idx": 4056, "commit_message": "[PATCH] tutorials: Changed compressed ascii output to binary to\n improve IO performance\n\nalso rationalized the writeCompression specification", "target": 1}
{"idx": 2435, "commit_message": "Bug#17928281 'CHECK_PERFORMANCE_SCHEMA()' LEAVES 'CURRENT_THD' REFERRING DESTRUCTED THD OBJ Prior to fix, function check_performance_schema() could leave behind stale pointers in thread local storage, for the following keys: - THR_THD (used by _current_thd) - THR_MALLOC (used for memory allocation) This is an unsafe practice, which can potentially cause crashes, and that can cause other bugs when code is modified during maintenance. With this fix, thread local storage keys used temporarily within function check_performance_schema() are cleaned up after use.", "target": 0}
{"idx": 2911, "commit_message": "Improve performance and memory use * avoid accessing reactive properties in scanning process. This seems to put heavy pressure on memory. * pick lower \"ideal\" resoltion for cameras to make scanning frames less expensive. * reduce default scan interval so more frames can be scanned in less times.", "target": 1}
{"idx": 513, "commit_message": "[mle] remove existing MLE Data Response in queues when there is newer Network Data (#2479) This commit helps to reduce additional message exchanges when multiple network data updates in short time.", "target": 1}
{"idx": 4107, "commit_message": "[PATCH] Fixed performance regression on Kepler", "target": 1}
{"idx": 1895, "commit_message": "Merge pull request #131 from googlegenomics/staging-2 Initial v1beta2 -> v1 migration guide", "target": 0}
{"idx": 121, "commit_message": "Reduce the cost of platform/loader/fetch/resource.h This CL reduces the estimated pre-processed size of resource.h from 3.50MB to 2.35MB. cors_error_status.h, memory_pressure_listener.h, and resource_error.h are included by resource.h directly or indirectly. The changes in these files contributes to the resource.h reduction. Bug: 242216", "target": 0}
{"idx": 1548, "commit_message": "[PeepholeOptimizer] Look through PHIs to find additional register sources Reapply 243271 with more fixes; although we are not handling multiple sources with coalescable copies, we were not properly skipping this case. - Teaches the ValueTracker in the PeepholeOptimizer to look through PHI instructions. - Add findNextSourceAndRewritePHI method to lookup into multiple sources returnted by the ValueTracker and rewrite PHIs with new sources. With these changes we can find more register sources and rewrite more copies to allow coaslescing of bitcast instructions. Hence, we eliminate unnecessary VR64 <-> GR64 copies in x86, but it could be extended to other archs by marking \"isBitcast\" on target specific instructions. The x86 example follows: A: psllq %mm1, %mm0 movd %mm0, %r9 jmp C B: por %mm1, %mm0 movd %mm0, %r9 jmp C C: movd %r9, %mm0 pshufw $238, %mm0, %mm0 Becomes: A: psllq %mm1, %mm0 jmp C B: por %mm1, %mm0 jmp C C: pshufw $238, %mm0, %mm0 Differential Revision: [URL]/D11197 rdar://problem/20404526", "target": 0}
{"idx": 2447, "commit_message": "Merge branch 'u300-cleanup' of [URL]/pub/scm/linux/kernel/git/linusw/linux-stericsson into next/cleanup From Linus Walleij [URL]> This patch set does a number of cleanups and a minor improvement to U300, paving the way for single zImage and device tree: - Deprecate ancient platforms to make the following patches easier to make... - Move out one header to platform data and one to the mach-u300 proper to depopulate <mach/*> - Consolidate core machine files - Convert to sparse IRQs * 'u300-cleanup' of [URL]/pub/scm/linux/kernel/git/linusw/linux-stericsson: ARM: u300: convert to sparse IRQs ARM: u300: move DMA channel header into mach-u300 ARM: u300: delete remnant clkdev.h file ARM: u300: merge u300.c into core.c and rid headers pinctrl/coh901: move header to platform data dir pinctrl/coh901: retire ancient GPIO block versions ARM: u300: retire ancient platforms", "target": 0}
{"idx": 2357, "commit_message": "Fix performance problems in Matrix equality functions. Using the getColumnNRowM methods on both this and other is far slower than directly using the arrays.", "target": 1}
{"idx": 947, "commit_message": "ardana: Make the dns server IP addresses from the mgmt net available We need to set the dns servers in the ardana input model correctly so we need to get the nameservers from the network. Co-Authored-By: Ralf Haferkamp [URL]>", "target": 0}
{"idx": 2408, "commit_message": "Merge pull request #321 from coliff/patch-4 change to minified 'autocomplete.js' for improved performance", "target": 1}
{"idx": 615, "commit_message": "modified Q_SRC_ROOT = pwd for DATA_LOAD added performance testing for DATA_LOAD", "target": 0}
{"idx": 3599, "commit_message": "Set default mtu to 64K loopback current mtu of 16436 bytes allows no more than 3 MSS TCP segments per frame, or 48 Kbytes. Changing mtu to 64K allows TCP stack to build large frames and significantly reduces stack overhead. Performance boost on bulk TCP transferts can be up to 30 %, partly because we now have one ACK message for two 64KB segments, and a lower probability of hitting /proc/sys/net/ipv4/tcp_reordering default limit.", "target": 1}
{"idx": 1903, "commit_message": "documenting algorithm for standard tibetan", "target": 0}
{"idx": 2731, "commit_message": "Name: VMMaker.oscog-tfrel.1719 Author: tfrel Time: 9 March 2016, 10:35:56.474573 pm UUID: b09bd0a2-7d2a-41d2-9794-b83f2881c59e Ancestors: VMMaker.oscog-eem.1718 Improve the performance of BitBltSimulator by using += rather than + and new assignment for the src and dest index pointers if they are CObjectAccessors. This avoids creating many copies of CObjectAccessors in the inner BitBlt loops, and thus improves simulation performance dramatically. For the generator, the code is the same as previously, but now in an inlined method, so nothing changes in C-land.", "target": 1}
{"idx": 2856, "commit_message": "07/27/2016: Made good progress on the new SPICE lexer; cleaner and more efficient. Just about complete", "target": 1}
{"idx": 3925, "commit_message": "supported possiblity of item substitute (fixed bug) in setup_fields more efficient reference creation fixed table_name of Field in temporary table", "target": 1}
{"idx": 3048, "commit_message": "Make extraction faster (#186) * Added a no_parallelization mode, to be used while testing. Added a main function for one unittest, to be used in profiling studies * Small changes to the peak finder function that increases the performance on my sample up to 20 % * Refactored the cwt coefficients function completely. Another 20 % performance boost... * Another performance boost in the symmetry looking function. DataFrames are evil * Again, not use pandas Series. Another boost of 10%... * Use numpys optimized calculations * Make the symmetry_looking function an apply function to cache results * Refactored the AR function and increased its performance by a large factor * Merged the serial and the per_kind extraction function * Removed the in_parallel from the name as it doed not make much sense anymore * Moved the try..except to the place it belongs to", "target": 1}
{"idx": 522, "commit_message": "Bring back the Connect to dialog. It still needs some work, but it's 2008-02-21 Vincent Untz [URL]> \tBring back the Connect to dialog. It still needs some work, but it's \tbetter than nothing. \t* libnautilus-private/nautilus-bookmark.c: (nautilus_bookmark_new): \tActually save the name in the bookmark, instead of forgetting it. \t* src/Makefile.am: Updated to build the connect dialog stuff. \t* src/nautilus-connect-server-dialog-main.c: (show_uri), \t(nautilus_connect_server_dialog_present_uri), (main): \tPort to gio. We use g_app_info_launch_default_for_uri() to open the \tURI, but it will need some more work because it doesn't automount the \tURI. \t* src/nautilus-connect-server-dialog-nonmain.c: \t(nautilus_connect_server_dialog_present_uri): Trivial update. \t* src/nautilus-connect-server-dialog.[ch]: (get_method_description), \t(nautilus_connect_server_dialog_finalize), (connect_to_server), \t(response_callback), (setup_for_type), (display_server_location), \t(nautilus_connect_server_dialog_init), \t(nautilus_connect_server_dialog_new): Port to gio. Add bookmark saving \tfeature, to replace the old gnome-vfs network volumes. Remove the \tBrowse button, which isn't really needed there. Needs some more polish. \t* src/nautilus-shell-ui.xml: Uncomment the \"Connect to\" action \t* src/nautilus-window-menus.c: (action_connect_to_server_callback): \tUncomment code to make use of the dialog svn path=/trunk/; revision=13797", "target": 0}
{"idx": 3843, "commit_message": "use array instead of int map for itemTags (#1357) The itemTags field of the index is only used for findKeys and findValues. In the findKeys case it loops over all keys and the simple array is faster. For the findValues case, it gets the value for a key which is now linear for the number of tags so will be a bit slower. The main benefit is it can reduce the memory use considerably for large indexes. The IntIntHashMap has an overhead of 40 bytes and also allocates an array of 2x the size to avoid collisions in the map. With a sample index of 10M items this change reduced the memory size by several GB.", "target": 1}
{"idx": 3232, "commit_message": "Merge pull request #5 from reducedb/master optimized cp(), 2x performance increase for large files", "target": 1}
{"idx": 508, "commit_message": "improved loggin and data emitting", "target": 0}
{"idx": 799, "commit_message": "cleanup mobile styles, matching style improvement, some ajax fixes", "target": 0}
{"idx": 1529, "commit_message": "Reorganized the Index and Results page scripts to take better advantage of typescript. The file initialize.ts contains some extensions to the built in types. Improved the reliability of checking if a member exists by using an explicit has own property check. Documented javascript and typescript code. I need to make the generated javascript file initialize.js load on all of the pages.", "target": 0}
{"idx": 3343, "commit_message": "Update conversation style. 1) No more blue/green for outgoing messages. Just lock or no lock. 2) Use 9-patches instead of shapes for better bubble performance. 3) Use tinting rather than different colored assets. 4) Change outgoing status indicators so that they don't change width of the bubble as they update. 5) Switch to using ..., check, double-check for pending, sent, delivered rather than using bubble tone to indicate pending. // FREEBIE", "target": 1}
{"idx": 4009, "commit_message": "[PATCH] Reworked function 'get_best_weight()' of Slivers_exuder.h\n\n- Avoid computing incident cells multiple times\n- Make it more efficient for P3M3 by not having to use\n tr.min_squared_distance() to compute the distance\n between neighboring vertices", "target": 1}
{"idx": 2980, "commit_message": "ALSA: hda-intel: Add position_fix quirk for ASUS M2V-MX SE. With PulseAudio and an application accessing an input device like `gnome-volume-manager` both have high CPU load as reported in [1]. Loading `snd-hda-intel` with `position_fix=1` fixes this issue. Therefore add a quirk for ASUS M2V-MX SE. The only downside is, when now exiting for example MPlayer when it is playing an audio file a high pitched sound is outputted by the speaker. $ lspci -vvnn | grep -A10 Audio 20:01.0 Audio device [0403]: VIA Technologies, Inc. VT1708/A [Azalia HDAC] (VIA High Definition Audio Controller) [1106:3288] (rev 10) Subsystem: ASUSTeK Computer Inc. Device [1043:8290] Control: I/O- Mem+ BusMaster+ SpecCycle- MemWINV- VGASnoop- ParErr- Stepping- SERR- FastB2B- DisINTx- Status: Cap+ 66MHz- UDF- FastB2B- ParErr- DEVSEL=fast >TAbort- <TAbort- <MAbort- >SERR- <PERR- INTx- Latency: 0, Cache Line Size: 64 bytes Interrupt: pin A routed to IRQ 17 Region 0: Memory at fbffc000 (64-bit, non-prefetchable) [size=16K] Capabilities: <access denied> Kernel driver in use: HDA Intel [1] [URL]/mailarchive/forum.php?thread_name=1265550675.4642.24.camel%40mattotaupa&forum_name=alsa-user", "target": 1}
{"idx": 482, "commit_message": "Improve display of some quoting", "target": 0}
{"idx": 3231, "commit_message": "MAGETWO-55729: [Customer] Optimize performance for bundled products with lots of product options - FixedBundleTests refactoring", "target": 1}
{"idx": 3431, "commit_message": "Performance improvement using in-operator on dicts Just a small cleanup for the existing occurrences. Using the in-operator for hash lookups is faster than using .keys() [URL]/questions/29314269/why-do-key-in-dict-and-key-in-dict-keys-have-the-same-output", "target": 1}
{"idx": 1561, "commit_message": "There is an issue in GPUImageMovie - when instance adds itself as AVPlayerItemVideoOutput delegate: [playerItemOutput setDelegate:self queue:videoProcessingQueue], playerItemOutput doesn't release its delegate when GPUImageMovie instance is deallocating. Later, this results in a method call from the deallocated object (outputSequenceWasFlushed:) and you get the crash. This was found with help of NSZombie detector and I fixed it by adding this in GPUImageMovie dealloc method: [playerItemOutput setDelegate:nil queue:nil];", "target": 0}