Dataset schema:
id: string (length 4 to 10)
text: string (length 4 to 2.14M)
source: string (2 classes)
created: timestamp[s] (2001-05-16 21:05:09 to 2025-01-01 03:38:30)
added: string date (2025-04-01 04:05:38 to 2025-04-01 07:14:06)
metadata: dict

830927763
questions about routinator syncing certificates

Hello, I installed Routinator on my Ubuntu system and am trying to use it to synchronize RPKI repositories. I found that it takes about 10 minutes to synchronize all of the RPKI repositories and verify the certificates. Is 10 minutes normal? I'd like to know how long this software takes to synchronize all RPKI repositories and to validate all certificates, respectively. Does the software traverse all repositories during an update cycle? Does it traverse all of the RPKI certificates, from the five RIRs' root certificates (the default trust anchors) down to the leaf certificates, when synchronizing the RPKI repositories?

Typically the first validation run will take between 10 and 20 minutes, so what you're seeing is quite normal. In subsequent runs only the deltas are processed, so that is much quicker, typically around a minute or two. Relying party software connects to the five RIR trust anchors and traverses the entire RPKI tree, which currently consists of about 25 individual repositories. That means fetching data is the slowest part of the validation process. You can learn more about this on rpki.readthedocs.io or discuss experiences on our mailing list.
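A minimal sketch (not from the thread) for measuring a validation run yourself: it times a one-shot `routinator vrps` invocation from Python and counts the resulting VRPs. It assumes `routinator` is on PATH and already initialised with the default trust anchors; exact flags and output format can vary between versions.

```python
# Time a single routinator validation run and count the emitted VRPs.
import subprocess
import time

start = time.monotonic()
# `routinator vrps` runs a validation pass and prints the resulting VRPs (CSV) to stdout.
result = subprocess.run(
    ["routinator", "vrps"],
    capture_output=True,
    text=True,
    check=True,
)
elapsed = time.monotonic() - start

# Subtract the CSV header line when counting payloads.
vrp_count = max(len(result.stdout.splitlines()) - 1, 0)
print(f"Validation run finished in {elapsed:.1f} s, {vrp_count} VRPs")
```

Running this twice back to back should reproduce the pattern described above: a long first run while all repositories are fetched, then much shorter runs once only deltas are processed.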
gharchive/issue
2021-03-13T15:53:40
2025-04-01T06:37:16.834906
{ "authors": [ "AlexanderBand", "syysumuro" ], "repo": "NLnetLabs/routinator", "url": "https://github.com/NLnetLabs/routinator/issues/479", "license": "BSD-3-Clause", "license_type": "permissive", "license_source": "github-api" }
125880578
Event Log not creating custom folder

I have a requirement wherein we want our event logs to be written to a custom folder in Event Logs; however, when I specify a layout in the NLog.config file, it directs the logs to the Application folder. I looked into the source code, and it looks like this is how it is implemented. My exact requirement is to log events from my application in the manner below. Any pointers from anyone on how to achieve it? Thanks in advance. Uploaded image again... Tx

Your image is not displayed. Could you upload it again? Are you looking for something like this: http://stackoverflow.com/questions/5171155/creating-an-application-and-event-logs-in-the-applications-and-services-logs-s

Yes, something like this, but I want to write logs to it using the NLog logger and want to set it up using the NLog config file.

You need to specify source and log in the configuration. The Event Log Target wiki page can guide you through this. To find out which values are required, you can look at the MSDN page, because the source and log values are the same things defined on the MSDN page. Please try it and let us know. If it does not work for you, I will take a look again.

Yup, using the guidelines specified on the wiki page I am not able to get the folder I mentioned in the first post; also, when I specify the layout of the message in the NLog config file, it ignores the value given in the "log" configuration and directs the output to the Application folder in the event log. This is the config file value I have set:

Could you post your full NLog config here?

<!-- add your targets here See https://github.com/nlog/NLog/wiki/Targets for possible targets. See https://github.com/nlog/NLog/wiki/Layout-Renderers for the possible layout renderers. -->
<!--Writing events to the a file with the date in the filename.-->
<!--size 100 kb and max archive files can be 2-->
<target name="eventlog" xsi:type="EventLog" machineName="." source="mysource1" log="mylog1" layout="${longdate} | ${guid} |${logger} | ${level} | ${message} | ${exception:format=ToString}"/>
<!--Write all events with minimal level of Debug (So Debug, Info, Warn, Error and Fatal, but not Trace) to "f"-->
<logger name="*" minlevel="Debug" writeTo="eventlog" />

@UgurAldanmaz are you looking at this? bump @UgurAldanmaz

@dev-4488 I guess this is still an issue?

Yes it is; any pointers on how to fix it or when it will be fixed?

I will look at this, but I need some more info to understand the issue. As far as I understand, you would like to write to "mylog1/operation" (log name), is that correct? So if you set
<target name="eventlog" xsi:type="EventLog" machineName="." source="mysource1" log="mylog1/operation" layout="${longdate} | ${guid} |${logger} | ${level} | ${message} | ${exception:format=ToString}"/>
that won't work?

Yes, you are correct, because if you specify a Layout for an event log target, it starts putting the logged event information in the default folder, i.e. Application.

I'm not sure if I fully understand you. Would you like to use a Layout for the log name? e.g. <target log="${test}" >

Nope, I am using the XML below to write to event logs:
<target name="eventlog" xsi:type="EventLog" machineName="." source="mysource1" log="mylog1" layout="${longdate} | ${guid} |${logger} | ${level} | ${message} | ${exception:format=ToString}"/>
However, the log information is going to the Application folder in Event Viewer rather than to the custom folder "mylog1".

Ah, so this is https://github.com/NLog/NLog/issues/950 ?

Yes.... :) Is there any solution for it?

Well, I can't reproduce it. But if you are willing to help, you can test whether you can write to the log with just some C# statements (without NLog). If that's working, we will move the working code into NLog. Closing this as it's a duplicate of #950.
gharchive/issue
2016-01-11T06:16:25
2025-04-01T06:37:16.849352
{ "authors": [ "304NotModified", "UgurAldanmaz", "dev-4488" ], "repo": "NLog/NLog", "url": "https://github.com/NLog/NLog/issues/1151", "license": "bsd-3-clause", "license_type": "permissive", "license_source": "bigquery" }
1828709869
Replaced MutableUnsafeAttribute with ThreadAgnosticImmutableAttribute

[ ] Waiting for NLog v5.3

I don't like the word "unsafe", and since the attribute is just a restriction of the original ThreadAgnosticAttribute, I think [ThreadAgnosticImmutable] is better (only ThreadAgnostic when the LogEvent state is immutable).

It could be interesting if the layout precalculate was a little smarter. Right now, if a JsonLayout includes ThreadId, then the entire JsonLayout must be precalculated by the application thread and cannot be deferred to the async background thread. It could be neat if JsonLayout Precalculate were to recognize that fewer than 5 SimpleLayouts need to perform thread-context capture; then, instead of rendering the entire JSON document on the application thread, it would "just" precalculate those SimpleLayout results. It could also be interesting if the JsonLayout was able to recognize whether it should precalculate when it detects that it must IncludeEventProperties = true, but only if the LogEventInfo properties are "volatile" / "not immutable". This should be combined with the coming optimization where NLog.Extensions.Logging doesn't provide the format Parameters array, but only the LogEvent properties collection.
gharchive/pull-request
2023-07-31T08:53:08
2025-04-01T06:37:16.852210
{ "authors": [ "snakefoot" ], "repo": "NLog/NLog", "url": "https://github.com/NLog/NLog/pull/5297", "license": "bsd-3-clause", "license_type": "permissive", "license_source": "bigquery" }
1702356351
[5pt] Stage-Based CatFIM bug where the branch 0 dem_adj_elevation is used

A bug is occurring in Stage-Based CatFIM where the wrong dem_adj_elevation value is being used. The goal is to use the value from the branch that is not branch 0, but there is a flaw in the logic. Related to Vlab 115987.

Current behavior: Overinundating some reaches, such as LDYN6.
Expected behavior: Correct inundation.

The bug occurs in generate_categorical_fim.py on line 331. The wrong dem_adj_elevation value is being used because of a flaw in the pandas logic. I addressed the bug for site LDYN6. Screenshot below. Will now do a full run.
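A hypothetical pandas sketch of the intended selection logic described above: prefer the dem_adj_elevation from a non-zero branch and fall back to branch 0 only when nothing else exists. The column names ("branch_id", "dem_adj_elevation") and table layout are assumptions for illustration and may differ from the actual attributes table used in generate_categorical_fim.py.

```python
import pandas as pd

def pick_dem_adj_elevation(site_rows: pd.DataFrame) -> float:
    # Keep rows from branches other than branch 0, if any exist.
    non_zero = site_rows[site_rows["branch_id"] != 0]
    chosen = non_zero if not non_zero.empty else site_rows
    return float(chosen["dem_adj_elevation"].iloc[0])

# Example: two branches for one site; the non-zero branch value should win.
rows = pd.DataFrame(
    {"branch_id": [0, 2421000003], "dem_adj_elevation": [210.7, 214.3]}
)
print(pick_dem_adj_elevation(rows))  # 214.3, taken from the non-zero branch
```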
gharchive/issue
2023-05-09T16:23:12
2025-04-01T06:37:17.173506
{ "authors": [ "BradfordBates-NOAA" ], "repo": "NOAA-OWP/inundation-mapping", "url": "https://github.com/NOAA-OWP/inundation-mapping/issues/897", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
935794810
Add ADCP SCAL and EIN

EIN could be combined with VEL in many cases; what about if a depth has been QC'd out of one? SCAL can be added to all pretty easily, but as a separate (tabular) dataset. There is already a "table" routine that reads all .nc files in the directory and puts them in a tabular format; however, it grabs all depths from the gridded data too, which is unnecessary. Modify it to get just SCAL. New CF-flavored netCDF files starting in 2020 will have this.
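A rough sketch (not from the repository) of pulling only the scalar/SCAL-style variables out of a directory of ADCP .nc files into one table, while skipping the depth-gridded velocity data. The directory path and the dimension name "depth" are assumptions and may differ from the EcoFOCI file conventions.

```python
from pathlib import Path
import pandas as pd
import xarray as xr

frames = []
for nc_file in sorted(Path("./adcp_data").glob("*.nc")):
    ds = xr.open_dataset(nc_file)
    # Keep only variables that are not on the depth grid.
    scalar_vars = [name for name, var in ds.data_vars.items() if "depth" not in var.dims]
    if scalar_vars:
        df = ds[scalar_vars].to_dataframe().reset_index()
        df["source_file"] = nc_file.name
        frames.append(df)
    ds.close()

table = pd.concat(frames, ignore_index=True) if frames else pd.DataFrame()
print(table.head())
```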
gharchive/issue
2021-07-02T13:56:10
2025-04-01T06:37:17.175736
{ "authors": [ "shaunwbell" ], "repo": "NOAA-PMEL/EcoFOCI_AutoAnalysis", "url": "https://github.com/NOAA-PMEL/EcoFOCI_AutoAnalysis/issues/108", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
225985145
Event text going outside of the box

https://github.com/NPBruce/valkyrie-store/issues/1

X
gharchive/issue
2017-05-03T13:26:16
2025-04-01T06:37:17.183394
{ "authors": [ "MrCraigen", "NPBruce" ], "repo": "NPBruce/valkyrie", "url": "https://github.com/NPBruce/valkyrie/issues/378", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
1111512504
Table row shading with wxWidgets 3.1.5 Mentioned in https://github.com/NREL/wex/pull/135 and SAM #883 windows-display-issues-from-paul.pdf Fixed with wex pull requests 138 and 139 https://github.com/NREL/wex/pull/138 https://github.com/NREL/wex/pull/139
gharchive/issue
2022-01-22T12:14:49
2025-04-01T06:37:17.239787
{ "authors": [ "sjanzou" ], "repo": "NREL/SAM", "url": "https://github.com/NREL/SAM/issues/896", "license": "BSD-3-Clause", "license_type": "permissive", "license_source": "github-api" }
1290280919
Update data JSON from Haystack v3 to Haystack v4

Update the input data configuration JSON to Haystack version 4. Affects:
- Data intake Python code (maybe? TBD)
- tests/data/README.md (describes format)
- tests/data/Synthetic Site/Synthetic Site Config.json

Updating all SkySpark-related issues to high priority
gharchive/issue
2022-06-30T15:12:47
2025-04-01T06:37:17.242475
{ "authors": [ "stephen-frank" ], "repo": "NREL/Wattile", "url": "https://github.com/NREL/Wattile/issues/107", "license": "BSD-3-Clause", "license_type": "permissive", "license_source": "github-api" }
2017595187
Feature/update reqs and add simple test

This pull request adds/improves some software components ahead of the v1 changes. It addresses Issue #34 and Issue #31. Changes include:
- Removing unused requirements
- Adding semantic versioning using a compatible-release specifier
- Adding a first pytest, which just declares an empty py-sim but tests that everything imports (see the sketch after this list)
- Adding a continuous integration workflow for automatic testing
- Removing the unused requirements.txt
- Adding VS Code items to .gitignore

Pulling to main to put the CI into effect; can back-merge to develop after. Pull request closes #34 and closes #31.

I ran the tests locally (and I think this pull request ran the tests too), but yeah, might also be different after merging...
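A minimal sketch of the kind of import smoke test the PR describes. The package name "hercules" and the idea of a top-level import are assumptions; the actual test added by the PR may be structured differently.

```python
import importlib

def test_package_imports():
    # Importing the top-level package exercises every module-level import,
    # which is enough to catch missing or broken requirements in CI.
    module = importlib.import_module("hercules")
    assert module is not None
```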
gharchive/pull-request
2023-11-29T23:15:21
2025-04-01T06:37:17.254200
{ "authors": [ "paulf81" ], "repo": "NREL/hercules", "url": "https://github.com/NREL/hercules/pull/36", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
247476866
Add an est_time plan tool

eta_time(scan([det], motor, 1, 10, 10)) would be cool. This would be a simulator that uses whatever it can find out about the hardware to estimate how long a plan would take to run and, say, return a number of seconds. It would not touch any hardware.

This is an old one, but I want to add my encouragement. This would be really useful for us. It would of course probably assume a fixed time for motor moves, which won't be so accurate, but it is a great start.

Can you help us brainstorm what we would need to know from the hardware, to first order? In my mind, a "zeroth-order solution" would make blunt assumptions like "All motors take 1 second to move, period." I imagine we would need:
- movable hardware to report an ETA to a given position, like motor.eta(5) or temp.eta(273)
- detectors to consistently expose their exposure time through a consistently-named attribute, and (better yet) give us some estimate of the overhead time (triggering time minus acquiring time)

Maybe we should develop software loops for empirically measuring these things and storing the relevant data in a cached file somewhere.

I agree getting an accurate measure of the time taken for the motors is difficult. For the detectors, almost all have an 'acquisition period' variable, and this could be mapped to a consistently named attribute in the setup file; if the variable is not present then an estimate could be hard-wired. For the motors, a standard motor record from a Delta Tau already has a 'motor velocity' attribute (in units/s), which can be used to give a time estimate for a move of x units. Non-motor-record motors may need a hard-wired estimate, like the detectors without an acquisition time in the setup.

Just thought I'd add, I am happy to discuss in person if it helps, just throw something on my calendar.

We can't prioritize this until after our "1.0" deadline (a week from Wednesday), but I think we can draft something usable not long after that.

So I will wait to progress much on this until Dan is back. But this morning I did some testing at SIX in order to see if we could get an accurate ETA for a 'move' and for a 'count'. I think it is possible, but a little more work needs to be done.

For 'move': using motor.velocity and motor.settle_time, along with the distance to move, runs into an issue in that there is an undefined 'settle time' built into the motion that depends on the travel distance. If I determine this extra 'settle' time empirically and add it to the ETA, then for all moves of this distance I get an ETA to within 1% (longer moves have reduced error). I tested with several travel distances (with different extra settle times) and about 10 motors (each with their own extra settle time values), and it does seem accurate to within 1%. This extra settle time is clearly observed in CSS (watching the move indicator), so it is not associated with any bluesky/ophyd overheads. I will talk to John Sinsheimer and see if this extra settle time can be determined a priori or not.

For 'count': using det.acquire_time gives a pretty poor indication in itself (~10%). The remaining error is overhead from count (a fixed value independent of the number of points, ~3.6 s, detector independent) and from the individual reads (a fixed value depending on the number of counts, ~0.275 s, detector independent). The first is probably associated with the stage and unstage, while the second is associated with the time lag between asking for a read and processing the LivePlot/LiveTable update. I can see from the CSS page that these values are not related to the detector acquisition time, and they do not vary with acquisition time. My thought is that we could get more accurate over time with these values by calculating, and storing, these time lags after each stop document (or similar). What are people's thoughts?

I have an update on the motor issue above regarding the extra settle time that is related to the distance moved. After speaking with John Sinsheimer, it appears this was a setup mistake in the Delta Tau. After John fixed the issue I did more testing, and the settle time and velocities now do a good job of estimating the time (<1% accuracy). There are a few outliers, where the PV velocity differs from the measured velocity, resulting in 1-50% time errors, but these are brought under control by using the measured velocity. The final outlier case is the virtual motor axes, where it is impossible, without knowing the motion equations between real and virtual motions, to accurately estimate the time to move. So I think the solution here is to take the PV velocity as a starting point, but to measure and average, after each move of an axis, a velocity which will make the estimating tool more accurate as time goes on.

So a quick update on this: the summarize_plan analogue is written and undergoing testing. We are working on the lower-level stuff. A preliminary set of code to do this is presented in PR #1048 and NSLS-II/ophyd PR #567.

We plan to do this, but the feature is too complex to be designed via GitHub Issues. We will write up a Bluesky Enhancement Proposal to lay out the planned design.
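A back-of-the-envelope sketch of the estimate discussed in this thread, using only the quantities named above: motor velocity and settle time for moves, detector acquire_time for counts, plus the empirically measured per-run (~3.6 s) and per-point (~0.275 s) overheads. It is plain arithmetic rather than a plan simulator, and the attribute names and overhead values are taken from the discussion, not from any shipped bluesky API.

```python
def estimate_scan_time(
    travel_distance: float,        # total distance the motor covers (units)
    num_points: int,
    velocity: float,               # motor.velocity (units/s)
    settle_time: float,            # motor.settle_time (s), applied per move
    acquire_time: float,           # det.acquire_time (s) per point
    run_overhead: float = 3.6,     # stage/unstage etc., measured per run
    point_overhead: float = 0.275, # read + LiveTable/LivePlot update per point
) -> float:
    step = travel_distance / max(num_points - 1, 1)
    move_time = num_points * (step / velocity + settle_time)
    count_time = num_points * (acquire_time + point_overhead)
    return run_overhead + move_time + count_time

# Example: scan([det], motor, 1, 10, 10) with a 1 unit/s motor and 0.5 s exposures.
print(f"estimated run time: {estimate_scan_time(9.0, 10, 1.0, 0.1, 0.5):.1f} s")
```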
gharchive/issue
2017-08-02T18:14:32
2025-04-01T06:37:17.277275
{ "authors": [ "awalter-bnl", "danielballan" ], "repo": "NSLS-II/bluesky", "url": "https://github.com/NSLS-II/bluesky/issues/713", "license": "BSD-3-Clause", "license_type": "permissive", "license_source": "github-api" }
204684461
Data storage server down

This is an urgent issue. The GPFS server is down. The Bluesky scan engine couldn't access the storage directory; we cannot run any scans now.

Petkus is aware of the problem and is working on it. -matt

Can you access the GPFS server via any other method on the workstation? Can your IOCs see the GPFS? My guess is not, as nothing at the Python layer knows about GPFS; it is simply a path in the file system. I also do not think that the RE directly writes to the GPFS; the detector IOCs should be doing that themselves. You should contact someone from Rob's group about this.

The server is back online now. Thanks a lot.
gharchive/issue
2017-02-01T19:43:25
2025-04-01T06:37:17.281808
{ "authors": [ "cowanml", "tacaswell", "xiaojinghuang" ], "repo": "NSLS-II/ophyd", "url": "https://github.com/NSLS-II/ophyd/issues/382", "license": "BSD-3-Clause", "license_type": "permissive", "license_source": "github-api" }
1570287018
Different results when I do inference on the ONNX file using the TRT engine vs. inference using the .pth file

I am able to train and validate using a custom dataset (a bunch of .npy files and corresponding annotations). I am using the PointPillars model. The results look accurate on the point cloud when visualizing with demo.py (visualization using the .pth).

For my eval example 000000.npy, I get the following results using demo.py:
pred_dicts[0]['pred_scores'] tensor([0.8475, 0.4954, 0.4725, 0.4603], device='cuda:0')
pred_dicts[0]['pred_boxes'] tensor([[ 9.6669, 1.1732, 2.1426, 0.2856, 0.5018, 3.1349, 6.2874], [ 9.8581, -10.6740, 2.0632, 0.4447, 0.4504, 2.5857, 6.2749], [ 24.9824, -10.4977, 3.1227, 0.2673, 0.4696, 3.1857, 6.2983], [ 24.8274, 1.3483, 2.7095, 0.2326, 0.4953, 3.1487, 6.3119]], device='cuda:0')

However, when I do inference using the generated ONNX file with the TRT engine, I get the following:
50.8117 -6.56379 1.9568 0.256044 0.501621 2.79036 6.28261 0 0.860124
48.4368 -14.8604 2.06768 0.442814 0.450666 2.58379 6.27543 0 0.499151
42.9857 -14.6852 3.1227 0.267265 0.469636 3.18565 6.29831 0 0.472531
45.4026 -6.38253 2.70948 0.232621 0.495276 3.14874 6.31186 0 0.460299

After I analyzed the numbers, I see the confidence numbers are almost the same. The box dimensions are also the same across both, and even the z dimensions match. However, I see a huge disparity in the x and y coordinates. Can someone please help me out? I have attached another comparison as well.

Hey, can someone please help me out with this? I am using .npy files for TRT inference instead of .bin files; not sure if that is causing an issue. When I compared the .npy data being loaded in Python vs. the data loaded in main.cpp, I see 32 additional bytes when main.cpp loads the same .npy file. I accounted for this, but that did not fix my issue, unfortunately! @byte-deve, I would really appreciate your input here.
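One way to side-step the .npy header question entirely is to convert the evaluation point cloud to the raw float32 .bin layout that the KITTI-style C++ pipeline expects, instead of parsing the .npy header in C++. A small sketch (not from the repository) is below; it assumes the array is already N x 4 (x, y, z, intensity) float32, which may not match your custom dataset.

```python
import numpy as np

points = np.load("000000.npy")
assert points.ndim == 2 and points.shape[1] == 4, points.shape

payload = points.astype(np.float32)
payload.tofile("000000.bin")  # raw values only, no header bytes
print(f"wrote {payload.shape[0]} points, {payload.nbytes} bytes of payload")
```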
gharchive/issue
2023-02-03T18:42:57
2025-04-01T06:37:17.308378
{ "authors": [ "Allamrahul" ], "repo": "NVIDIA-AI-IOT/CUDA-PointPillars", "url": "https://github.com/NVIDIA-AI-IOT/CUDA-PointPillars/issues/82", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
1349514232
[Task] Run Merlin notebook tests from Systems repo

Description
When we make changes in systems we will want to run the example notebooks in the Merlin repo using the current branch of systems. This will ensure that any changes to systems will not break the end-to-end demos/integration tests, which has been an issue lately and came up in our retrospective.

The process will be:
1. Create a python venv using tox
2. Check out the main branch of Merlin and install
3. Check out the PR branch of systems and install
4. Run Merlin integration tests (notebooks)

Since the notebooks include making inference on Triton, this will have to run on the Jenkins machine, not in the CPU-only GitHub Actions test.

First PR towards this was getting tox working on Jenkins: https://github.com/NVIDIA-Merlin/systems/pull/180
Finished here: https://github.com/NVIDIA-Merlin/systems/pull/183
gharchive/issue
2022-08-24T14:11:48
2025-04-01T06:37:17.323075
{ "authors": [ "nv-alaiacano" ], "repo": "NVIDIA-Merlin/Merlin", "url": "https://github.com/NVIDIA-Merlin/Merlin/issues/557", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
1012590309
[Task] Add TensorFlow version for getting-started and end-to-end example notebooks

Description
[x] TensorFlow support and example notebooks were asked for by several people at RecSys'21. Need to add example notebooks for that.
[x] Test whether we can currently serve a session-based model trained with TensorFlow on Triton.

Closing with https://github.com/NVIDIA-Merlin/Transformers4Rec/pull/341
gharchive/issue
2021-09-30T20:25:14
2025-04-01T06:37:17.324920
{ "authors": [ "rnyak" ], "repo": "NVIDIA-Merlin/Transformers4Rec", "url": "https://github.com/NVIDIA-Merlin/Transformers4Rec/issues/258", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
67146476
caffe BGR issue

Hey, after I trained the model successfully, I tried to test one image, and I found it does not do the channel swap. So I am a little confused. Here is my understanding: inside Caffe, if you let Caffe do the decoding of the image, it will always be in BGR format (via OpenCV). And I did not see any code inside DIGITS which converts loaded images into BGR mode. Therefore:
- DIGITS creates the db -----> with encoded images
- Caffe trains on the db ------> in BGR format
- DIGITS tests one image -------> in RGB format

So the final result won't work as expected. Did I miss anything?

Rats. I intentionally avoided swapping channels because it's annoying and because I want the datasets to be consumable by different deep-learning frameworks which don't require the channel swap. It was all working fine until I added the JPEG compression. I think the right thing to do is to do the channel swap at inference time only if the dataset was encoded, and chalk this up to a Caffe "bug". Thanks very much for bringing this to my attention, @dxj19831029.
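For reference, the channel swap in question is a one-line array operation: an image decoded by OpenCV arrives as BGR and needs reversing along the last axis before being handed to code that assumes RGB (or vice versa). A short, generic sketch:

```python
import numpy as np

def bgr_to_rgb(image: np.ndarray) -> np.ndarray:
    # Works for HxWx3 arrays; the same slice converts RGB back to BGR.
    return image[..., ::-1].copy()

bgr = np.zeros((2, 2, 3), dtype=np.uint8)
bgr[..., 0] = 255  # pure blue in BGR order
rgb = bgr_to_rgb(bgr)
assert rgb[0, 0].tolist() == [0, 0, 255]  # blue now sits in the last channel
```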
gharchive/issue
2015-04-08T14:45:10
2025-04-01T06:37:17.341111
{ "authors": [ "dxj19831029", "lukeyeager" ], "repo": "NVIDIA/DIGITS", "url": "https://github.com/NVIDIA/DIGITS/issues/52", "license": "BSD-3-Clause", "license_type": "permissive", "license_source": "github-api" }
151679431
About manual pre-processing.

I trained my network in DIGITS and deployed it in my C++ framework with the DIGITS caffemodel, and the result is 1.x% worse than the DIGITS result. I think the reason is in the image preprocessing step. I use grayscale images and resize them with cv::resize() (here, the default option is INTER_LINEAR, while DIGITS uses image = scipy.misc.imresize(image, (height, width), 'bilinear'); is that different?). I use mean-image subtraction, so I load mean.binaryproto, convert the resized image to CV_32FC1, and subtract them. I think the problem may be in the resize step. And if that is the problem, I want to know how I can implement a C++ classifier with a DIGITS caffemodel.

Sorry for the slow response! If you're using a different resizing method then the model performance will almost certainly not be "worse" - you'll just get different results for the same images. It sounds like you already have a C++ classifier with a DIGITS caffemodel - you don't really need to match DIGITS's output in order to have a working system. However, it's nice to be able to verify that you're not doing anything wrong. So let's see if we can figure out your problem.

From OpenCV's documentation: "INTER_LINEAR - a bilinear interpolation (used by default)". That sounds like it should be the same as scipy's "bilinear" to me, so I doubt the resizing algorithm is your problem.

"And resized image convert to CV_32FC1." My guess is that you are converting your image to BGR, while the mean image is in RGB (or vice versa). Try using the classification example under /examples/classification/ and see if you can reproduce the results there. If so, check these functions to see how DIGITS resizes images and how Caffe's caffe.io.Transformer gets set up.
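A quick sketch (not from DIGITS) for checking the preprocessing on the Python side before porting it to C++: resize a grayscale image bilinearly with OpenCV, load the mean from mean.binaryproto via Caffe's standard protobuf helpers, and subtract after matching shapes and dtypes. File names and the single-channel assumption are placeholders for your own setup.

```python
import cv2
import numpy as np
import caffe
from caffe.proto import caffe_pb2

# Mean image exported by DIGITS.
blob = caffe_pb2.BlobProto()
with open("mean.binaryproto", "rb") as f:
    blob.ParseFromString(f.read())
mean = caffe.io.blobproto_to_array(blob)[0]  # shape (C, H, W)

image = cv2.imread("sample.png", cv2.IMREAD_GRAYSCALE)
resized = cv2.resize(
    image, (mean.shape[2], mean.shape[1]),  # cv2 wants (width, height)
    interpolation=cv2.INTER_LINEAR,
).astype(np.float32)

# For a single-channel net the mean is (1, H, W); subtract the matching plane.
preprocessed = resized - mean[0]
print(preprocessed.shape, preprocessed.dtype)
```

Comparing this array against what your C++ code produces (same image, element-wise difference) should tell you whether the resize, the mean, or the channel order is the source of the discrepancy.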
gharchive/issue
2016-04-28T15:55:53
2025-04-01T06:37:17.345693
{ "authors": [ "lukeyeager", "oanoelsis" ], "repo": "NVIDIA/DIGITS", "url": "https://github.com/NVIDIA/DIGITS/issues/715", "license": "BSD-3-Clause", "license_type": "permissive", "license_source": "github-api" }
1465572109
question about data files for the xlnet example

Where could I download data files for running the example? Thanks.

int main(int argc, char** argv)
{
    string input_name = "./data/data.npz";
    string model_name = "./data/model.npz";
    string check_name = "./data/output.npz";
    if (argc != 11) {
        printf("[ERROR] ./bin/xlnet_correctness_example batch_size num_layers seq_len "
               "head_num size_per_head num_token input_name model_name check_name "
               "data_type 0: fp32, 1: fp16, 2: bf16\n");
        printf("e.g., ./bin/xlnet_correctness_example 8 12 128 12 64 32000 "
               "./data/data.npz ./data/model.npz ./data/output.npz 0\n");
        return 0;
    }

These files are prepared in these steps: https://github.com/NVIDIA/FasterTransformer/blob/main/docs/xlnet_guide.md#verify-the-correctness
gharchive/issue
2022-11-27T20:50:31
2025-04-01T06:37:17.347504
{ "authors": [ "byshiue", "zjin-lcf" ], "repo": "NVIDIA/FasterTransformer", "url": "https://github.com/NVIDIA/FasterTransformer/issues/374", "license": "apache-2.0", "license_type": "permissive", "license_source": "bigquery" }
1957600278
Very limited LoRA Functionality

So I noticed that we can add a single LoRA to the TRT model. Problems here are:
- It is only ever a single LoRA at any given time, and you have to build a new TRT model for each.
- The implemented LoRA always seems to be at full (1.0) strength.

This seems to be a major downside currently, as it heavily limits what I can do with any given model. As long as I am just using baseline SD checkpoints this is no problem, but as soon as I introduce a LoRA to the generation process, this becomes messy and, depending on the LoRA, unusable. As soon as I want to have multiple LoRAs, it basically is impossible.

I'm working on improving this. Currently, we require an ONNX model containing the applied weights to refit the engine just in time, and in the short term I don't see a workaround for this. For now, there are two options:
1. Use the default LoRA embeddings and export the ONNX model JIT. This takes approx. 40 s for an SD 1.5 model.
2. Extend the current LoRA exporter to support multiple LoRAs and strength.

I understood the second option, but not the first. What do you mean by "default LoRA embeddings"? By default a checkpoint has no embedding, so what specifically did you mean by default, and how does that relate to variable LoRA strength configuration when fusing the checkpoint and LoRA weights?

As a side note, fantastic work on this extension Luca, much appreciated! Your side project has led to a sale of 2 x 4090 cards and a 3080 already, so I hope your employer allows you some paid time to work on this project :)
Does this mean that only the base model (checkpoint) needs to be TRT compiled, and that you can use the lora ranks and weights as-is without creating TRT variants of them? Yesn't - Long explanation: The engine export consists of two steps: ONNX export TensorRT engine compilation TensorRT requires ONNX as an intermediate representation to lower the graphs IR. Therfore, we cannot leverage torch checkpoints directly, but need to lower them through ONNX. This also applies to LoRA checkpoints. Therefore LoRA checkpoints still need to be exported. But this only requires step 1 (ONNX export) rather than the actual compilation. We'll leverage the compiled base models and apply LoRA then. Here is an end-to-end example assuming you start from scratch: I have SD 1.5 installed and two random LoRAs (BarbieCore, pk_trainer768_V1) I need to export the base model (SD1.5) to TensorRT (ONNX Export + Compile) I need to export my LoRA checkpoints (ONNX Export only) From here LoRA should work as (native) in the UI using prompts like: <lora:BarbieCore:0.7> <lora:pk_trainer768_V1:1> A pixel image of a man with a sword Barbiecore I hope this clarifies things, and isn't more confusing than before :D Here is a screenshot of how it looks like at inference. This also allows the SD Unet dropdown to be set to automatic. Limitations Applying the LoRA is pretty slow at the moment (~10s). But when using the same scales and loras this is being cached. There is probably still a ton of bugs LoRA TensorRT format is not as filesize efficient I pushed my PoC to lora_v2 in case you dare to test it. I pushed my PoC to lora_v2 in case you dare to test it. I did, and it worked flawlessly. My hat's off to you. Awesome job! Applying the LoRA is pretty slow at the moment (~10s). But when using the same scales and loras this is being cached. Yes, but unless someone rotates a bunch of different LoRAs in and out constantly in the XYZ plot, it shouldn't be a problem. The initial load of a new lora took about what you describe, but once loaded, I could tweak the weights without that initial pause. Started rendering instantly. The main use case for varying weights a lot is likely the XYZ plot, and I'm happy to report that it iterates over different weights in the same lora(s) at full throttle. I'm still a bit perplexed that there was no discernable performance impact (regression), since only the base checkpoint is TRT compiled, and the LoRA ranks and weights seem not to (unless you do on-the-fly compilation during loading at inference time). Haven't had time to look at the code change yet. For anyone else wanting to use this: Go into the extension directory and check out the lora_v2 branch. Delete any fused checkpoint + lora tensorrt and ONNX files that you may have created earlier for the combinations. You can keep the ONNX and TRT models for the base checkpoint itself, since the code change doesn't seem to affect those. Apply the LoRAs as usual with SD-webUI, and enjoy the 2x rendering speedup. I am happy to hear it works on a machine other than mine :D I also tested LyCORIS, and for me, it seemed to have worked fine. One general disclaimer: This is a work in progress, and there might be breaking changes before it finds its way into the main branch. On the fly Lora... IT WORKS!!! With my limited testing on 3 different loras, here is my feedback: The loras have small amount of effect on the generated images. That is, the loras seems to work, but they are not as effective as when i use them without TensorRT. 
Here is an example: link I do not know if there is an issue from my side, cause I do get an error when trying to generate images with TensorRT + Lora. Here is the error I get: Apllying LoRAs: ['lora:TheRockV3:1']███████████████████████████████████████████████████| 30/30 [00:04<00:00, 7.89it/s] *** Error running process: C:\stable-diffusion-webui\extensions\Stable-Diffusion-WebUI-TensorRT\scripts\trt.py Traceback (most recent call last): File "C:\stable-diffusion-webui\modules\scripts.py", line 623, in process script.process(p, *script_args) File "C:\stable-diffusion-webui\extensions\Stable-Diffusion-WebUI-TensorRT\scripts\trt.py", line 191, in process self.get_loras(p) File "C:\stable-diffusion-webui\extensions\Stable-Diffusion-WebUI-TensorRT\scripts\trt.py", line 229, in get_loras modelmanager.available_models()[lora_name][0]["filepath"], KeyError: 'TheRockV3' --- Loading TensorRT engine: C:\stable-diffusion-webui\models\Unet-trt\realisticVisionV51_v51VAE_f4744654_cc86_sample=1x4x64x64+2x4x64x64+8x4x96x96-timesteps=1+2+8-encoder_hidden_states=1x77x768+2x77x768+8x154x768.trt Loaded Profile: 0 sample = [(1, 4, 64, 64), (2, 4, 64, 64), (8, 4, 96, 96)] timesteps = [(1,), (2,), (8,)] encoder_hidden_states = [(1, 77, 768), (2, 77, 768), (8, 154, 768)] latent = [(-1946122960), (-1946120097), (-1946121115)] 100%|██████████████████████████████████████████████████████████████████████████████████| 30/30 [00:03<00:00, 8.04it/s] Total progress: 100%|██████████████████████████████████████████████████████████████████| 30/30 [00:03<00:00, 7.95it/s] Total progress: 100%|██████████████████████████████████████████████████████████████████| 30/30 [00:03<00:00, 8.44it/s] +1, I've also managed to get it working. Amazing stuff, now I need only ControlNet to completely move over to TensorRT. I started with a fresh installation with only this extension and besides my incompetence, there were no issues. The resulting image is the exact same with and without TensorRt unet enabled. Do you think there would be major work needed before this can be merged? Go into the extension directory and check out the lora_v2 branch. I'm still rather new to anything github. How do I switch branches? Can't seem to get it to work. Hi, So I've been testing out your Lora_v2 branch and while inpainting I've been getting an error about a missing attribute as highlighted below. *** Error running before_process: /content/stable-diffusion-webui/extensions/Stable-Diffusion-WebUI-TensorR/scripts/trt.py Traceback (most recent call last): File "/content/stable-diffusion-webui/modules/scripts.py", line 615, in before_process script.before_process(p, *script_args) File "/content/stable-diffusion-webui/extensions/Stable-Diffusion-WebUI-TensorR/scripts/trt.py", line 129, in before_process if p.enable_hr: AttributeError: 'StableDiffusionProcessingImg2Img' object has no attribute 'enable_hr' I have seen this play out with a couple other extensions previously you may want to consider using something like getattr() to see if the attribute has a value and if it is not set return a default value. 
You could do somethinng like the following where you previously have accessed it: def before_process(self, p, *args): # 1 # Check divisibilty if p.width % 64 or p.height % 64: gr.Error("Target resolution must be divisible by 64 in both dimensions.") enable_hr = getattr(p, 'enable_hr', False) if enable_hr: hr_w = int(p.width * p.hr_scale) hr_h = int(p.height * p.hr_scale) if hr_w % 64 or hr_h % 64: gr.Error( "HIRES Fix resolution must be divisible by 64 in both dimensions. Please change the upscale factor or disable HIRES Fix." ) That should achieve the same results as you were intending. I pushed my PoC to lora_v2 in case you dare to test it. So to make sure, is lora_v2 basically dev branch? And do you have any tips to not need 30 2GB for every resolution/aspect ratio variable known to man? xD As currently i got 30 different ones as from the gist i got, a more dynamic res one will allow more different resolution, but not as fast. So to make sure, is lora_v2 basically dev branch? lora_v2 has commits on top of the dev branch which allows loading multiple loras and changing lora weights. You have to convert the lora to TensorRT but do not need to specify a resolution. And do you have any tips to not need 30 2GB for every resolution/aspect ratio variable known to man? xD As currently i got 30 different ones as from the gist i got, a more dynamic res one will allow more different resolution, but not as fast. I would create one dynamic if you use multiple resolutions and aspect ratios. Here's an example for SD 1.5 for generations from 512-768 (any aspect ratio) with hires fix up to 2x. I do not know what the optimal field does so I set it to match the max (if anyone has an explanation, please share). It still has a big speed improvement over not using TensorRT. How do I indicate on the TensorRT tab that I want the Lora to make an "ONNX Export only"? Moreover, what's the obvious thing to check if I've: updated to lora_v2 built TRT and ONNX for my preferred model at my preferred resolutions exported Loras from the TensorRT tab selected Automatic in the SD_unet selector in the UI added <lora:X> to the prompt and the Loras are not being applied at inference? Is it that I need to delete the onnx loras? I am getting an error when trying to convert a lora to trt: [W] 'colored' module is not installed, will not use colors when logging. To enable colors, please install the 'colored' module: python3 -m pip install colored [E] ONNX-Runtime is not installed, so constant folding may be suboptimal or not work at all. Consider installing ONNX-Runtime: I:\sd.webui\system\python\python.exe -m pip install onnxruntime [!] Module: 'onnxruntime.tools.symbolic_shape_infer' is required but could not be imported. Note: Error was: No module named 'onnxruntime' You can set POLYGRAPHY_AUTOINSTALL_DEPS=1 in your environment variables to allow Polygraphy to automatically install missing modules. [W] colored module is not installed, will not use colors when logging. To enable colors, please install the colored module: python3 -m pip install colored [W] Inference failed. You may want to try enabling partitioning to see better results. Note: Error was: No module named 'onnxruntime' [!] Module: 'onnxruntime.tools.symbolic_shape_infer' is required but could not be imported. Note: Error was: No module named 'onnxruntime' You can set POLYGRAPHY_AUTOINSTALL_DEPS=1 in your environment variables to allow Polygraphy to automatically install missing modules. [W] colored module is not installed, will not use colors when logging. 
To enable colors, please install the colored module: python3 -m pip install colored [W] Inference failed. You may want to try enabling partitioning to see better results. Note: Error was: No module named 'onnxruntime' [!] Module: 'onnxruntime.tools.symbolic_shape_infer' is required but could not be imported. Note: Error was: No module named 'onnxruntime' You can set POLYGRAPHY_AUTOINSTALL_DEPS=1 in your environment variables to allow Polygraphy to automatically install missing modules. Exported to ONNX. Traceback (most recent call last): File "I:\sd.webui\system\python\lib\site-packages\gradio\routes.py", line 488, in run_predict output = await app.get_blocks().process_api( File "I:\sd.webui\system\python\lib\site-packages\gradio\blocks.py", line 1431, in process_api result = await self.call_function( File "I:\sd.webui\system\python\lib\site-packages\gradio\blocks.py", line 1103, in call_function prediction = await anyio.to_thread.run_sync( File "I:\sd.webui\system\python\lib\site-packages\anyio\to_thread.py", line 33, in run_sync return await get_asynclib().run_sync_in_worker_thread( File "I:\sd.webui\system\python\lib\site-packages\anyio_backends_asyncio.py", line 877, in run_sync_in_worker_thread return await future File "I:\sd.webui\system\python\lib\site-packages\anyio_backends_asyncio.py", line 807, in run result = context.run(func, *args) File "I:\sd.webui\system\python\lib\site-packages\gradio\utils.py", line 707, in wrapper response = f(*args, **kwargs) File "I:\sd.webui\webui\extensions\Stable-Diffusion-WebUI-TensorRT\ui_trt.py", line 247, in export_lora_to_trt if len(available_trt_unet[base_name]) == 0: KeyError: 'revAnimated_v122' Just a quick question about this log, is installing ONNX-Runtime necessary, a good thing? to omit that error or have faster compile times? When I switched to the Lora_v2 branch and converted it to TensorRT, I encountered the following error while using it. Error running process: J:\x\stable-diffusion-webui\extensions\Stable-Diffusion-WebUI-TensorRT\scripts\trt.py Traceback (most recent call last): File "J:\x\stable-diffusion-webui\modules\scripts.py", line 710, in process script.process(p, *script_args) File "J:\x\stable-diffusion-webui\extensions\Stable-Diffusion-WebUI-TensorRT\scripts\trt.py", line 191, in process self.get_loras(p) File "J:\x\stable-diffusion-webui\extensions\Stable-Diffusion-WebUI-TensorRT\scripts\trt.py", line 238, in get_loras refit_dict = apply_loras(base_path, lora_pathes, lora_scales) File "J:\x\stable-diffusion-webui\extensions\Stable-Diffusion-WebUI-TensorRT\scripts\lora.py", line 32, in apply_loras add_to_map(refit_dict, name, n.outputs[0].values) AttributeError: 'Variable' object has no attribute 'values' When I switched to the Lora_v2 branch and converted it to TensorRT, I encountered the following error while using it. 
Error running process: J:\x\stable-diffusion-webui\extensions\Stable-Diffusion-WebUI-TensorRT\scripts\trt.py Traceback (most recent call last): File "J:\x\stable-diffusion-webui\modules\scripts.py", line 710, in process script.process(p, *script_args) File "J:\x\stable-diffusion-webui\extensions\Stable-Diffusion-WebUI-TensorRT\scripts\trt.py", line 191, in process self.get_loras(p) File "J:\x\stable-diffusion-webui\extensions\Stable-Diffusion-WebUI-TensorRT\scripts\trt.py", line 238, in get_loras refit_dict = apply_loras(base_path, lora_pathes, lora_scales) File "J:\x\stable-diffusion-webui\extensions\Stable-Diffusion-WebUI-TensorRT\scripts\lora.py", line 32, in apply_loras add_to_map(refit_dict, name, n.outputs[0].values) AttributeError: 'Variable' object has no attribute 'values' Hello everybody Do you know if it's normal that it's create 1 single TRT file per lora and per checkpoint ? The size of each trt file generated is 1,6 Go Inside the folder models\Unet-trt Do you know if I can do an optimization ? Because if I have 15 loras and 3 checkpoints. No problem to take time to generate it. But it will take too much space on the hard drive... Thank you getattr(p, 'enable_hr', False) Why the error will happen? I alse came to it. We have an wrong version for the extension? I'm working on improving this. Currently, we require an ONNX model containing the applied weights to refit the engine just in time. And in the short term, I don't see a workaround for this..... For now, there are two options: Use the default LoRA embeddings and export the ONNX model JIT. This takes approx. 40s for SD1.5 model. Extend the current LoRA exporter to support multiple LoRA and strength. I found an issue for the lora on switching engine which one lora based on A model onnx can't fit in B model because of mismatching shape channel. Any solution or advice about it? Hello everybody Do you know if it's normal that it's create 1 single TRT file per lora and per checkpoint ? The size of each trt file generated is 1,6 Go Inside the folder models\Unet-trt Do you know if I can do an optimization ? Because if I have 15 loras and 3 checkpoints. No problem to take time to generate it. But it will take too much space on the hard drive... Thank you perhaps it's normal. If you don't care about the time of inference, perhaps you can cal lora at runtime. tweak Hi guys! Would you mind to share the method to use one lora on different base model as pytorch?
gharchive/issue
2023-10-23T16:54:51
2025-04-01T06:37:17.411471
{ "authors": [ "CoolCuda", "DuckersMcQuack", "FerLuisxd", "Sniper199999", "ThisIsNetsu", "bigmover", "contentis", "devjo", "intoempty", "mort666", "qybing", "szokolai-mate", "worksforme" ], "repo": "NVIDIA/Stable-Diffusion-WebUI-TensorRT", "url": "https://github.com/NVIDIA/Stable-Diffusion-WebUI-TensorRT/issues/116", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1964854712
Support for Zephyr 7B model

Hi NVIDIA team, please add support for the Zephyr 7B model: https://huggingface.co/HuggingFaceH4/zephyr-7b-alpha

Thanks for your suggestion. Let me add it to the list of models that were requested and we will keep you posted. Juney

Hi @TheCodeWrangler, we have not tested Zephyr so cannot comment. If you do, definitely let us know!

I am hoping to understand what makes a new architecture work or not work with TensorRT-LLM when compiling .engine files.

TensorRT-LLM is invariant to hyper-parameters, but not to architectural changes. I.e., if you want to change the number of layers or heads, that's fine, but if you want to replace GQA with SWA or LayerNorm with RMSNorm, that requires code changes. You can check out the args of the models you're interested in to see which hyper-parameters are exposed.

I have heard that the architecture of Zephyr is very similar to Llama. Does TensorRT-LLM not work currently on Zephyr?

Have you tested it?

I have tested a trained Zephyr 7B model extensively today, with no success at all. I've used an ensemble repository and debugged to see the in- and outputs; however, the generated TensorRT engine doesn't seem to respond when fed the input tokens from the preprocessor/tokenizer. I have tested this with 2-way tensor parallelism + 2-way pipeline parallelism, 2-way tensor parallelism alone, and no parallelism at all, all running on a DGX. The engine does actually respond when querying it with the run.py script, which strikes me as odd, so I may have an issue with the actual model setup, and further investigation may be required. After another round of testing today, the same engine suddenly does seem to work, to the point where I can't replicate the empty response anymore... So I guess there is at least some level of compatibility?

As more and more new models enter the market, we have prepared comprehensive instructions for TRT-LLM developers on adapting to new models of interest. We encourage our community developers to expand the range of supported models, fostering an open ecosystem with rapid iterations. Please try following these instructions and let us know if you encounter any issues during the adaptation process. We greatly appreciate your dedication. @laikhtewari
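A small sketch using the standard Hugging Face transformers API for checking what architecture a checkpoint actually declares before trying to reuse an existing TensorRT-LLM builder for it; Zephyr should report a Mistral-style config, which is the quickest way to see which hyper-parameters differ from a plain Llama setup.

```python
from transformers import AutoConfig

config = AutoConfig.from_pretrained("HuggingFaceH4/zephyr-7b-alpha")
print(config.model_type)              # architecture family, e.g. "mistral"
print(config.num_hidden_layers,
      config.num_attention_heads,
      config.num_key_value_heads)     # hyper-parameters an existing builder can take as args
```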
gharchive/issue
2023-10-27T06:29:46
2025-04-01T06:37:17.419633
{ "authors": [ "AdamzNV", "KatarinaMah", "juney-nvidia", "ncomly-nvidia", "nv-guomingz", "rishabh279", "teis-e" ], "repo": "NVIDIA/TensorRT-LLM", "url": "https://github.com/NVIDIA/TensorRT-LLM/issues/157", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
2160523182
Incorrect error message when shape is not suitable for fp8 casting

Version: latest stable

import torch
import transformer_engine.pytorch as te
from transformer_engine.common import recipe

# Set dimensions.
in_features = 767
out_features = 3072
hidden_size = 2048

# Initialize model and inputs.
model = te.Linear(in_features, out_features, bias=True)
inp = torch.randn(hidden_size, in_features, device="cuda")

# Create an FP8 recipe. Note: All input args are optional.
fp8_recipe = recipe.DelayedScaling(margin=0, interval=1, fp8_format=recipe.Format.E4M3)

# Enable autocasting for the forward pass
with te.fp8_autocast(enabled=True, fp8_recipe=fp8_recipe):
    out = model(inp)

loss = out.sum()
loss.backward()

The error message is:
AssertionError: Tensor dimensions are not compatible for FP8 execution: (2048 % 8 != 0, 767 % 16 != 0)
But it is obvious that 2048 % 8 == 0.
https://github.com/NVIDIA/TransformerEngine/blob/b8eea8aaa94bb566c3a12384eda064bda8ac4fd7/transformer_engine/pytorch/utils.py#L216-L224
This function should be improved, as the comment says.

767 % 16 != 0. We should clarify that both conditions are required.
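A hedged sketch of how the check could report only the failing conditions instead of printing both; the real helper lives in transformer_engine/pytorch/utils.py and may be structured differently, but the divisibility requirements (first dimension by 8, second by 16) are taken from the error message itself.

```python
import torch

def check_fp8_dims(tensor: torch.Tensor) -> None:
    problems = []
    if tensor.shape[0] % 8 != 0:
        problems.append(f"{tensor.shape[0]} % 8 != 0")
    if tensor.shape[1] % 16 != 0:
        problems.append(f"{tensor.shape[1]} % 16 != 0")
    assert not problems, (
        "Tensor dimensions are not compatible for FP8 execution: "
        + ", ".join(problems)
    )

check_fp8_dims(torch.empty(2048, 767))
# AssertionError: Tensor dimensions are not compatible for FP8 execution: 767 % 16 != 0
```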
gharchive/issue
2024-02-29T06:45:13
2025-04-01T06:37:17.459839
{ "authors": [ "lucifer1004", "timmoon10" ], "repo": "NVIDIA/TransformerEngine", "url": "https://github.com/NVIDIA/TransformerEngine/issues/688", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
2234925383
[PyTorch][duplicate][CI] FP8 cuda graphs

This PR adds the following features (high-level):
- make_graphed_callables API, similar to the PyTorch API, with some additional arguments for FP8 usage. Support for FP8 weight caching via the existing is_first_microbatch argument is also retained. (A usage sketch follows after this list.)
- Restructuring of the amax reduction logic with a simpler design and handling of various parallelisms with minimal book-keeping compared to the previous approach. Forward and backward amaxes are reduced within the scope of the current iteration, solving numerous bugs w.r.t. checkpointing and removing the need to save global buffers.
- Support for nested/multiple FP8 autocast contexts with different recipes and distributed groups. Amax reductions are module independent and happen at the autocast level. This also resolves numerous bugs and allows support for MoE/LoRA-like models.
- Redesign of transposes for Float8Tensor that makes the transposes persistent for graph capture. Also fixes use cases for the vanilla optimizers (non fp8-distopt). The scaling inverses for weight tensors are no longer frozen when caching weights across microbatches.

/te-ci pytorch
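A rough usage sketch only: the keyword argument names below (fp8_enabled, fp8_recipe) are assumptions based on the PR description and may not match the final API, so consult the released make_graphed_callables signature before relying on them.

```python
import torch
import transformer_engine.pytorch as te
from transformer_engine.common import recipe

model = te.Linear(1024, 1024, bias=True)
sample_input = torch.randn(16, 1024, device="cuda")
fp8_recipe = recipe.DelayedScaling(fp8_format=recipe.Format.HYBRID)

graphed_model = te.make_graphed_callables(
    model,
    (sample_input,),
    fp8_enabled=True,       # assumption: mirrors fp8_autocast's "enabled" flag
    fp8_recipe=fp8_recipe,  # assumption: same recipe object as fp8_autocast
)

out = graphed_model(sample_input)  # replays the captured CUDA graph
out.sum().backward()
```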
gharchive/pull-request
2024-04-10T07:19:57
2025-04-01T06:37:17.463276
{ "authors": [ "ksivaman" ], "repo": "NVIDIA/TransformerEngine", "url": "https://github.com/NVIDIA/TransformerEngine/pull/766", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
2143287308
Support use of external/system Catch2 installation

I am building cudnn-frontend v1.1.0 in a restricted environment that is unable to use the FetchContent mechanism. I already have an existing build of Catch2 that should be consumed in its stead. Elaborating the upper part of samples/CMakeLists.txt as follows will make this possible:

find_package(Catch2 QUIET)
if(NOT Catch2_FOUND)
    Include(FetchContent)

    # Fetch and build catch2
    FetchContent_Declare(
        Catch2
        GIT_REPOSITORY https://github.com/catchorg/Catch2.git
        GIT_TAG        v3.3.2
    )
    FetchContent_MakeAvailable(Catch2)
endif()

(I would submit this as a PR, but I see that PRs are not being accepted.) I am then able to specify -DCatch2_ROOT=/path/to/catch2-install to CMake, and the samples build makes use of the existing library without issue.

Hi @iskunk, thanks for reporting this. Sure, we can add this to our upcoming release. Thanks for the contribution. Aside, we are also planning to open our FE repository to accept smaller PRs such as these in the near future. Will keep this issue open till the release is made. Thanks, Anerudhan obo cudnn team.
gharchive/issue
2024-02-19T23:35:11
2025-04-01T06:37:17.488640
{ "authors": [ "Anerudhan", "iskunk" ], "repo": "NVIDIA/cudnn-frontend", "url": "https://github.com/NVIDIA/cudnn-frontend/issues/63", "license": "mit", "license_type": "permissive", "license_source": "bigquery" }
1593877677
Getting Metric not enabled on DCP metric

Hello, I installed the helm chart for the DCGM metrics in Kubernetes, and I wanted to use the DCP metrics since we have MIG running on several A100s in our cluster. However, we received the following error in the dcgm pod log:

time="2023-02-21T17:32:09Z" level=warning msg="Skipping line 5 ('DCGM_FI_PROF_GR_ENGINE_ACTIVE'): metric not enabled"
time="2023-02-21T17:32:09Z" level=warning msg="Skipping line 6 ('DCGM_FI_PROF_PIPE_TENSOR_ACTIVE'): metric not enabled"

Any help in solving this would be greatly appreciated! My "default-counters.csv" file that I use is shown below:

# FormatTEST
# If line starts with a '#' it is considered a comment
# DCGM FIELD, Prometheus metric type, help message

# Clocks
#DCGM_FI_DEV_SM_CLOCK, gauge, SM clock frequency (in MHz).
#DCGM_FI_DEV_MEM_CLOCK, gauge, Memory clock frequency (in MHz).

# Temperature
#DCGM_FI_DEV_MEMORY_TEMP, gauge, Memory temperature (in C).
#DCGM_FI_DEV_GPU_TEMP, gauge, GPU temperature (in C).

# Power
#DCGM_FI_DEV_POWER_USAGE, gauge, Power draw (in W).
#DCGM_FI_DEV_TOTAL_ENERGY_CONSUMPTION, counter, Total energy consumption since boot (in mJ).

# PCIE
#DCGM_FI_DEV_PCIE_TX_THROUGHPUT, counter, Total number of bytes transmitted through PCIe TX (in KB) via NVML.
#DCGM_FI_DEV_PCIE_RX_THROUGHPUT, counter, Total number of bytes received through PCIe RX (in KB) via NVML.
#DCGM_FI_DEV_PCIE_REPLAY_COUNTER, counter, Total number of PCIe retries.

# Utilization (the sample period varies depending on the product)
#DCGM_FI_DEV_GPU_UTIL, gauge, GPU utilization (in %).
#DCGM_FI_DEV_MEM_COPY_UTIL, gauge, Memory utilization (in %).
#DCGM_FI_DEV_ENC_UTIL, gauge, Encoder utilization (in %).
#DCGM_FI_DEV_DEC_UTIL , gauge, Decoder utilization (in %).

# Errors and violations
# DCGM_FI_DEV_XID_ERRORS, gauge, Value of the last XID error encountered.
# DCGM_FI_DEV_POWER_VIOLATION, counter, Throttling duration due to power constraints (in us).
# DCGM_FI_DEV_THERMAL_VIOLATION, counter, Throttling duration due to thermal constraints (in us).
# DCGM_FI_DEV_SYNC_BOOST_VIOLATION, counter, Throttling duration due to sync-boost constraints (in us).
# DCGM_FI_DEV_BOARD_LIMIT_VIOLATION, counter, Throttling duration due to board limit constraints (in us).
# DCGM_FI_DEV_LOW_UTIL_VIOLATION, counter, Throttling duration due to low utilization (in us).
# DCGM_FI_DEV_RELIABILITY_VIOLATION, counter, Throttling duration due to reliability constraints (in us).

# Memory usage
DCGM_FI_DEV_FB_FREE, gauge, Framebuffer memory free (in MiB).
DCGM_FI_DEV_FB_USED, gauge, Framebuffer memory used (in MiB).

# ECC
# DCGM_FI_DEV_ECC_SBE_VOL_TOTAL, counter, Total number of single-bit volatile ECC errors.
# DCGM_FI_DEV_ECC_DBE_VOL_TOTAL, counter, Total number of double-bit volatile ECC errors.
# DCGM_FI_DEV_ECC_SBE_AGG_TOTAL, counter, Total number of single-bit persistent ECC errors.
# DCGM_FI_DEV_ECC_DBE_AGG_TOTAL, counter, Total number of double-bit persistent ECC errors.

# Retired pages
# DCGM_FI_DEV_RETIRED_SBE, counter, Total number of retired pages due to single-bit errors.
# DCGM_FI_DEV_RETIRED_DBE, counter, Total number of retired pages due to double-bit errors.
# DCGM_FI_DEV_RETIRED_PENDING, counter, Total number of pages pending retirement.

# NVLink
# DCGM_FI_DEV_NVLINK_CRC_FLIT_ERROR_COUNT_TOTAL, counter, Total number of NVLink flow-control CRC errors.
# DCGM_FI_DEV_NVLINK_CRC_DATA_ERROR_COUNT_TOTAL, counter, Total number of NVLink data CRC errors.
# DCGM_FI_DEV_NVLINK_REPLAY_ERROR_COUNT_TOTAL, counter, Total number of NVLink retries.
# DCGM_FI_DEV_NVLINK_RECOVERY_ERROR_COUNT_TOTAL, counter, Total number of NVLink recovery errors.
# DCGM_FI_DEV_NVLINK_BANDWIDTH_TOTAL, counter, Total number of NVLink bandwidth counters for all lanes

# VGPU License status
#DCGM_FI_DEV_VGPU_LICENSE_STATUS, gauge, vGPU License status

# Remapped rows
#DCGM_FI_DEV_UNCORRECTABLE_REMAPPED_ROWS, counter, Number of remapped rows for uncorrectable errors
#DCGM_FI_DEV_CORRECTABLE_REMAPPED_ROWS, counter, Number of remapped rows for correctable errors
#DCGM_FI_DEV_ROW_REMAP_FAILURE, gauge, Whether remapping of rows has failed

# Static configuration information. These appear as labels on the other metrics
DCGM_FI_DRIVER_VERSION, label, Driver Version
# DCGM_FI_NVML_VERSION, label, NVML Version
# DCGM_FI_DEV_BRAND, label, Device Brand
# DCGM_FI_DEV_SERIAL, label, Device Serial Number
# DCGM_FI_DEV_OEM_INFOROM_VER, label, OEM inforom version
# DCGM_FI_DEV_ECC_INFOROM_VER, label, ECC inforom version
# DCGM_FI_DEV_POWER_INFOROM_VER, label, Power management object inforom version
# DCGM_FI_DEV_INFOROM_IMAGE_VER, label, Inforom image version
# DCGM_FI_DEV_VBIOS_VERSION, label, VBIOS version of the device

DCGM_FI_PROF_GR_ENGINE_ACTIVE, gauge, Ratio of time the graphics engine is active (in %).
DCGM_FI_PROF_PIPE_TENSOR_ACTIVE, gauge, Ratio of cycles the tensor (HMMA) pipe is active (in %).

@avickars, could you provide more details about your environment? Is the dcgm-exporter run in a container? What are the command-line arguments? Is it possible to run a standalone nv-hostengine with debug logs and configure the dcgm-exporter to connect to it?
gharchive/issue
2023-02-21T17:47:26
2025-04-01T06:37:17.492961
{ "authors": [ "avickars", "nikkon-dev" ], "repo": "NVIDIA/dcgm-exporter", "url": "https://github.com/NVIDIA/dcgm-exporter/issues/139", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
790293258
Successfully built but fail when testing Hi, I am able to build and install the package using deb package without any errors. However when I run testing, it failed with sanity check (please see bellow). To check if the CUDA Toolkit is functional, I run some samples test in CUDA package and they work perfectly. Are there any suggests for my case? Appreciate a lot for your help! Below is output when I have the tests run after installing the package: quanmai@rhea:~/tools/gdrcopy$ sanity Running suite(s): Sanity Assertion "(gdr_pin_buffer(g, d_ptr, size, 0, 0, &mh)) == (0)" failed at sanity.cpp:187 CUDA error: CUDA_ERROR_INVALID_DEVICE Assertion "CUDA_SUCCESS == result" failed at sanity.cpp:214 Assertion "(gdr_pin_buffer(g, d_A, A_size, 0, 0, &A_mh)) == (0)" failed at sanity.cpp:279 Assertion "(gdr_pin_buffer(pt->g, pt->d_buf, pt->size, 0, 0, &pt->mh)) == (0)" failed at sanity.cpp:1512 Assertion "(gdr_pin_buffer(g, d_ptr, size, 0, 0, &mh)) == (0)" failed at sanity.cpp:362 Assertion "(gdr_pin_buffer(g, d_ptr, size, 0, 0, &mh)) == (0)" failed at sanity.cpp:473 Assertion "(gdr_pin_buffer(g, d_ptr, size, 0, 0, &mh)) == (0)" failed at sanity.cpp:548 Assertion "(gdr_pin_buffer(g, d_ptr, size, 0, 0, &mh[i])) == (0)" failed at sanity.cpp:632 Assertion "(gdr_pin_buffer(g, d_ptr, size, 0, 0, &mh)) == (0)" failed at sanity.cpp:772 Assertion "(read(read_fd, &cont, sizeof(int))) == (sizeof(int))" failed at sanity.cpp:733 Assertion "(gdr_pin_buffer(g, d_ptr, size, 0, 0, &mh)) == (0)" failed at sanity.cpp:880 Assertion "(gdr_pin_buffer(g, d_ptr, size, 0, 0, &mh)) == (0)" failed at sanity.cpp:1019 Assertion "(gdr_pin_buffer(g, d_ptr, size, 0, 0, &mh)) == (0)" failed at sanity.cpp:1131 Assertion "(gdr_pin_buffer(g, d_ptr, size, 0, 0, &mh)) == (0)" failed at sanity.cpp:1131 Assertion "(gdr_pin_buffer(g, d_ptr, size, 0, 0, &mh)) == (0)" failed at sanity.cpp:1379 CUDA error: CUDA_ERROR_INVALID_DEVICE Assertion "CUDA_SUCCESS == result" failed at sanity.cpp:1479 Assertion "(read(read_fd, &d_A, sizeof(CUdeviceptr))) == (sizeof(CUdeviceptr))" failed at sanity.cpp:1458 6%: Checks: 15, Failures: 14, Errors: 0 sanity.cpp:196:F:Basic:basic:0: Failed sanity.cpp:230:F:Basic:basic_with_tokens:0: Failed sanity.cpp:330:F:Basic:basic_unaligned_mapping:0: Failed sanity.cpp:1604:F:Basic:basic_child_thread_pins_buffer:0: Failed sanity.cpp:430:F:Data Validation:data_validation:0: Failed sanity.cpp:505:F:Invalidation:invalidation_access_after_gdr_close:0: Failed sanity.cpp:584:F:Invalidation:invalidation_access_after_cumemfree:0: Failed sanity.cpp:672:F:Invalidation:invalidation_two_mappings:0: Failed sanity.cpp:833:F:Invalidation:invalidation_fork_access_after_cumemfree:0: Failed sanity.cpp:978:F:Invalidation:invalidation_fork_after_gdr_map:0: Failed sanity.cpp:1050:F:Invalidation:invalidation_fork_child_gdr_map_parent:0: Failed sanity.cpp:1178:F:Invalidation:invalidation_fork_map_and_free:0: Failed sanity.cpp:1402:F:Invalidation:invalidation_unix_sock_shared_fd_gdr_map:0: Failed sanity.cpp:1495:F:Invalidation:invalidation_fork_child_gdr_pin_parent_with_tokens:0: Failed quanmai@rhea:~/tools/gdrcopy$ copybw GPU id:0; name: TITAN X (Pascal); Bus id: 0000:17:00 GPU id:1; name: Quadro P400; Bus id: 0000:b3:00 selecting device 0 testing size: 131072 rounded size: 131072 device ptr: 7fa892400000 closing gdrdrv quanmai@rhea:~/tools/gdrcopy$ copylat GPU id:0; name: TITAN X (Pascal); Bus id: 0000:17:00 GPU id:1; name: Quadro P400; Bus id: 0000:b3:00 selecting device 0 device ptr: 0x7f440e400000 allocated size: 16777216 closing gdrdrv Hi 
@quan-maithanh, The GPU you used does not support GPUDirect RDMA. Please see https://github.com/NVIDIA/gdrcopy#requirements. Hi @pakmarkthub Thanks for your reply, appreciate it a lot! I did think that my TITAN X GPU has GPUDirect RDMA support, as its architecture (Pascal) is in the support list? You need to run it with a Quadro or Tesla class GPU. Can you try running it with your Quadro P400? You can set export CUDA_VISIBLE_DEVICES=1 and then rerun the test. Yes, the test works perfectly with the Quadro one, but with the GTX it is unfortunately not supported yet. Thanks for your help! I am going to close this topic in a couple of hours. (A minimal sketch of rerunning the tests with CUDA_VISIBLE_DEVICES=1 follows this record.)
gharchive/issue
2021-01-20T20:21:02
2025-04-01T06:37:17.509598
{ "authors": [ "pakmarkthub", "quan-maithanh" ], "repo": "NVIDIA/gdrcopy", "url": "https://github.com/NVIDIA/gdrcopy/issues/164", "license": "mit", "license_type": "permissive", "license_source": "bigquery" }
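A minimal sketch of the workaround suggested above: restrict the gdrcopy tests to the RDMA-capable GPU via CUDA_VISIBLE_DEVICES. It assumes the test binaries installed by the deb package (sanity, copybw, copylat) are on PATH and that GPU index 1 is the Quadro P400 reported in the log; it is an illustration, not part of the gdrcopy repository.

import os
import subprocess

# Assumption: GPU index 1 is the Quadro P400 and the gdrcopy test binaries are on PATH.
env = dict(os.environ, CUDA_VISIBLE_DEVICES="1")

for test in ("sanity", "copybw", "copylat"):
    print(f"--- running {test} on the Quadro only ---")
    # check=False lets all three tests run even if one of them fails
    subprocess.run([test], env=env, check=False)

Running export CUDA_VISIBLE_DEVICES=1 in the shell before invoking the tests has the same effect.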
2715036374
Add simlayerkv Add SimLayerKV as proposed in this issue #19 Regarding the compression rate, I wanted to highlight how the original implementation chose thresholds dynamically based on the model type. Here is how the original implementation handled it: if 'llama3' in out_path: threshold = 0.9 elif 'llama2' in out_path: threshold = 0.65 elif 'mistral' in out_path: threshold = 0.8 elif 'qwen' in out_path: threshold = 0.85 I've also added a notebook demonstrating how to generate outputs using SimLayerKV. All tests have passed successfully. Let me know if there's anything else you'd like refined or if additional details are needed! Hi @dame-cell, thanks a lot for your work! Could you please merge your branch with main? I will read the paper and review your work. I already added a few minor comments. After quickly reading the paper, I think your current implementation does not implement SimLayerKV. Here is my understanding of the paper. SimLayerKV dynamically identifies what it calls "lazy layers". For lazy layers, it prunes the KV cache using the StreamingLLM strategy, i.e. it keeps only the first and last tokens. For non-lazy layers, the full KV cache is kept. To identify lazy layers, a procedure similar to SnapKV is used: compute the average attention weights for the last 32 tokens, then average these attention weights over the first tokens and the 1024 most recent tokens. The layer is lazy if this average exceeds a threshold, which is model dependent. (A sketch of this test follows this record.) Your current implementation is the following: scores = torch.zeros(bsz, num_heads, seq_len, device=device, dtype=dtype) if is_lazy_layer: # if layer is lazy , only keep the initial and recent tokens and the rest is set to 0 scores[:, :, : self.initial_tokens] = 1.0 scores[:, :, -self.recent_tokens :] = 1.0 else: scores[:, :, :] = 1.0 return scores What will happen with this: if the layer is not lazy, all scores are 1, so for a given compression ratio the top-k values (torch.topk in the forward hook of the BasePress) will be somewhat arbitrary (torch.ones(10).topk(3) returns indices 8, 6, 7, for instance); if the layer is lazy, then depending on the length of the prompt the initial and recent tokens will indeed be kept, but the remaining tokens to prune are chosen arbitrarily. This method would require using a custom Cache (maybe similar to HybridCache in Gemma?). The compression ratio is also ill-defined for this press, but that is an issue with kvpress itself, which currently focuses too much on it (we are thinking about a refactor, see #21). SimLayerKV also looks similar to the DuoAttention paper, which does this at the head level (and would require a specific attention kernel, as proposed in #7). Hi @dame-cell, prior to the refactor I mentioned (see #21), I'd like to investigate the best way to integrate SimLayerKVPress. Let me investigate it based on your contributions. I might open another PR if the code is very different. @dame-cell I will close this PR as I believe the code I shared in #28 is closer to what is proposed in the original paper. Please tell me if you disagree
gharchive/pull-request
2024-12-03T13:28:05
2025-04-01T06:37:17.535690
{ "authors": [ "SimJeg", "dame-cell" ], "repo": "NVIDIA/kvpress", "url": "https://github.com/NVIDIA/kvpress/pull/22", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
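A minimal PyTorch sketch of the lazy-layer test described in the review above: average the attention that the last 32 queries put on the initial tokens and on the 1024 most recent tokens, and flag the layer as lazy if that average exceeds a model-dependent threshold. This is an illustration of the idea only, not kvpress's actual API; the tensor layout, window sizes and default threshold are assumptions.

import torch

def is_lazy_layer(attn_weights: torch.Tensor,
                  n_initial: int = 4,
                  n_recent: int = 1024,
                  n_last_queries: int = 32,
                  threshold: float = 0.9) -> bool:
    # attn_weights: (batch, heads, q_len, k_len), already softmax-normalised over k_len.
    _, _, q_len, k_len = attn_weights.shape
    last_q = attn_weights[:, :, -min(n_last_queries, q_len):, :]
    # Attention mass the last queries put on the initial tokens and on the recent window.
    # (For short prompts the two windows overlap, which is acceptable for a rough test.)
    initial = last_q[..., :n_initial].sum(dim=-1)
    recent = last_q[..., -min(n_recent, k_len):].sum(dim=-1)
    score = (initial + recent).mean()
    return bool(score > threshold)

A press built on this would then keep only the first and most recent tokens (StreamingLLM-style) for layers flagged as lazy, and the full KV cache for the others.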
1428037600
[submodule-sync] bot-submodule-sync-branch-22.12 to branch-22.12 [skip ci] [bot] submodule-sync to create a PR keeping thirdparty/cudf up-to-date. HEAD commit SHA: 25ad9f4926ea793b94ea17f91a2e602b85ec1911, cudf commit SHA: https://github.com/rapidsai/cudf/commit/06031670a3bf3a6a715721cb208ea6ea619cab04 This PR will be auto-merged if test passed. If failed, it will remain open until test pass or manually fix. HEAD commit SHA: 25ad9f4926ea793b94ea17f91a2e602b85ec1911, CUDF commit SHA: https://github.com/rapidsai/cudf/commit/06031670a3bf3a6a715721cb208ea6ea619cab04 Test passed: True SUCCESS - auto-merge
gharchive/pull-request
2022-10-29T03:00:01
2025-04-01T06:37:17.569537
{ "authors": [ "nvauto" ], "repo": "NVIDIA/spark-rapids-jni", "url": "https://github.com/NVIDIA/spark-rapids-jni/pull/688", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
2301619475
Use ErrorClass to Throw AnalysisException [databricks] In Spark 4.0.0 the constructor of AnalysisException that takes a String parameter has been protected. To incorporate this change, we are introducing a shim class that will throw the respective errorClass contributes to #9259 @jlowe I think I have addressed all your concerns I have addressed all your concerns I think PTAL. The only thing that you may have questions about is that I have introduced the RapidsAnalysisException which will be thrown even in older versions of Spark instead of Raw AnalysisException build I have addressed your comments I think. Renamed the method in TrampolineUtil.throwAnalysisException() to TrampolineUtil.throwRapidsAnalysisException Throwing the ErrorMsg.PARTITION_DYN_STA_ORDER.getMsg in dynamicPartitionParentError instead of the errorClass Removed the RapidsErrorUtilsFor340PlusShims build
gharchive/pull-request
2024-05-17T01:04:57
2025-04-01T06:37:17.573248
{ "authors": [ "razajafri" ], "repo": "NVIDIA/spark-rapids", "url": "https://github.com/NVIDIA/spark-rapids/pull/10830", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
2444308286
Move easy unshimmed classes to sql-plugin-api Contributes to #11208 build
gharchive/pull-request
2024-08-02T07:43:56
2025-04-01T06:37:17.574657
{ "authors": [ "gerashegalov" ], "repo": "NVIDIA/spark-rapids", "url": "https://github.com/NVIDIA/spark-rapids/pull/11288", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
667924573
Implement optimized AQE support so that exchanges run on GPU where possible This PR implements optimized AQE support for Spark 3.0.1 and 3.1.0, where shuffle and broadcast exchanges stay on the GPU where supported. build build build This PR also closes https://github.com/NVIDIA/spark-rapids/issues/492 build @tgravescs I think I've address all feedback so far. Please take another look when you can. Thanks @abellina I think I addressed all your feedback now. build Most recent commit runs Python TPCH tests with AQE on and off and this closes Closes https://github.com/NVIDIA/spark-rapids/issues/275 build build build I’m not seeing anything big stand out, just a couple of nits above. Thanks @andygrove build build build @tgravescs @abellina I believe all issues are addressed. I have also enabled integration tests for TPC-H Q2 with AQE now that the GpuFilter fix has been merged. build build
gharchive/pull-request
2020-07-29T15:14:12
2025-04-01T06:37:17.580219
{ "authors": [ "abellina", "andygrove" ], "repo": "NVIDIA/spark-rapids", "url": "https://github.com/NVIDIA/spark-rapids/pull/462", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
450734987
After steps from README.md for getting audio: ModuleNotFoundError: No module named 'librosa' $ python3 inference.py -f <(ls mel_spectrograms/*.pt) -w waveglow_old.pt -o . --is_fp16 -s 0.6 Traceback (most recent call last): File "inference.py", line 30, in <module> from mel2samp import files_to_list, MAX_WAV_VALUE File "/home/vitaly_zdanevich/waveglow/mel2samp.py", line 38, in <module> from tacotron2.layers import TacotronSTFT File "/home/vitaly_zdanevich/waveglow/tacotron2/layers.py", line 2, in <module> from librosa.filters import mel as librosa_mel_fn ModuleNotFoundError: No module named 'librosa' I tried to install it with the same result: $ pip3 install librosa Requirement already satisfied: librosa in /home/vitaly_zdanevich/.local/lib/python3.5/site-packages Requirement already satisfied: decorator>=3.0.0 in /usr/local/lib/python3.5/dist-packages (from librosa) Requirement already satisfied: numpy>=1.8.0 in /home/vitaly_zdanevich/.local/lib/python3.5/site-packages (from librosa) Requirement already satisfied: joblib>=0.7.0 in /usr/local/lib/python3.5/dist-packages (from librosa) Requirement already satisfied: scikit-learn!=0.19.0,>=0.14.0 in /usr/local/lib/python3.5/site-packages (from librosa) Requirement already satisfied: six>=1.3 in /usr/local/lib/python3.5/dist-packages (from librosa) Requirement already satisfied: resampy>=0.2.0 in /home/vitaly_zdanevich/.local/lib/python3.5/site-packages (from librosa) Requirement already satisfied: audioread>=2.0.0 in /home/vitaly_zdanevich/.local/lib/python3.5/site-packages (from librosa) Requirement already satisfied: scipy>=0.14.0 in /home/vitaly_zdanevich/.local/lib/python3.5/site-packages (from librosa) Requirement already satisfied: intel-scipy in /usr/local/lib/python3.5/dist-packages (from scikit-learn!=0.19.0,>=0.14.0->librosa) Requirement already satisfied: pydaal in /usr/local/lib/python3.5/dist-packages (from scikit-learn!=0.19.0,>=0.14.0->librosa) Requirement already satisfied: numba>=0.32 in /home/vitaly_zdanevich/.local/lib/python3.5/site-packages (from resampy>=0.2.0->librosa) Requirement already satisfied: intel-numpy in /usr/local/lib/python3.5/dist-packages (from intel-scipy->scikit-learn!=0.19.0,>=0.14.0->librosa) Requirement already satisfied: tbb4py==2019.* in /usr/local/lib/python3.5/dist-packages (from pydaal->scikit-learn!=0.19.0,>=0.14.0->librosa) Requirement already satisfied: daal==2019.* in /usr/local/lib/python3.5/dist-packages (from pydaal->scikit-learn!=0.19.0,>=0.14.0->librosa) Requirement already satisfied: llvmlite>=0.29.0 in /home/vitaly_zdanevich/.local/lib/python3.5/site-packages (from numba>=0.32->resampy>=0.2.0->librosa) Requirement already satisfied: mkl-fft in /usr/local/lib/python3.5/dist-packages (from intel-numpy->intel-scipy->scikit-learn!=0.19.0,>=0.14.0->librosa) Requirement already satisfied: mkl-random in /usr/local/lib/python3.5/dist-packages (from intel-numpy->intel-scipy->scikit-learn!=0.19.0,>=0.14.0->librosa) Requirement already satisfied: mkl in /usr/local/lib/python3.5/dist-packages (from intel-numpy->intel-scipy->scikit-learn!=0.19.0,>=0.14.0->librosa) Requirement already satisfied: icc-rt in /usr/local/lib/python3.5/dist-packages (from intel-numpy->intel-scipy->scikit-learn!=0.19.0,>=0.14.0->librosa) Requirement already satisfied: tbb==2019.* in /usr/local/lib/python3.5/dist-packages (from tbb4py==2019.*->pydaal->scikit-learn!=0.19.0,>=0.14.0->librosa) Requirement already satisfied: intel-openmp in /usr/local/lib/python3.5/dist-packages (from 
mkl->intel-numpy->intel-scipy->scikit-learn!=0.19.0,>=0.14.0->librosa) This is probably related to your setup, not the repo. (Most likely the pip3 on PATH installs into a different interpreter or site-packages directory than the python3 used to run inference.py; a minimal interpreter check is sketched after this record.)
gharchive/issue
2019-05-31T10:42:07
2025-04-01T06:37:17.583022
{ "authors": [ "rafaelvalle", "vitaly-zdanevich" ], "repo": "NVIDIA/waveglow", "url": "https://github.com/NVIDIA/waveglow/issues/125", "license": "BSD-3-Clause", "license_type": "permissive", "license_source": "github-api" }
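The pip3 output above reports librosa as installed while python3 cannot import it, which usually means pip3 and python3 resolve to different interpreters or search paths (note the mix of /usr/local/lib/python3.5 and ~/.local paths). A hedged, generic diagnostic sketch, not a waveglow-specific fix:

import subprocess
import sys

# Which interpreter is actually running, and where does it look for packages?
print("interpreter:", sys.executable)
print("search path:")
for p in sys.path:
    print("  ", p)

# Install librosa for *this* interpreter rather than whatever pip3 resolves to on PATH,
# then re-run `inference.py` with the same interpreter.
subprocess.check_call([sys.executable, "-m", "pip", "install", "--user", "librosa"])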
2036535066
Initial faq Add a faq.html page for frequently asked questions. Just 2 initial questions in it so far, but more could be added. It's not yet linked from anywhere, but could easily be linked from the experiment page. I'm good with the current state of this PR, and I'm moving it to ready for review. We could merge it whenever.
gharchive/pull-request
2023-12-11T21:12:51
2025-04-01T06:37:17.584170
{ "authors": [ "jspjutNV" ], "repo": "NVlabs/FPSLatencyDemo", "url": "https://github.com/NVlabs/FPSLatencyDemo/pull/18", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
741859277
identify client based on census_variable currently we assume that census_variable are from the same source (main table, profile, subject, decennial, etc.), however that's not always true. In the download function, we would need to: identify the clients needed to request all variables, make requests using each client, and merge the dataframes into one unified output (a small grouping sketch follows this record). Refactor target: def download(self, geoquery: list, client, v: Variable) or def download_variable(self, geoquery: list, client, v: Variable) self.client_options = { "D": self.c.acs5dp, "S": self.c.acs5st, "P": self.c.sf1, "B": self.c.acs5, } def source_dict(census_vars): sources = {'D': [], 'S': [], 'P': [], 'B': []} for v in census_vars: sources[v[0]].append(v) return sources def download(self, geoquery: list, v: Variable) -> pd.DataFrame: """ this function works in conjunction with download_variable, and is only created to facilitate multiprocessing """ # Create Variables E_variables, M_variables = self.create_census_variables(v) # Group the requested variables by source prefix sources = source_dict(E_variables + M_variables) dfs = [] for source, var_list in sources.items(): # Pick the census client that serves this source, defaulting to acs5 client = self.client_options.get(source, self.c.acs5) dfs.append( pd.DataFrame( client.get( ("NAME", ",".join(var_list)), geoquery, year=self.year ) ) ) df = pd.concat(dfs) # If E is an outlier, then set M as NaN for i in v.census_variable: df.loc[df[f"{i}E"].isin(self.outliers), f"{i}M"] = np.nan # Replace all outliers with NaN df = df.replace(self.outliers, np.nan) return df
gharchive/issue
2020-11-12T19:16:30
2025-04-01T06:37:17.603163
{ "authors": [ "SPTKL", "mgraber" ], "repo": "NYCPlanning/db-factfinder", "url": "https://github.com/NYCPlanning/db-factfinder/issues/22", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
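A small, self-contained sketch of the grouping idea in this record: bucket the requested census variables by their leading letter so each bucket can be fetched with the matching census client. The variable names below are made-up examples, not an actual FactFinder configuration.

from collections import defaultdict

def source_dict(census_vars):
    # "B" -> detailed tables (acs5), "S" -> subject (acs5st),
    # "D" -> data profile (acs5dp), "P" -> decennial (sf1)
    sources = defaultdict(list)
    for v in census_vars:
        sources[v[0]].append(v)
    return dict(sources)

variables = ["B01001_001", "B19013_001", "DP02_0001", "S0101_C01_001", "P001001"]
print(source_dict(variables))
# {'B': ['B01001_001', 'B19013_001'], 'D': ['DP02_0001'], 'S': ['S0101_C01_001'], 'P': ['P001001']}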
1158744063
Release v0.25.11 Updates Updates the Logo component to include new variants for BPL, Clever, LPA, MLN, QPL, Schomburg, SimplyE and Treasures. Updates font size to "12px" and top margin to "4px" for HelperErrorText component. Updates font size to "14px" for TextInput component. Adds an aria-label attribute to the Notification component to use with its aside HTML landmark element. Added an "Accessibility" section in the Notification Storybook page to note that this component should not be used within a header or footer HTML landmark element. Updates the Notification component to handle link color inside the content area, better styling for centering and the dismissible variation, and updated background color for the "Announcement" and "Warning" types. Updates a log message in the Icon component to be more descriptive. Updates the mobile styles for the image in the StructuredContent component. Updates the prop type for the "Definition" List type so DOM elements can be passed in the definition. Fixes Updates the bottom margin of the Select in the SearchBar so that the helper text has standard gap between the main form components and itself. Updates how TabList and TabPanels are returned in the Tabs component so no false log messages are consoled. Updates List component styling for inline. Tugboat has finished building the preview for this pull request! Link: https://pr886-ypmaxps1pywdkmfri2jfxtybon2eikvj.tugboat.qa Dashboard: https://dashboard.tugboat.qa/6221058701c6d20b85f095eb
gharchive/pull-request
2022-03-03T18:14:29
2025-04-01T06:37:17.616122
{ "authors": [ "EdwinGuzman", "bigfishdesign13" ], "repo": "NYPL/nypl-design-system", "url": "https://github.com/NYPL/nypl-design-system/pull/886", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
1226976149
DSD-949 - Heading add noSpace prop Fixes JIRA ticket DSD-949 This PR does the following: add a noSpace prop to the Heading component where, if set, it sets margin-bottom to 0 How has this been tested? Accessibility concerns or updates Checklist: [ ] I have updated the Storybook documentation accordingly. [ ] I have added relevant accessibility documentation for this pull request. [ ] All new and existing tests passed. Front End Review: [ ] View the example in Storybook The usage of passing noSpace to the useStyleConfig function is the right approach. As it currently stands, passing this variable to that function does nothing because it's not set up in heading.ts. You'll find that in heading.ts, the function is declared as const Heading = { baseStyle: { // This is to help target custom anchor elements // passed as children to the Heading component. a: baseLinkStyles, }, This needs to be updated as const Heading = { baseStyle: ({ noSpace }) => ({ // This is to help target custom anchor elements // passed as children to the Heading component. a: baseLinkStyles, }), and then you can add the mb conditional logic. This is so that we can keep the logic for styles in the theme file rather than in the component file. One minor nitpick is to use marginBottom instead of mb because not everyone might be used to the Chakra-style shorthand. I prefer the clarity over the brevity. What seemed to work was removing "marginBottom" from the "margins" object at the top, so that it is recognized in the baseStyle function. This is good to go but the merge conflicts should be fixed. Tugboat has finished building the preview for this pull request! Link: https://pr988-o1drzhobwiiuqaft8ukh0y6ekdlszrwi.tugboat.qa Dashboard: https://dashboard.tugboat.qa/62749b502e91387b1710f523
gharchive/pull-request
2022-05-05T17:29:02
2025-04-01T06:37:17.626572
{ "authors": [ "EdwinGuzman", "knezmilos" ], "repo": "NYPL/nypl-design-system", "url": "https://github.com/NYPL/nypl-design-system/pull/988", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
368324331
Enhance /deaths, /levels and /timeline timeout by caching calls to get_member The bot searches all entries to get the first 100/200 visible entries for the user that executed the command. Every entry would then execute a call into get_member, and this would take some time, which in turn could make the bot time out while waiting. By introducing the cache, the same members are not looked up repeatedly and performance improves a lot (a minimal sketch of the pattern follows this record). The PR was modified to also add caching to /levels and /timeline, which suffered from the same bottleneck. Just wanted to add that, other than those minor details, this is a very good solution. Perfect, thanks!
gharchive/pull-request
2018-10-09T17:48:42
2025-04-01T06:37:17.635310
{ "authors": [ "Galarzaa90", "Tschis" ], "repo": "NabDev/NabBot", "url": "https://github.com/NabDev/NabBot/pull/135", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
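A minimal sketch of the caching pattern this PR describes: look each user id up in a local dict first and only fall back to guild.get_member() on a miss, so repeated entries for the same member do not repeat the expensive lookup. The names guild and entries are placeholders, not NabBot's actual variables.

def resolve_members(guild, entries):
    """entries: iterable of dicts with a 'user_id' key (placeholder schema)."""
    cache = {}
    resolved = []
    for entry in entries:
        user_id = entry["user_id"]
        if user_id not in cache:
            # Cache miss: do the expensive guild lookup only once per member.
            cache[user_id] = guild.get_member(user_id)
        member = cache[user_id]
        if member is not None:
            # Only keep entries whose member is visible in this guild.
            resolved.append((entry, member))
    return resolved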
2547007209
Font size should probably scale with UI Scale It might not make sense to scale the font 1:1 with scale, as 300% scale would be a massive size 42, but perhaps at 50-75% of scale? The text is comically small at scales >200%, and comically large at scales <80%, to the point that it completely obscures the image/icon behind the text. I believe Alt1 requires a whole number, so maybe something like replacing the hardcoded 14 with Math.floor(14 * (scaleFactor * .6)). 75% of scaleFactor seems like an appropriate amount across all ranges, using ceil() instead of floor(), particularly so that the font is 1px larger for scales between 50-100%, where 1px is a much more drastic difference than at 100%+ scale, where a 1px difference isn't as noticeable (a worked example of the resulting sizes follows this record). Math.ceil(14 * (gaugeData.scaleFactor * 0.75))
gharchive/issue
2024-09-25T05:56:29
2025-04-01T06:37:17.643336
{ "authors": [ "NadyaNayme" ], "repo": "NadyaNayme/job-gauges", "url": "https://github.com/NadyaNayme/job-gauges/issues/64", "license": "BSD-3-Clause", "license_type": "permissive", "license_source": "github-api" }
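A quick worked example of the proposed formula, ceil(14 * scaleFactor * 0.75), across typical UI scales. It shows why ceil() matters most at 50-100% (it buys back the 1 px that floor() would drop) while staying reasonable at 300%. Plain Python, just for the arithmetic; the base size 14 comes from the issue above.

import math

BASE = 14
for scale in (0.5, 0.75, 1.0, 1.5, 2.0, 3.0):
    floor_px = math.floor(BASE * scale * 0.75)
    ceil_px = math.ceil(BASE * scale * 0.75)
    print(f"{int(scale * 100):>3}% scale -> floor {floor_px:>2}px, ceil {ceil_px:>2}px")
# 50% -> 5/6 px, 100% -> 10/11 px, 300% -> 31/32 px (versus 42 px at a 1:1 scaling)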
1923582102
🛑 Damien is down In de13eb6, Damien (https://damien-doussaud.com/) was down: HTTP code: 0 Response time: 0 ms Resolved: Damien is back up in b93e487 after 8 minutes.
gharchive/issue
2023-10-03T08:20:36
2025-04-01T06:37:17.651576
{ "authors": [ "Namide" ], "repo": "Namide/upptime", "url": "https://github.com/Namide/upptime/issues/440", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
2499036260
[bug] Breaks server discovery Started to get reports that people can't reach my Jellyfin instance and after some investigating it looks like this server causes the web client to lose track of where the server is. When users go to the domain for the instance they are greeted with No servers have been found using the automatic server discovery [Add server] Users can click the button and type the same base URL as they're visiting in to access the server again but this isn't intuitive or intended behavior. After digging around in my Nginx config and getting nowhere I tried disabling the last plugin I installed (this one) and everything instantly went back to normal. Hey, thanks for reporting this issue. Could you tell me on which clients this error appeared on? (Firefox, Chrome, Windows Client, ...) I've been getting this as well, it was happening to me on Firefox but I didn't test other browsers Actually I'm not sure if this is an issue with 10.9.10 or with this plugin, I found the same issue in the main Jellyfin repo here: https://github.com/jellyfin/jellyfin/issues/12523 Actually I'm not sure if this is an issue with 10.9.10 or with this plugin, I found the same issue in the main Jellyfin repo here: jellyfin/jellyfin#12523 I was the one who opened jellyfin/jellyfin#12523, and can confirm the issue was caused by the plugin, not Jellyfin 10.9.10 itself. Would love to see this resolved however, as I quite like the plugin! On a somewhat unrelated note - are there any plans to integrate the plugin into Jellyfin itself? As the functionality you provide is great :) @Namo2 I encountered this issue on all browsers I tested (Firefox & Edge on both Windows and macOS, Safari & Chrome on macOS). There is some more information including browser console logs provided by another user in the issue mentioned above That this error occurs on 10.9.10 is great information. I will try to fix it this weekend. Since the JMP version is not needed anymore I canceled my plans to integrate it to jellyfin but who knows what the future will bring. The possibility is still there of course but my focus will be on the bugs which are currently still open Since the JMP version is not needed anymore I canceled my plans to integrate it to jellyfin but who knows what the future will bring. The possibility is still there of course but my focus will be on the bugs which are currently still open Jellyfin Media Player (desktop app?) I'd love to see it for jellyfin-web anyways - I'm expecting some of the features of the web client to be implemented in the native apps too (e.g. trickplay preview, skipping segments in the future). Sorry for the thread unrelated to this issue - thanks for giving us this plugin, in any case! <3 Since the JMP version is not needed anymore I canceled my plans to integrate it to jellyfin but who knows what the future will bring. The possibility is still there of course but my focus will be on the bugs which are currently still open Jellyfin Media Player (desktop app?) I'd love to see it for jellyfin-web anyways - I'm expecting some of the features of the web client to be implemented in the native apps too (e.g. trickplay preview, skipping segments in the future). Sorry for the thread unrelated to this issue - thanks for giving us this plugin, in any case! <3 Hey the issue should be fixed now. Feel free to try it out and tell me if it works. Btw fyi, the trickplay preview is in the nativ desktop client
gharchive/issue
2024-08-31T20:27:43
2025-04-01T06:37:17.659651
{ "authors": [ "LeviSnoot", "Namo2", "alvitali", "loof2736" ], "repo": "Namo2/InPlayerEpisodePreview", "url": "https://github.com/Namo2/InPlayerEpisodePreview/issues/27", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
736338307
Error while trying to download from bookmarked members (/bookmark_new_illust.php) Prerequisites [X] Did you read FAQ section? [X] Did you test with the latest releases or commit ? Description When I try to download new illusts from bookmarked members (option 8) I get the following output: Processing New Illust from bookmark Page #1 Source URL: https://www.pixiv.net/bookmark_new_illust.php?p=1 Error at process_new_illust_from_bookmark(): (<class 'AttributeError'>, AttributeError("'NoneType' object has no attribute 'findAll'"), <traceback object at 0x1D168800>) Traceback (most recent call last): File "PixivUtil2.py", line 1604, in main np_is_valid, op_is_valid, selection = main_loop(ewd, op_is_valid, selection, np_is_valid, args) File "PixivUtil2.py", line 1320, in main_loop menu_download_new_illust_from_bookmark(op_is_valid, args) File "PixivUtil2.py", line 921, in menu_download_new_illust_from_bookmark process_new_illust_from_bookmark(page_num, end_page_num, r18) File "PixivUtil2.py", line 419, in process_new_illust_from_bookmark pb = PixivNewIllustBookmark(parsed_page) File "l:\PixivUtil2Python\PixivBookmark.py", line 102, in __init__ self.__ParseNewIllustBookmark(page) File "l:\PixivUtil2Python\PixivBookmark.py", line 120, in __ParseNewIllustBookmark result = page.find(attrs={'class': '_image-items autopagerize_page_element'}).findAll('a') AttributeError: 'NoneType' object has no attribute 'findAll' press enter to exit. Downloading by member_id and image_id seem to be working fine however. can you upload the html page from the given url in the logging? it should try to find js-mount-point-latest-following in script tag. Strangely, after changing my password & updating the cookie string back to the old cookie format used by pixiv, everything started working normally again.
gharchive/issue
2020-11-04T18:38:02
2025-04-01T06:37:17.663466
{ "authors": [ "Nandaka", "jacobpewit" ], "repo": "Nandaka/PixivUtil2", "url": "https://github.com/Nandaka/PixivUtil2/issues/859", "license": "BSD-2-Clause", "license_type": "permissive", "license_source": "github-api" }
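The AttributeError in the traceback above is the classic symptom of BeautifulSoup's find() returning None: the expected element is missing, for example because the session cookie was rejected and Pixiv served a different page, or the markup moved into a script tag (js-mount-point-latest-following, as the maintainer notes). A hedged, generic sketch of the defensive check, not PixivUtil2's actual code:

from bs4 import BeautifulSoup

def extract_bookmark_links(html: str):
    page = BeautifulSoup(html, "html.parser")
    container = page.find(attrs={"class": "_image-items autopagerize_page_element"})
    if container is None:
        # Likely causes: expired/invalid session cookie, or a layout change where the
        # data now lives in a script tag instead of this container.
        raise ValueError("Bookmark container not found; check the cookie/session or the page layout.")
    return [a.get("href") for a in container.find_all("a")]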
1125053077
[Tasks 1540, 1541] Add Snapshots and basic rendering tests Add basic tests, snapshots for noFeedbackBoardsView, editableText, and actionItemDisplay components Add test:watch command to allow faster test validation Add initial testing documentation to CONTRIBUTING.MD` @msa2984, Have no idea why, but the files I changed aren't recognizing the "mocks" file change, which is why this PR created today, instead of yesterday, so I added them directly to file. Wondering if you'd have better luck diagnosing/if you even see the issue when referencing the mocks. ➕ ➕ ➕ for updating the CONTRIBUTING doc to add testing info! Some notes for posterity before I merge this into the bigger PR: I swapped your tests over to Enzyme, this way we can remove the React-Test-Renderer dependency and all tests use the same framework. I re-generated the snapshots. It seems like the snapshots using Enzyme vs React-Test-Renderer are slightly different, however from comparing them they are both correct, just a slightly different way of rendering. I moved the shared mock calls into a setupTests.tsx file, this seems to be a much cleaner way of initializing the SDK + API mocks which are used by all of the more complex components. I added a reference to enzyme-to-json, as this seems like the recommended serializer for snapshot testing with enzyme. @msa2984, Have no idea why, but the files I changed aren't recognizing the "mocks" file change, which is why this PR created today, instead of yesterday, so I added them directly to file. Wondering if you'd have better luck diagnosing/if you even see the issue when referencing the mocks. I'm wondering if you were seeing an issue similar to what I was seeing - occasionally when I would add a new mock to that folder and reference it in my test, I would get various "Could not reference X of undefined" errors. Usually restarting VS Code would resolve the issue though, so I always figured that it was just a weird file detection error. Moving the shared API + SDK mocks into the setupTests.tsx seems to have resolved the failures I was seeing with your tests when I first checked out the branch.
gharchive/pull-request
2022-02-05T23:12:57
2025-04-01T06:37:17.711620
{ "authors": [ "focusthenflex", "msa2984" ], "repo": "NarmathaBala/vsts-extension-retrospectives", "url": "https://github.com/NarmathaBala/vsts-extension-retrospectives/pull/40", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
2019310917
🛑 Staff Desk BINUS SCHOOL Serpong is down In 0957dc8, Staff Desk BINUS SCHOOL Serpong (https://serpong.binus-school.net/staffdesk/LoginBinusian.aspx) was down: HTTP code: 0 Response time: 0 ms Resolved: Staff Desk BINUS SCHOOL Serpong is back up in 1bec183 after 14 minutes.
gharchive/issue
2023-11-30T18:52:58
2025-04-01T06:37:17.730773
{ "authors": [ "NatanaelGeraldoS" ], "repo": "NatanaelGeraldoS/BINUSWeb", "url": "https://github.com/NatanaelGeraldoS/BINUSWeb/issues/217", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
2311817407
Suggestions from Discord Keeping these in one place: nitrofenix: i'd base it on banks but maybe with half the amount of slots per tier trunksbomb: 8 buckets/16/64/256/1024/1000000 Tiers/capacities from the old mod (for reference/discussion only): 1: 3 tanks of 4000 mB each 2: 6 tanks of 16000 mB each 3: 9 tanks of 64000 mB each 4: 12 tanks of 256000 mB each 5: 15 tanks of 1024000 mB each 6: 18 tanks of 4096000 mB each 7: 27 tanks of 247483647 mB each Huh, would you look at that - I was pretty spot on for capacity :) Additional suggestions: Lockable slots Automatic pickup of fluids when standing in them? Probably not a good idea use the tank itself as the bucket (no dedicated bucket item like entangled buckets) shift + click air to change modes, shift + scroll while in "build" mode to change which liquid you're selecting definitely a "void" mode to void excess fluids maybe a sponge mode, or a mode that functions like a hose pulley from Create so you can target one block of fluid and have it suck up other blocks in the same body of water shift+click bucket of fluid in inventory while Tank GUI is open automatically inserts the fluid from that bucket, like Modern Industrialization machines Done so far: tank 1-7 and dock can interact with buckets/similar in gui including shift click can interact with pipes in world using dock can lock slots
gharchive/issue
2024-05-23T02:40:41
2025-04-01T06:37:17.735022
{ "authors": [ "Natanaelel", "nitrofenix", "trunksbomb" ], "repo": "Natanaelel/TankStorage", "url": "https://github.com/Natanaelel/TankStorage/issues/1", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
687559003
Point to custom Sqlite3 for paid extension support Is it possible to make nativescript-sqlite use a custom bundled Sqlite3 instead of the default system one? We have a requirement to enable database encryption from the app (by using PRAGMA key = 'pwd'). For this to work we need to use a custom sqlite3.c file that has the SEE plugin. Initially we thought the paid nativescript-sqlite supports encryption, but it turns out it can only decrypt? This plugin supports both encryption and decryption. You can already use PRAGMA key='...' PRAGMA key ='somekey' to set encryption on a non-encrypted database does not work. If a database is already encrypted, I could decrypt with options:key and then add a new key with PRAGMA rekey = ''; Yes, you have to create a new database with encryption (which you can do with a create database command or ship an encrypted db). You are correct that it won't encrypt an un-encrypted database, however you can work around it by just creating a new database, copying all data from the old database, and then deleting the old un-encrypted version. However, for security, if you want encrypted db's it is better to start with an encrypted database rather than an unencrypted database. Thanks for confirming. What you are saying is indeed one of my options: Ship an encrypted database with the app to begin with --> decrypt with your plugin --> rekey as needed. I would have to explore the other approach you suggested, aka create a new encrypted database and then copy over stuff from the old unencrypted one. Although, I don't want to put all this script in the app code. So most likely I won't go with this option. Still, it's good to know that your plugin can create a new db with encryption turned on. Finally, just out of continued curiosity, is it possible to make use of a custom sqlite3 library in your plugin? The module.modulemap under platform/ios seems to specify the library. I was wondering if I could do something there to pick up my custom sqlite3.c? Well, we could build you a custom version with your sqlite3.c file; it is basically just building a new aar (on Android) and framework on iOS to be linked in, and using them as the native part. The rest of the code base (like the JS) would work with no other changes; so it is really just creating new native libraries using the SEE version of sqlite instead of SqlCipher. Got it. That's exactly what I was looking for. In fact, we have got around this use case by using open source SqlCipher to initially encrypt the database during the build phase and then use your paid plugin to decrypt and rekey as needed. So I am good now. It's still good to know that you can package a custom sqlite version, if needed. Thank you!
gharchive/issue
2020-08-27T21:37:12
2025-04-01T06:37:17.778585
{ "authors": [ "NathanaelA", "kaustavlahiri" ], "repo": "NathanaelA/nativescript-sqlite", "url": "https://github.com/NathanaelA/nativescript-sqlite/issues/162", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
688572451
Update getAppResourcesPath() to use the new configs @NathanaelA we need to update these for 7.0 to read from the new config file. I think we should probably create a new CLI command that the gradle script can invoke and read the output - because the configs are no longer simple json files. As a quick fix, we could read the value with some regex (though that'd be easy to break in the configs - since it's js/ts). The current implementation could be kept as a fallback. https://github.com/NativeScript/android-runtime/blob/b10b6d625807f0d65dfebe2649cc37cfb124dd6e/test-app/gradle-helpers/paths.gradle#L19-L37 https://github.com/NativeScript/android-runtime/blob/f71130432d1fed102776bb8372906aadda82d3fe/test-app/app/build.gradle#L137-L157 @rigor789 - How about the CLI create the nsconfig.json file if the nsconfig.js file version of it exists; then no other tooling needs to change at this point, and it is then fully backwards compatible with older runtimes. Frequently we have people try using older runtimes to see when a issue has occurred; so breaking the CLI's compatibility should actually be fixed first... The goal and we are 99% there is to get rid of redundant configs, package.jsons, special keys etc. What I think we can/should do to keep older runtimes working, while not forcing an nsconfig.json on newer runtimes is to only generate the nsconfig.json when the runtime version is less than 7.0.0 and for 7 and above, we can add the new logic for reading the config value through the CLI.
gharchive/issue
2020-08-29T17:26:04
2025-04-01T06:37:17.846748
{ "authors": [ "NathanaelA", "rigor789" ], "repo": "NativeScript/android-runtime", "url": "https://github.com/NativeScript/android-runtime/issues/1634", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
206141696
sample code to implement NativeScript Class in TypeScript with self delegate From @xmlking on February 7, 2017 18:3 I am new to NativeScript, looking for sample code to implement NativeScript Class in TypeScript with self delegate? here are more details : http://stackoverflow.com/questions/41991160/how-to-implement-nativescript-class-in-typescript-with-self-delegate want to simplify my code my eliminating additional delegate class by directly implementing delegate methods in the main class that supposed to have a delegate. Thanks Copied from original issue: NativeScript/NativeScript#3603 @xmlking As the doc says: There should be no TypeScript constructor, because it will not be executed. Instead override one of the init methods. You can override a base init method and place your delegate setting logic there: public init() { var self = this.super.init(); if (self) { self.delegate = self; } return self; } @ivanbuhov thanks for you input. my basic issue is I cannot inject Angular Service that extends NSObject with following case I am getting runtime error, either with JS constructor() or init() methods : file:///app/tns_modules/@angular/compiler/bundles/compiler.umd.js:18632:28: JS ERROR Error: Invalid provider for the NgModule 'MicrosoftBandModule' - only instances of Provider and Type are allowed, got: [?MicrosoftBandService?] import {Injectable} from '@angular/core'; @Injectable() export class MicrosoftBandService extends NSObject implements ConnectionDelegate { public static ObjCProtocols = [ConnectionDelegate]; ... } without extends NSObject the angular service will be registered properly. but native delegates has to always extend NSObject. Need to research how to register angular service provider that extend NSObject I tried following method, but this time I am getting undefined error :( CONSOLE ERROR file:///app/tns_modules/tns-core-modules/trace/trace.js:160:30: ns-renderer: Error in :0:0 caused by: undefined is not an object (evaluating 'this.msband.connection$.subscribe') export const MicrosoftBandToken = new OpaqueToken('MicrosoftBandToken'); @Injectable() export class MicrosoftBandService extends NSObject implements ConnectionDelegate { public static ObjCProtocols = [ConnectionDelegate]; static new(): MicrosoftBandService { let self = <MicrosoftBandService>super.new() self.mbk = MicrosoftBand.alloc().init(); if (self) { self.mbk.connectDelegate = self; } return self; } ... } import { MicrosoftBandToken, MicrosoftBandService } from './src/app/services/microsoftband.service.ios'; export class MicrosoftBandModule { static forRoot(): ModuleWithProviders { return { ngModule: MicrosoftBandModule, providers: [ {provide: MicrosoftBandToken, useValue: MicrosoftBandService.new()}, ] }; } } import {MicrosoftBandToken, MicrosoftBandService} from '@xmlking/nativescript-ngx-microsoftband'; @Component({ selector: 'app', templateUrl: "app.component.html" }) export class AppComponent implements OnInit, OnDestroy { constructor(private zone: NgZone, @Inject(MicrosoftBandToken) private msband: MicrosoftBandService) { } ... } @xmlking It seems that an angular service can't derive from native object. Try not to register your MicrosoftBandService as angular service just to verify that the original problem - implementing NativeScript Class in TypeScript with self delegate is solved. Then you can create a separate class for your angular service.
gharchive/issue
2017-02-08T09:35:09
2025-04-01T06:37:17.853799
{ "authors": [ "NickIliev", "ivanbuhov", "xmlking" ], "repo": "NativeScript/ios-runtime", "url": "https://github.com/NativeScript/ios-runtime/issues/725", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
724770036
fix: Cannot reattach ActivatedRouteSnapshot created from a different … …route PR Checklist [x] The PR title follows our guidelines: https://github.com/NativeScript/NativeScript/blob/master/CONTRIBUTING.md#commit-messages. [x] There is an issue for the bug/feature this PR is for. To avoid wasting your time, it's best to open a suggestion issue first and wait for approval before working on it. [x] You have signed the CLA. [ ] All existing tests are passing: https://github.com/NativeScript/nativescript-angular/blob/master/DevelopmentWorkflow.md#running-the-tests [ ] Tests for the changes are included. What is the current behavior? Navigating between sibling routes may sometimes mess with the cache due to it using the key of a parent route, instead of the complete route. What is the new behavior? This now uses the correct key from the farthest children of the route PARTIALLY Fixes #1993. (this issue has 2 problems on it: 1 is the one fixed in this PR, the other is that route guard/resolvers don't work on back navigation and may break navigation for a PRO) Please don't merge this yet, as it seems it might not fully solve the issue @edusperoni @NathanaelA When debugging this issue in the use case Root => A => B, then back to A it looks like all keys to store/retrieve are generated correctly (also without this fix), but the error is still thrown. The expression that causes the error to be thrown (curr.value.routeConfig !== result.value.routeConfig in https://github.com/angular/angular/blob/3817e5f1dfeb9de26fa2ea4068a37565f435214f/packages/router/src/create_router_state.ts#L52) will actually show equal properties and values when inspecting, but are not the same object reference. This only occurs with lazy loaded modules (if changed to "regular" route configs it works). Perhaps the solution direction for this part of the issue is to either use a "deepEqual" instead of reference equality or to somehow make sure that the routeConfig reference stays the same for lazy loaded modules. Hope this helps, cheers.
gharchive/pull-request
2020-10-19T16:14:43
2025-04-01T06:37:17.860326
{ "authors": [ "edusperoni", "jcarolus" ], "repo": "NativeScript/nativescript-angular", "url": "https://github.com/NativeScript/nativescript-angular/pull/2279", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
334394273
ZoneAwareError on with angular and iOS simulator I am using the plugin in a location.provider.ts whose saveUserLocation method (see code below) is called from my "home" page. Everything works find on Android and a real iOS device, however on the iOS simulator it most of the times gives me this error below and sometimes it just gives me the expected result (the location object). {"line":994,"column":38,"sourceURL":"file:///app/tns_modules/nativescript-angular/zone-js/dist/zone-nativescript.js","originalStack":"ZoneAwareError@file:///app/tns_modules/nativescript-angular/zone-js/dist/zone-nativescript.js:994:38\nfile:///app/tns_modules/nativescript-geolocation/geolocation.js:160:41\ninvoke@file:///app/tns_modules/tns-core-modules/timer/timer.js:54:53\nonInvoke@file:///app/tns_modules/@angular/core/bundles/core.umd.js:4787:39\nrunGuarded@file:///app/tns_modules/nativescript-angular/zone-js/dist/zone-nativescript.js:138:53\ntick@file:///app/tns_modules/tns-core-modules/timer/timer.js:19:26\nUIApplicationMain@[native code]\nstart@file:///app/tns_modules/tns-core-modules/application/application.js:258:26\nbootstrapApp@file:///app/tns_modules/nativescript-angular/platform-common.js:86:28\nbootstrapModule@file:///app/tns_modules/nativescript-angular/platform-common.js:72:26\nanonymous@file:///app/main.j Any idea as to what may cause this? I don't care too much if this happens only on the simulator, I am a little worried it might also happen on a real device even though I have not seen it yet. Thanks! Which platform(s) does your issue occur on? iOS simulator v10 Please, provide the following version numbers that your issue occurs with: $ tns info ✔ Getting NativeScript components versions information... ⚠ Update available for component nativescript. Your current version is 4.1.0 and the latest available version is 4.1.1. ⚠ Update available for component tns-core-modules. Your current version is 3.4.1 and the latest available version is 4.1.0. ⚠ Update available for component tns-android. Your current version is 3.4.2 and the latest available version is 4.1.3. ⚠ Update available for component tns-ios. Your current version is 3.4.1 and the latest available version is 4.1.1. 
{ "description": "MyApp", "license": "SEE LICENSE IN <your-license-filename>", "readme": "NativeScript Application", "repository": "<fill-your-repository-here>", "nativescript": { "id": "com.abhayastudios.testapp", "tns-ios": { "version": "3.4.1" }, "tns-android": { "version": "3.4.2" } }, "dependencies": { "@angular/animations": "~5.2.0", "@angular/common": "~5.2.0", "@angular/compiler": "~5.2.0", "@angular/core": "~5.2.0", "@angular/forms": "~5.2.0", "@angular/http": "~5.2.0", "@angular/platform-browser": "~5.2.0", "@angular/platform-browser-dynamic": "~5.2.0", "@angular/router": "~5.2.0", "@teammaestro/nativescript-svg": "^1.0.1", "moment": "^2.18.1", "nativescript-angular": "~5.2.0", "nativescript-bitmap-factory": "^1.7.1", "nativescript-feedback": "^1.1.2", "nativescript-geolocation": "^4.2.6", "nativescript-gradient": "^2.0.1", "nativescript-image-swipe": "^2.1.0", "nativescript-imagepicker": "^5.0.0", "nativescript-iqkeyboardmanager": "^1.2.0", "nativescript-ngx-slides": "^1.0.0", "nativescript-platform-css": "^1.6.5", "nativescript-plugin-firebase": "^5.3.1", "nativescript-social-share": "^1.5.0", "nativescript-theme-core": "~1.0.4", "nativescript-ui-listview": "^3.5.0", "nativescript-webview-interface": "^1.4.1", "reflect-metadata": "~0.1.8", "rxjs": "~5.5.2", "tns-core-modules": "~3.4.1", "zone.js": "^0.8.16" }, "devDependencies": { "babel-traverse": "6.26.0", "babel-types": "6.26.0", "babylon": "6.18.0", "lazy": "1.0.11", "nativescript-dev-typescript": "~0.6.0", "typescript": "~2.6.2" } Is there any code involved? This the relevant part of my location.provider.ts: /* First obtain permissions to use location services if needed, then try to obtain user's location */ public saveUserLocation() { return new Promise((resolve, reject) => { geolocation.isEnabled().then((isEnabled:boolean) => { /* already have location permissions */ if (isEnabled) { this.getCurrentLocation().then((location) => { resolve(location); }, (error) => { reject(error); }); } else { /* first request location permissions */ geolocation.enableLocationRequest().then(() => { /* granted location permissions, now get location */ this.getCurrentLocation().then((location) => { resolve(location); }, (error) => { reject(error); }); }, (error) => { console.error(`Failed to obtain location permissions: ${JSON.stringify(error)}`); reject(error); }); } }) }); } /* Try to obtain user's location, assumes permissions are handled and save to user session if successful Returns promise containing location */ private getCurrentLocation() { return new Promise((resolve, reject) => { geolocation.getCurrentLocation({ desiredAccuracy: Accuracy.any, iosAllowsBackgroundLocationUpdates: false, maximumAge: 5000, // in milliseconds timeout: 5000, // in milliseconds }) .then((location) => { resolve(location); }, (error) => { console.error(`Failed to obtain user location: ${JSON.stringify(error)}`); reject(error); }); }); } I think the error means a timeout was exceeded while searching for a location (i.e. no GPS lock was obtained on the device). In your case the simulator did not return a location. You should be able to handle this in your code by increasing the timeout or prompting the user to retry when a GPS lock is available. You can also play with the iOS simulator settings and enable location from the Debug menu. @lini thanks for your reply. I have one of the Apple locations set in the simulator (which every now and then is obtained properly). Even increasing the timeout to 30k ms still results in the error above sometimes. 
I tried using the demo app from this repository and the debug location feature of the simulator. I clicked Enable Location then Start Monitoring and the updates on screen were consistent with the selected debug location: Can you try the demo app and see if there is a problem there for you? @lini I have the same problem. I changed locations like you did, but it is still not working. @lini Using the demo app I get the error below on the iOS simulator (with a location set) when tapping the Get Current Location button once, triggering the getCurrentLocation method. Yet at the same time the Start Monitoring button, which triggers the watchLocation method, displays locations just fine! app/main-page.js:57:20: Error: Timeout while searching for location! After changing the promise reject callback to log (error.message || error) instead of the entire object, I indeed get the same timeout message as with the demo app. Still, somehow there seems to be an issue with getCurrentLocation on the iOS simulator. I will try to upgrade to the latest Xcode and see if the issue is solved. Still happening after upgrading to the latest Xcode (now 9.4.1). Anyway, as long as this happens only on the simulator, it is not that important to me. This is still an issue, did you guys figure it out yet?
gharchive/issue
2018-06-21T08:42:42
2025-04-01T06:37:17.877453
{ "authors": [ "abhayastudios", "aosi87", "aturan23", "lini" ], "repo": "NativeScript/nativescript-geolocation", "url": "https://github.com/NativeScript/nativescript-geolocation/issues/139", "license": "apache-2.0", "license_type": "permissive", "license_source": "bigquery" }
45761636
Hard-coded keyboard shortcuts in help text The keyboard shortcuts in the help text are hard-coded. This duplicates the information provided elsewhere in the code, making it possible for the help text and the application to get out of sync. #102 means that nearly all shortcuts are now documented in menus. Most of the text in Help, About is out of date. Suggest we remove Help, About and consider proper help pages in the future, if the need arises.
gharchive/issue
2014-10-14T15:09:03
2025-04-01T06:37:17.884689
{ "authors": [ "quicklizard99" ], "repo": "NaturalHistoryMuseum/inselect", "url": "https://github.com/NaturalHistoryMuseum/inselect/issues/94", "license": "bsd-3-clause", "license_type": "permissive", "license_source": "bigquery" }
930669191
natural.DiceCoefficient does not compute the Dice coefficient Your algorithm counts all bigram duplicates, whereas the Dice coefficient only counts distinct bigrams (a sketch of the distinct-bigram formula follows this record). Please have a look at Talisman, NLTK or the algorithms in many languages at https://en.wikibooks.org/wiki/Algorithm_Implementation/Strings/Dice's_coefficient for a proper implementation. As an example, when comparing two Wikipedia articles with a Dice coefficient of 0.76, your algorithm outputs 0.42 (the two articles are the raw wikitext of https://en.wikipedia.org/wiki/French_Revolution and https://en.wikipedia.org/wiki/Influence_of_the_French_Revolution; you have to click on the Edit tab to get the wikitext). Thanks for your feedback. I created a new version of the Dice coefficient module based on the algorithms site you mentioned. I added support for strings of length 1 and made it case insensitive, like in the original version.
gharchive/issue
2021-06-26T10:40:31
2025-04-01T06:37:17.891070
{ "authors": [ "Hugo-ter-Doest", "vibl" ], "repo": "NaturalNode/natural", "url": "https://github.com/NaturalNode/natural/issues/603", "license": "mit", "license_type": "permissive", "license_source": "bigquery" }
1238973967
IPV6 Does this CDN allow for IPv6? IPv6 is on the to-do list for both branches, so no, but it should be possible to add.
gharchive/issue
2022-05-17T17:51:02
2025-04-01T06:37:17.907622
{ "authors": [ "Ne00n", "devramsean0" ], "repo": "Ne00n/woodCDN", "url": "https://github.com/Ne00n/woodCDN/issues/12", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1384051349
Now you can set LLVM_CONFIG when you run build.sh As the title says. What I needed was something like this to work: LLVM_CONFIG=/usr/bin/llvm-config-14 ./build.sh hooray
gharchive/pull-request
2022-09-23T16:41:03
2025-04-01T06:37:17.911259
{ "authors": [ "FeepingCreature", "dejlek" ], "repo": "Neat-Lang/neat", "url": "https://github.com/Neat-Lang/neat/pull/13", "license": "BSD-3-Clause", "license_type": "permissive", "license_source": "github-api" }
224606670
Add Heroku Deploy button Docs: https://devcenter.heroku.com/articles/heroku-button#creating-the-app-json-file @AndrewDryga I need public annon.api deployment.
gharchive/issue
2017-04-26T21:36:07
2025-04-01T06:37:17.912790
{ "authors": [ "alexeybondarenko" ], "repo": "Nebo15/annon.web", "url": "https://github.com/Nebo15/annon.web/issues/47", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
223275435
New persist man... diff was too aggressive. json.go is a completely new file, jsonold.go is the old json.go, and if you want to read the code in a saner way, you should just view it as a standalone file. Still writing more comprehensive tests, but I wanted to see how the checksum parser worked on Windows; I'm wondering if a different approach to newlines will break my byte-parsing. This PR is intended to address the file corruption error that has been occasionally wiping out users' files after a hard shutdown. This new method has more safety around it, and notably does not use os.Rename at all, to preserve safety.
gharchive/pull-request
2017-04-21T04:50:11
2025-04-01T06:37:17.923910
{ "authors": [ "DavidVorick" ], "repo": "NebulousLabs/Sia", "url": "https://github.com/NebulousLabs/Sia/pull/1734", "license": "mit", "license_type": "permissive", "license_source": "bigquery" }
2003024265
🛑 Becpl Angular UI is down In 7ae33e4, Becpl Angular UI ($BECPL_ANGULAR_URL) was down: HTTP code: 0 Response time: 0 ms Resolved: Becpl Angular UI is back up in def27be after 16 minutes.
gharchive/issue
2023-11-20T21:19:55
2025-04-01T06:37:17.947845
{ "authors": [ "NehalDamania" ], "repo": "NehalDamania/becpl-uptime", "url": "https://github.com/NehalDamania/becpl-uptime/issues/2210", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
2246004419
🛑 Becpl Angular UI is down In d8c0759, Becpl Angular UI ($BECPL_ANGULAR_URL) was down: HTTP code: 0 Response time: 0 ms Resolved: Becpl Angular UI is back up in 1eaf144 after 17 minutes.
gharchive/issue
2024-04-16T13:04:27
2025-04-01T06:37:17.950142
{ "authors": [ "NehalDamania" ], "repo": "NehalDamania/becpl-uptime", "url": "https://github.com/NehalDamania/becpl-uptime/issues/8248", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1599763352
Create License yaiy creating my pull request hmmmm nice
gharchive/pull-request
2023-02-25T16:46:19
2025-04-01T06:37:17.950848
{ "authors": [ "Neiware" ], "repo": "Neiware/ssproyecto", "url": "https://github.com/Neiware/ssproyecto/pull/1", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
63108907
Strong consistency support for ets "Strong consistency only support the leveldb backend currently." Do you have a plan to support the ets backend at some point in the future? Hi, right now it does not make any sense, since the strong consistency backend (riak_ensemble) goes to disk for the tree.
gharchive/issue
2015-03-19T22:49:54
2025-04-01T06:37:17.959394
{ "authors": [ "kalta", "liveforeverx" ], "repo": "Nekso/nkbase", "url": "https://github.com/Nekso/nkbase/issues/2", "license": "apache-2.0", "license_type": "permissive", "license_source": "bigquery" }
180251179
Task: Update readme with new emojis that have been added. Some emojis were fixed and some were added. Update the readme with the ones that were added in #44 @VagishVela it looks like the readme was updated as part of that PR already https://github.com/Neo11/ChitChat/pull/44/files -- Am I missing something? @donofriov Whoops, my bad.
gharchive/issue
2016-09-30T08:44:21
2025-04-01T06:37:17.967743
{ "authors": [ "VagishVela", "donofriov" ], "repo": "Neo11/ChitChat", "url": "https://github.com/Neo11/ChitChat/issues/48", "license": "mit", "license_type": "permissive", "license_source": "bigquery" }
2264433737
Can't open multiple Neogit tabs in a window Description I want to open multiple Neogit tabs (https://neovim.io/doc/user/tabpage.html) in 1 Neovim window but every time I run the command :tab Neogit cwd=<git dir>, the new git directory is opened in the current tab. Therefore, I can run only 1 instance of Neogit in 1 window. Neovim version NVIM v0.9.4 Build type: Release LuaJIT 2.1.0-beta3 Operating system and version macOS 13.6.4 Steps to reproduce On terminal run nvim On nvim run :Neogit cwd=<git-dir-1> On nvim run :tab Neogit cwd=<git-dir-2> Expected behavior There should be 2 Neogit tabs pointing to respective git directories. Actual behavior On nvim run :Neogit cwd=<git-dir-1>: The current tab shows Neogit instance pointing to git-dir-1 On nvim run :tab Neogit cwd=<git-dir-2>: A new blank second tab is opened The first tab now points to git-dir-2 No tab points to git-dir-1 Minimal config -- NOTE: See the end of this file if you are reporting an issue, etc. Ignore all the "scary" functions up top, those are -- used for setup and other operations. local M = {} local base_root_path = vim.fn.fnamemodify(debug.getinfo(1, "S").source:sub(2), ":p:h") .. "/.min" function M.root(path) return base_root_path .. "/" .. (path or "") end function M.load_plugin(plugin_name, plugin_url) local package_root = M.root("plugins/") local install_destination = package_root .. plugin_name vim.opt.runtimepath:append(install_destination) if not vim.loop.fs_stat(package_root) then vim.fn.mkdir(package_root, "p") end if not vim.loop.fs_stat(install_destination) then print(string.format("> Downloading plugin '%s' to '%s'", plugin_name, install_destination)) vim.fn.system({ "git", "clone", "--depth=1", plugin_url, install_destination, }) if vim.v.shell_error > 0 then error(string.format("> Failed to clone plugin: '%s' in '%s'!", plugin_name, install_destination), vim.log.levels.ERROR) end end end ---@alias PluginName string The plugin name, will be used as part of the git clone destination ---@alias PluginUrl string The git url at which a plugin is located, can be a path. See https://git-scm.com/book/en/v2/Git-on-the-Server-The-Protocols for details ---@alias MinPlugins table<PluginName, PluginUrl> ---Do the initial setup. Downloads plugins, ensures the minimal init does not pollute the filesystem by keeping ---everything self contained to the CWD of the minimal init file. Run prior to running tests, reproducing issues, etc. ---@param plugins? table<PluginName, PluginUrl> function M.setup(plugins) vim.opt.packpath = {} -- Empty the package path so we use only the plugins specified vim.opt.runtimepath:append(M.root(".min")) -- Ensure the runtime detects the root min dir -- Install required plugins if plugins ~= nil then for plugin_name, plugin_url in pairs(plugins) do M.load_plugin(plugin_name, plugin_url) end end vim.env.XDG_CONFIG_HOME = M.root("xdg/config") vim.env.XDG_DATA_HOME = M.root("xdg/data") vim.env.XDG_STATE_HOME = M.root("xdg/state") vim.env.XDG_CACHE_HOME = M.root("xdg/cache") -- NOTE: Cleanup the xdg cache on exit so new runs of the minimal init doesn't share any previous state, e.g. shada vim.api.nvim_create_autocmd("VimLeave", { callback = function() vim.fn.system({ "rm", "-r", "-f", M.root("xdg") }) end }) end -- NOTE: If you have additional plugins you need to install to reproduce your issue, include them in the plugins -- table within the setup call below. 
M.setup({ plenary = "https://github.com/nvim-lua/plenary.nvim.git", telescope = "https://github.com/nvim-telescope/telescope.nvim", diffview = "https://github.com/sindrets/diffview.nvim", neogit = "https://github.com/NeogitOrg/neogit" }) -- WARN: Do all plugin setup, test runs, reproductions, etc. AFTER calling setup with a list of plugins! -- Basically, do all that stuff AFTER this line. require("neogit").setup({}) -- For instance, setup Neogit Not a bug - that's how it works on master. However, I've improved the experience a lot on the nightly branch, which will be released with nvim 0.10. Better CWD handling, that sort of thing. Try it out :) @CKolkey did try that myself, i have kind=tab, seems like if you already have an opened neogit status in a tab, if you call Neogit cwd=... again the neogit tab is not focused (but if you manually go to the tab it seems the state is updated). Tried on nightly with .setup() call. Is there a way allowing us to have multiple tabs opened for git status each for unique cwd ? I have a similar use case as the OP, where i might have multiple files from multiple working directories opened in the current nvim window. So instead of updaing the current neogit tab status if it exists, it will open a new tab for the cwd, if a tab for that cwd exists, then it can be focused and updated (working similarly to what is intended atm) - could be an opt in. Not sure how it would work with kind=split, but this could be only applicable for kind=tab wdyt ? It's going to require a bit of doing.. I use vim.uv.getcwd() all over to find the correct status buffer instance and repository object. I'm beginning to think that relying on that was a mistake in the first place as it's effectively a global variable. It's no issue to run git against a specific directory with the -C flag, but what I need to do is start passing the CWD into pretty much every constructor function, so it can either be consumed (run a git command) or passed on to a buffer that would consume it. Not a huge change, I think, but a bit tedious. However, it would mean that tabs/splits/whatever could point to a project directory and all coexist.
gharchive/issue
2024-04-25T20:26:52
2025-04-01T06:37:17.992970
{ "authors": [ "CKolkey", "EarthyOrange", "asmodeus812" ], "repo": "NeogitOrg/neogit", "url": "https://github.com/NeogitOrg/neogit/issues/1260", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
2324111289
Error "Neogit has not been setup!" Description After adding the plugin to Lazy and using the :Neogit command, it gives the error: "Neogit has not been setup!" P.S. I'm new to neovim Neovim version NVIM v0.9.1 Build type: Release LuaJIT 2.1.0-beta3 Operating system and version macOs 13.1 (Ventura) m1 Steps to reproduce add the code to lua/plugins/neogit.lua: ` return { { "NeogitOrg/neogit", dependencies = { "nvim-lua/plenary.nvim", -- required "sindrets/diffview.nvim", -- optional - Diff integration -- Only one of these is needed, not both. "nvim-telescope/telescope.nvim", -- optional -- "ibhagwan/fzf-lua", -- optional }, cmd = "Neogit", config = true, }, } ` Go to a new terminal window Open a project Run the command :Neogit Expected behavior No response Actual behavior The error "Neogit has not been setup!" is displayed. Minimal config -- NOTE: See the end of this file if you are reporting an issue, etc. Ignore all the "scary" functions up top, those are -- used for setup and other operations. local M = {} local base_root_path = vim.fn.fnamemodify(debug.getinfo(1, "S").source:sub(2), ":p:h") .. "/.min" function M.root(path) return base_root_path .. "/" .. (path or "") end function M.load_plugin(plugin_name, plugin_url) local package_root = M.root("plugins/") local install_destination = package_root .. plugin_name vim.opt.runtimepath:append(install_destination) if not vim.loop.fs_stat(package_root) then vim.fn.mkdir(package_root, "p") end if not vim.loop.fs_stat(install_destination) then print(string.format("> Downloading plugin '%s' to '%s'", plugin_name, install_destination)) vim.fn.system({ "git", "clone", "--depth=1", plugin_url, install_destination, }) if vim.v.shell_error > 0 then error(string.format("> Failed to clone plugin: '%s' in '%s'!", plugin_name, install_destination), vim.log.levels.ERROR) end end end ---@alias PluginName string The plugin name, will be used as part of the git clone destination ---@alias PluginUrl string The git url at which a plugin is located, can be a path. See https://git-scm.com/book/en/v2/Git-on-the-Server-The-Protocols for details ---@alias MinPlugins table<PluginName, PluginUrl> ---Do the initial setup. Downloads plugins, ensures the minimal init does not pollute the filesystem by keeping ---everything self contained to the CWD of the minimal init file. Run prior to running tests, reproducing issues, etc. ---@param plugins? table<PluginName, PluginUrl> function M.setup(plugins) vim.opt.packpath = {} -- Empty the package path so we use only the plugins specified vim.opt.runtimepath:append(M.root(".min")) -- Ensure the runtime detects the root min dir -- Install required plugins if plugins ~= nil then for plugin_name, plugin_url in pairs(plugins) do M.load_plugin(plugin_name, plugin_url) end end vim.env.XDG_CONFIG_HOME = M.root("xdg/config") vim.env.XDG_DATA_HOME = M.root("xdg/data") vim.env.XDG_STATE_HOME = M.root("xdg/state") vim.env.XDG_CACHE_HOME = M.root("xdg/cache") -- NOTE: Cleanup the xdg cache on exit so new runs of the minimal init doesn't share any previous state, e.g. shada vim.api.nvim_create_autocmd("VimLeave", { callback = function() vim.fn.system({ "rm", "-r", "-f", M.root("xdg") }) end }) end -- NOTE: If you have additional plugins you need to install to reproduce your issue, include them in the plugins -- table within the setup call below. 
M.setup({ plenary = "https://github.com/nvim-lua/plenary.nvim.git", telescope = "https://github.com/nvim-telescope/telescope.nvim", diffview = "https://github.com/sindrets/diffview.nvim", neogit = "https://github.com/NeogitOrg/neogit" }) -- WARN: Do all plugin setup, test runs, reproductions, etc. AFTER calling setup with a list of plugins! -- Basically, do all that stuff AFTER this line. require("neogit").setup({}) -- For instance, setup Neogit I believe you want opts = {} instead of config = true :) I managed to remove this error by updating neovim to the latest version :-)
gharchive/issue
2024-05-29T19:38:16
2025-04-01T06:37:18.001134
{ "authors": [ "CKolkey", "astrelinvl" ], "repo": "NeogitOrg/neogit", "url": "https://github.com/NeogitOrg/neogit/issues/1340", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
502921126
Quantization dequantize flag Hi, while looking at the quantization function I found that there is a dequantize flag. But as I think, the network should be forwarded to the dequantized value. What's the purpose of the dequantize flag? In other words, what's the difference between the dequantized and non-dequantized cases? Thank you. I'm not sure what you mean by "the network should be forwarded to the dequantized value", but in any case, this is a utility function called from many places: in some cases we want to just get the effect of quantization but still let the model run with float tensors (aka "fake-quantization"). This is common in quantization-aware training scenarios. In these cases we'll set dequantize=True. In other cases we want to quantize a tensor and keep working on the quantized values. This happens in post-training quantization. That's exactly what I wanted to know. Thank you so much
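A tiny numeric illustration of the two cases may help. This is generic linear-quantization arithmetic, not distiller's actual implementation, and the scale value is just an assumption for the example:

```java
public class FakeQuantizationDemo {

    // Generic linear quantization: map a float onto an integer grid defined by 'scale'.
    static long quantize(double x, double scale) {
        return Math.round(x / scale);
    }

    // Map the integer grid point back into the float domain.
    static double dequantize(long q, double scale) {
        return q * scale;
    }

    public static void main(String[] args) {
        double scale = 0.1;   // assumed quantization step for the example
        double x = 1.234;

        long q = quantize(x, scale);          // 12 -> what you keep when dequantize=False
        double fake = dequantize(q, scale);   // ~1.2 -> what you get when dequantize=True

        // dequantize=False: downstream code keeps working on integer values
        // (the post-training quantization case described above).
        // dequantize=True: the tensor stays in float but already carries the
        // quantization error ("fake-quantization", as used in quantization-aware training).
        System.out.println("quantized = " + q + ", fake-quantized = " + fake);
    }
}
```

In both cases the information lost to rounding is the same; the flag only decides whether the caller continues in the integer domain or in the float domain.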
gharchive/issue
2019-10-05T05:54:57
2025-04-01T06:37:18.061334
{ "authors": [ "guyjacob", "hoseung2" ], "repo": "NervanaSystems/distiller", "url": "https://github.com/NervanaSystems/distiller/issues/398", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
467941195
[MLIR] Concat Adds support for Concat to MLIR compiler/lowerer. All but two tests in CPU.concat* are passing. The two that are failing are triggering this check in compiled_kernel.cpp: https://github.com/NervanaSystems/ngraph/blob/3f6cda4c22636fb5c0b7d2038eb4676712504671/src/ngraph/op/experimental/compiled_kernel.cpp#L78 I think this failure is unrelated to this PR, other than the fact that this PR exposes these tests to the MLIR/compiled_kernel machinery. Example dump thru affine dialect: [ RUN ] CPU.concat_5d *** nGraph Dialect Dump: *** func @main(%arg0: !ng.tensor<2x3x3x3x2xf32>, %arg1: !ng.tensor<2x3x2x3x2xf32>, %arg2: !ng.tensor<2x3x4x3x2xf32>) -> !ng.tensor<2x3x9x3x2xf32> { %0 = "ng.concat"(%arg2, %arg0, %arg1) {concatenation_axis = 2 : i64} : (!ng.tensor<2x3x4x3x2xf32>, !ng.tensor<2x3x3x3x2xf32>, !ng.tensor<2x3x2x3x2xf32>) -> !ng.tensor<2x3x9x3x2xf32> "ng.return"(%0) : (!ng.tensor<2x3x9x3x2xf32>) -> () } *** Affine Dialect Dump: *** #map0 = () -> (0) #map1 = () -> (2) #map2 = () -> (3) #map3 = () -> (4) #map4 = (d0) -> (d0 + 4) #map5 = (d0) -> (d0 + 7) func @main(%arg0: memref<2x3x3x3x2xf32>, %arg1: memref<2x3x2x3x2xf32>, %arg2: memref<2x3x4x3x2xf32>, %arg3: memref<2x3x9x3x2xf32>, %arg4: index) { affine.for %i0 = 0 to 2 { affine.for %i1 = 0 to 3 { affine.for %i2 = 0 to 4 { affine.for %i3 = 0 to 3 { affine.for %i4 = 0 to 2 { %0 = load %arg2[%i0, %i1, %i2, %i3, %i4] : memref<2x3x4x3x2xf32> store %0, %arg3[%i0, %i1, %i2, %i3, %i4] : memref<2x3x9x3x2xf32> } } } } } affine.for %i5 = 0 to 2 { affine.for %i6 = 0 to 3 { affine.for %i7 = 0 to 3 { %1 = affine.apply #map4(%i7) affine.for %i8 = 0 to 3 { affine.for %i9 = 0 to 2 { %2 = load %arg0[%i5, %i6, %i7, %i8, %i9] : memref<2x3x3x3x2xf32> store %2, %arg3[%i5, %i6, %1, %i8, %i9] : memref<2x3x9x3x2xf32> } } } } } affine.for %i10 = 0 to 2 { affine.for %i11 = 0 to 3 { affine.for %i12 = 0 to 2 { %3 = affine.apply #map5(%i12) affine.for %i13 = 0 to 3 { affine.for %i14 = 0 to 2 { %4 = load %arg1[%i10, %i11, %i12, %i13, %i14] : memref<2x3x2x3x2xf32> store %4, %arg3[%i10, %i11, %3, %i13, %i14] : memref<2x3x9x3x2xf32> } } } } } return } You can remove the compiled_kernel assert. It is left-over from previous purpose of that node. Nodes obviously don't need to be of same type/shape. @dcaballe Regarding tests: $ NGRAPH_MLIR=1 test/unit-test --gtest_filter='CPU.*concat*' Note: Google Test filter = CPU.*concat* [==========] Running 16 tests from 1 test case. [----------] Global test environment set-up. 
[----------] 16 tests from CPU [ RUN ] CPU.concat_matrix_colwise [ OK ] CPU.concat_matrix_colwise (12 ms) [ RUN ] CPU.concat_matrix_rowwise [ OK ] CPU.concat_matrix_rowwise (9 ms) [ RUN ] CPU.concat_matrix_int64 [ OK ] CPU.concat_matrix_int64 (8 ms) [ RUN ] CPU.concat_vector [ OK ] CPU.concat_vector (6 ms) [ RUN ] CPU.concat_4d_tensor [ OK ] CPU.concat_4d_tensor (9 ms) [ RUN ] CPU.concat_2d_tensor [ OK ] CPU.concat_2d_tensor (7 ms) [ RUN ] CPU.concat_in_place_2d_tensor [ OK ] CPU.concat_in_place_2d_tensor (9 ms) [ RUN ] CPU.concat_in_place_propagate_2d_tensor /localdisk/amprocte/Work/ngraph/build/test/backend_test_CPU.cpp:1031: Failure Value of: test::all_close_f( (vector<float>{3, 7, 2}), read_vector<float>(result), MIN_FLOAT_TOLERANCE_BITS) Actual: false (7 is not close to 2 at index 1 2 is not close to 0 at index 2 diff count: 2 out of 3 passing criteria - mismatch allowed @ mantissa bit: 24 or later (0 tolerance bits) 0 value(s) below min_signal: 0 tightest match - mismatch occurred @ mantissa bit: 24 or next bit (3 vs 3 at [0]) loosest match - mismatch occurred @ mantissa bit: 0 or next bit (2 vs 0 at [2]) median match - mismatch occurred @ mantissa bit: 0 or next bit ) Expected: true [ FAILED ] CPU.concat_in_place_propagate_2d_tensor (9 ms) [ RUN ] CPU.concat_5d [ OK ] CPU.concat_5d (53 ms) [ RUN ] CPU.concat_zero_length_1d_last [ OK ] CPU.concat_zero_length_1d_last (5 ms) [ RUN ] CPU.concat_zero_length_1d_middle [ OK ] CPU.concat_zero_length_1d_middle (5 ms) [ RUN ] CPU.concat_zero_zero [ OK ] CPU.concat_zero_zero (1 ms) [ RUN ] CPU.concat_zero_length_4d_middle [ OK ] CPU.concat_zero_length_4d_middle (9 ms) [ RUN ] CPU.backwards_concat_vector [ OK ] CPU.backwards_concat_vector (8 ms) [ RUN ] CPU.backwards_concat_axis_0 [ OK ] CPU.backwards_concat_axis_0 (11 ms) [ RUN ] CPU.backwards_concat_axis_1 [ OK ] CPU.backwards_concat_axis_1 (134 ms) [----------] 16 tests from CPU (295 ms total) [----------] Global test environment tear-down [==========] 16 tests from 1 test case ran. (296 ms total) [ PASSED ] 15 tests. [ FAILED ] 1 test, listed below: [ FAILED ] CPU.concat_in_place_propagate_2d_tensor 1 FAILED TEST 17:33:02.208622: [MAIN] Tests finished: Time: 297 ms Exit code: 1 With NGRAPH_MLIR_DUMP_ALL=1, all the above are dumping reasonable-looking affine code. The failed test is probably due to the issue we've been discussing on Slack. @dcaballe @nmostafa I think comments are addressed at this point. Let me know.
gharchive/pull-request
2019-07-15T04:58:29
2025-04-01T06:37:18.066498
{ "authors": [ "aprocter", "nmostafa" ], "repo": "NervanaSystems/ngraph", "url": "https://github.com/NervanaSystems/ngraph/pull/3225", "license": "apache-2.0", "license_type": "permissive", "license_source": "bigquery" }
1586479805
Revisit counter dash to underscore replacement logic As raised by jfong on Discord This logic happens here Replacement of dash to underscore happens automatically when you don't use the rename => marker, and yes, that behavior is used extensively in zapi templates. Perhaps the idea is that if you rename, you get the final call to decide on the name (even if it's wrong). That may be questionable 🙂 As noted, Prometheus does not permit dashes - in labels. ONTAP REST documentation states: However, REST converts hyphens - in CLI parameter names to underscores _ in the REST API JSON response body. We have investigated this issue and below is the summary: --> Prometheus does not support the hyphen (-) in metric/label names and errors out, whereas InfluxDB can support the hyphen (-). --> Harvest's behaviour is: if we have the rename marker => in the template, then Harvest will honour that display name as-is. If we don't have the rename marker => in the template, then Harvest will try to help and generate the display name accordingly. The solution we all agreed on is: if the rename marker => exists in the template, then Harvest should not intervene and should honour the display name as-is.
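For readers wondering why the automatic dash-to-underscore rewrite exists at all, here is a minimal, generic sketch of the constraint; Harvest itself is written in Go and its real logic differs, so treat this purely as an illustration:

```java
public class PrometheusNameSanitizer {

    // Prometheus label names must match [a-zA-Z_][a-zA-Z0-9_]* (metric names may also
    // contain ':'), so hyphens coming from ONTAP CLI/ZAPI counter names are not allowed
    // and have to be rewritten before exposition. (A leading digit would need extra
    // handling, which this sketch skips.)
    static String sanitize(String raw) {
        return raw.replaceAll("[^a-zA-Z0-9_]", "_");
    }

    public static void main(String[] args) {
        System.out.println(sanitize("read-latency")); // read_latency
    }
}
```

With the agreed behaviour, this kind of rewrite would only be applied when no rename marker => is present; an explicit rename is honoured verbatim, even if the chosen name is not Prometheus-safe.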
gharchive/issue
2023-02-15T20:05:38
2025-04-01T06:37:18.076764
{ "authors": [ "Hardikl", "cgrinds" ], "repo": "NetApp/harvest", "url": "https://github.com/NetApp/harvest/issues/1738", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
224272212
Need help Now my screen is rotated on my laptop. My Toshiba laptop has rotated the screen upside down by itself.
gharchive/issue
2017-04-25T21:06:18
2025-04-01T06:37:18.090022
{ "authors": [ "whonow" ], "repo": "NetDocuments/ad-join-cookbook", "url": "https://github.com/NetDocuments/ad-join-cookbook/issues/24", "license": "apache-2.0", "license_type": "permissive", "license_source": "bigquery" }
308806368
On Windows, pomelo add/restart and similar operations all fail What could the problem be??? On my Mac and on a Linux server I run pomelo init to create a new project and start it up. Running pomelo add to add a connector server prints Successfully add server, and checking the list confirms it was added. On Windows, with the same pomelo and Node versions, running the same pomelo add operation fails with [ { message: 'Port occupied already, check your server to add.', stack: 'Error: Port occupied already, check your server to add.\n at H:\win\TestPomeloDemo\game-server\node_modules\pomelo\lib\modules\console.js:360:32\n at Object.utils.invokeCallback (H:\win\TestPomeloDemo\game-server\node_modules\pomelo\lib\util\utils.js:21:14)\n at H:\win\TestPomeloDemo\game-server\node_modules\pomelo\lib\modules\console.js:300:13\n at ChildProcess.exithandler (child_process.js:209:5)\n at emitTwo (events.js:100:13)\n at ChildProcess.emit (events.js:185:7)\n at maybeClose (internalild_process.js:850:16)\n at Socket. (internalild_process.js:323:11)\n at emitOne (events.js:90:13)\n at Socket.emit (events.js:182:7)' } ] But I checked, and the port I used had never been used before. Has anyone else run into this situation?? That's because it only supports the Linux environment. let checkPort = function (server: ServerInfo, cb: MasterCallback) { if (!server.port && !server.clientPort) { utils.invokeCallback(cb, 'leisure'); return; } let p = server.port || server.clientPort; let host = server.host; let cmd = 'netstat -tln | grep '; if (!utils.isLocal(host)) { cmd = 'ssh ' + host + ' ' + cmd; } exec(cmd + p, function (err, stdout, stderr) { if (stdout || stderr) { utils.invokeCallback(cb, 'busy'); } else { p = server.clientPort; exec(cmd + p, function (err, stdout, stderr) { if (stdout || stderr) { utils.invokeCallback(cb, 'busy'); } else { utils.invokeCallback(cb, 'leisure'); } }); } }); }; Look at this line let cmd = 'netstat -tln | grep '; Change it to this: var cmd = os.type() === 'Windows_NT' ? `netstat -ano | %windir%\\system32\\find.exe "${p}"` : `netstat -tln | grep ${p}`; I am using @node-pinus This pomelo is really one pitfall nested in another; the problem can be solved by brute force. Find the pomelo module /node_modules/pomelo/lib/modules/console.js and locate the runServer method: var runServer = function(app, server, cb) { checkPort(server, function(status) { if(status === 'busy') { utils.invokeCallback(cb, new Error('Port occupied already, check your server to add.')); } else { starter.run(app, server, function(err) { if(err) { utils.invokeCallback(cb, new Error(err), null); return; } }); process.nextTick(function() { utils.invokeCallback(cb, null, { status: "ok" }); }); } }); }; checkPort is broken on Windows, so just remove it — the check is pointless, long-winded noise — and solve it the brute-force way: var runServer = function(app, server, cb) { starter.run(app, server, function(err) { if(err) { utils.invokeCallback(cb, new Error(err), null); return; } }); process.nextTick(function() { utils.invokeCallback(cb, null, { status: "ok" }); }); }; Change this: var cmd = os.type() === 'Windows_NT' ? `netstat -ano | %windir%\\system32\\find.exe "${p}"` : `netstat -tln | grep ${p}`; I don't know whether the project team is lazy or what, but nobody maintains this anymore. find kept reporting "find: Parameter format not correct"; I switched it to findstr and it works fine.
gharchive/issue
2018-03-27T03:09:48
2025-04-01T06:37:18.098611
{ "authors": [ "Alisheng", "flamefox", "npcode", "whtiehack", "zhongG" ], "repo": "NetEase/pomelo", "url": "https://github.com/NetEase/pomelo/issues/994", "license": "mit", "license_type": "permissive", "license_source": "bigquery" }
2148468429
Issues with the demo project Hi, caddy, the sandbox, and this project are all set up, but there are a few problems when running it, and the pages also look different from what this GitHub repo advertises — please advise. By the way, one more question: how can pages published from the project be used for RN? Is there a detailed tutorial? The open-source version currently only provides an integration example for the component library, https://github.com/NetEase/tango-components; it does not provide the complete component library. Decoupling the internal component library involves a fair amount of work, and we will merge the component library part into the open-source code at an appropriate time. By the way, one more question: how can pages published from the project be used for RN? Is there a detailed tutorial? The RN part involves Native capabilities, and that part is not within the open-source scope. The open-source version currently only provides an integration example for the component library; it does not provide the complete component library. Decoupling the internal component library involves a fair amount of work, and we will merge the component library part into the open-source code at an appropriate time. Yes, the open-source version seems to be missing a lot of functionality. I hope a complete version can be provided, ideally one where, after working in the platform, the result can be seen on both the H5 side and the mobile side at the same time. By the way, one more question: how can pages published from the project be used for RN? Is there a detailed tutorial? The RN part involves Native capabilities, and that part is not within the open-source scope. So the open-sourced capabilities cannot generate RN pages for now? The open-source version currently only provides an integration example for the component library; it does not provide the complete component library. Decoupling the internal component library involves a fair amount of work, and we will merge the component library part into the open-source code at an appropriate time. Yes, the open-source version seems to be missing a lot of functionality. I hope a complete version can be provided, ideally one where, after working in the platform, the result can be seen on both the H5 side and the mobile side at the same time. 💪 This is in progress; there is a lot of work involved, and we plan to open-source it gradually.
gharchive/issue
2024-02-22T08:09:17
2025-04-01T06:37:18.105219
{ "authors": [ "FKLam", "Thea1211", "wwsun" ], "repo": "NetEase/tango", "url": "https://github.com/NetEase/tango/issues/108", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
194085021
Question regarding maximumSize and coreSize I have set the maximum size of the thread pool to 10 and the core size to 2. The requests are getting queued (if more than 2 requests are made) instead of a new thread being created in the thread pool. As far as I know, the thread pool should support up to 10 threads at maximum and then queue subsequent requests. If the number of requests is low and the other 8 threads in the thread pool are not assigned any tasks, then they should wait at most 1 minute before being destroyed (but the thread pool will still contain 2 daemon threads even if no requests are made). But this is not happening in my case. Kindly advise if my understanding is right! Can you retry with 1.5.9 (just released)? If the issue still exists, I can dig in. @mattrjacobs This issue is still persisting. I tried using 1.5.9. Kindly suggest. Can you inspect the configuration of the running system to see what values are being used? You can use the Hystrix configuration stream or do something manual by inspecting Hystrix objects and logging values out. Hi @mattrjacobs, thanks for your help all the way; it's working fine with the latest version. I forgot to reconfigure after updating the Hystrix jar to 1.5.9. Closing this issue 👍 @mohamedanees : I have the same problem with dynamic Hystrix thread pool settings, not working as expected. I am using Hystrix 1.5.12 (maximumSize=100, coreSize=60 and allowMaximumSizeToDivergeFromCoreSize=true). Can you let me know what it is that you forgot to reconfigure to fix the issue??? @karteeksaragad - I'm facing a similar issue with 1.5.12. Did it work for you? @mattrjacobs
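For anyone trying to reproduce the expected behaviour, a minimal configuration sketch follows. It assumes Hystrix 1.5.9 or later (where maximumSize is only honoured together with allowMaximumSizeToDivergeFromCoreSize) and uses a placeholder command; note also that the underlying Java ThreadPoolExecutor only grows past the core size once its work queue is full, so a large or unbounded queue will keep requests queued at coreSize.

```java
import com.netflix.hystrix.HystrixCommand;
import com.netflix.hystrix.HystrixCommandGroupKey;
import com.netflix.hystrix.HystrixThreadPoolProperties;

public class SampleCommand extends HystrixCommand<String> {

    public SampleCommand() {
        super(Setter
                .withGroupKey(HystrixCommandGroupKey.Factory.asKey("SampleGroup"))
                .andThreadPoolPropertiesDefaults(
                        HystrixThreadPoolProperties.Setter()
                                .withCoreSize(2)                                  // threads kept around
                                .withMaximumSize(10)                              // upper bound (1.5.9+)
                                .withAllowMaximumSizeToDivergeFromCoreSize(true)  // required for maximumSize to apply
                                .withKeepAliveTimeMinutes(1)                      // idle non-core threads retire after ~1 min
                                .withMaxQueueSize(-1)));                          // -1 => SynchronousQueue, no queueing
    }

    @Override
    protected String run() {
        return "ok"; // placeholder work
    }
}
```

The equivalent dynamic properties are of the form hystrix.threadpool.<threadPoolKey>.coreSize, .maximumSize and .allowMaximumSizeToDivergeFromCoreSize; checking their effective values via the configuration stream, as suggested above, is a good way to confirm that a reconfiguration actually took effect.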
gharchive/issue
2016-12-07T15:46:33
2025-04-01T06:37:18.130548
{ "authors": [ "karteeksaragad", "mattrjacobs", "mohamedanees", "prakhar241" ], "repo": "Netflix/Hystrix", "url": "https://github.com/Netflix/Hystrix/issues/1436", "license": "apache-2.0", "license_type": "permissive", "license_source": "bigquery" }
427880551
HystrixSampleSseServlet & nio exceptions Faced with randomly but regularly exceptions in logs happened for HystrixSampleSseServlet spring.boot 2.1.2.RELEASE and hystrix 1.5.18 java.nio.BufferOverflowException at java.nio.DirectByteBuffer.put(DirectByteBuffer.java:363) at java.nio.DirectByteBuffer.put(DirectByteBuffer.java:342) at sun.nio.ch.IOUtil.write(IOUtil.java:60) at sun.nio.ch.SocketChannelImpl.write(SocketChannelImpl.java:471) at org.apache.tomcat.util.net.NioChannel.write(NioChannel.java:134) at org.apache.tomcat.util.net.NioBlockingSelector.write(NioBlockingSelector.java:101) at org.apache.tomcat.util.net.NioSelectorPool.write(NioSelectorPool.java:144) at org.apache.tomcat.util.net.NioEndpoint$NioSocketWrapper.doWrite(NioEndpoint.java:1225) at org.apache.tomcat.util.net.SocketWrapperBase.doWrite(SocketWrapperBase.java:743) at org.apache.tomcat.util.net.SocketWrapperBase.flushBlocking(SocketWrapperBase.java:696) at org.apache.tomcat.util.net.SocketWrapperBase.flush(SocketWrapperBase.java:686) at org.apache.coyote.http11.Http11OutputBuffer$SocketOutputBuffer.flush(Http11OutputBuffer.java:553) at org.apache.coyote.http11.filters.ChunkedOutputFilter.flush(ChunkedOutputFilter.java:157) at org.apache.coyote.http11.Http11OutputBuffer.flush(Http11OutputBuffer.java:216) at org.apache.coyote.http11.Http11Processor.flush(Http11Processor.java:1149) at org.apache.coyote.AbstractProcessor.action(AbstractProcessor.java:394) at org.apache.coyote.Response.action(Response.java:209) at org.apache.catalina.connector.OutputBuffer.doFlush(OutputBuffer.java:294) at org.apache.catalina.connector.OutputBuffer.flush(OutputBuffer.java:261) at org.apache.catalina.connector.CoyoteWriter.flush(CoyoteWriter.java:94) at com.netflix.hystrix.contrib.sample.stream.HystrixSampleSseServlet.handleRequest(HystrixSampleSseServlet.java:168) at com.netflix.hystrix.contrib.sample.stream.HystrixSampleSseServlet.doGet(HystrixSampleSseServlet.java:74) at javax.servlet.http.HttpServlet.service(HttpServlet.java:634) at javax.servlet.http.HttpServlet.service(HttpServlet.java:741) at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:231) at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:166) at org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:199) at org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:96) at org.apache.catalina.authenticator.AuthenticatorBase.invoke(AuthenticatorBase.java:490) at org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:139) at org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:92) at org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:74) at org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:343) at org.apache.coyote.http11.Http11Processor.service(Http11Processor.java:408) at org.apache.coyote.AbstractProcessorLight.process(AbstractProcessorLight.java:66) at org.apache.coyote.AbstractProtocol$ConnectionHandler.process(AbstractProtocol.java:834) at org.apache.tomcat.util.net.NioEndpoint$SocketProcessor.doRun(NioEndpoint.java:1417) at org.apache.tomcat.util.net.SocketProcessorBase.run(SocketProcessorBase.java:49) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) at org.apache.tomcat.util.threads.TaskThread$WrappingRunnable.run(TaskThread.java:61) at 
java.lang.Thread.run(Thread.java:748) java.nio.InvalidMarkException at java.nio.Buffer.reset(Buffer.java:306) at org.apache.catalina.connector.OutputBuffer.toReadMode(OutputBuffer.java:831) at org.apache.catalina.connector.OutputBuffer.transfer(OutputBuffer.java:805) at org.apache.catalina.connector.OutputBuffer.write(OutputBuffer.java:508) at org.apache.catalina.connector.CoyoteWriter.write(CoyoteWriter.java:170) at org.apache.catalina.connector.CoyoteWriter.write(CoyoteWriter.java:180) at org.apache.catalina.connector.CoyoteWriter.print(CoyoteWriter.java:238) at com.netflix.hystrix.contrib.sample.stream.HystrixSampleSseServlet.handleRequest(HystrixSampleSseServlet.java:163) at com.netflix.hystrix.contrib.sample.stream.HystrixSampleSseServlet.doGet(HystrixSampleSseServlet.java:74) at javax.servlet.http.HttpServlet.service(HttpServlet.java:634) at javax.servlet.http.HttpServlet.service(HttpServlet.java:741) at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:231) at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:166) at org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:199) at org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:96) at org.apache.catalina.authenticator.AuthenticatorBase.invoke(AuthenticatorBase.java:490) at org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:139) at org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:92) at org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:74) at org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:343) at org.apache.coyote.http11.Http11Processor.service(Http11Processor.java:408) at org.apache.coyote.AbstractProcessorLight.process(AbstractProcessorLight.java:66) at org.apache.coyote.AbstractProtocol$ConnectionHandler.process(AbstractProtocol.java:834) at org.apache.tomcat.util.net.NioEndpoint$SocketProcessor.doRun(NioEndpoint.java:1417) at org.apache.tomcat.util.net.SocketProcessorBase.run(SocketProcessorBase.java:49) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) at org.apache.tomcat.util.threads.TaskThread$WrappingRunnable.run(TaskThread.java:61) at java.lang.Thread.run(Thread.java:748) java.lang.IllegalArgumentException at java.nio.Buffer.position(Buffer.java:244) at sun.nio.cs.UTF_8.updatePositions(UTF_8.java:77) at sun.nio.cs.UTF_8.access$200(UTF_8.java:57) at sun.nio.cs.UTF_8$Encoder.encodeArrayLoop(UTF_8.java:636) at sun.nio.cs.UTF_8$Encoder.encodeLoop(UTF_8.java:691) at java.nio.charset.CharsetEncoder.encode(CharsetEncoder.java:579) at org.apache.tomcat.util.buf.C2BConverter.convert(C2BConverter.java:169) at org.apache.catalina.connector.OutputBuffer.realWriteChars(OutputBuffer.java:450) at org.apache.catalina.connector.OutputBuffer.flushCharBuffer(OutputBuffer.java:820) at org.apache.catalina.connector.OutputBuffer.doFlush(OutputBuffer.java:307) at org.apache.catalina.connector.OutputBuffer.flush(OutputBuffer.java:284) at org.apache.catalina.connector.CoyoteWriter.flush(CoyoteWriter.java:94) at com.netflix.hystrix.contrib.sample.stream.HystrixSampleSseServlet.handleRequest(HystrixSampleSseServlet.java:168) at com.netflix.hystrix.contrib.sample.stream.HystrixSampleSseServlet.doGet(HystrixSampleSseServlet.java:74) at javax.servlet.http.HttpServlet.service(HttpServlet.java:635) at 
javax.servlet.http.HttpServlet.service(HttpServlet.java:742) at org.springframework.web.servlet.mvc.ServletWrappingController.handleRequestInternal(ServletWrappingController.java:157) at org.springframework.web.servlet.mvc.AbstractController.handleRequest(AbstractController.java:174) at org.springframework.cloud.netflix.endpoint.ServletWrappingEndpoint.handle(ServletWrappingEndpoint.java:76) at sun.reflect.GeneratedMethodAccessor459.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.springframework.web.method.support.InvocableHandlerMethod.doInvoke(InvocableHandlerMethod.java:205) at org.springframework.web.method.support.InvocableHandlerMethod.invokeForRequest(InvocableHandlerMethod.java:133) at org.springframework.web.servlet.mvc.method.annotation.ServletInvocableHandlerMethod.invokeAndHandle(ServletInvocableHandlerMethod.java:97) at org.springframework.web.servlet.mvc.method.annotation.RequestMappingHandlerAdapter.invokeHandlerMethod(RequestMappingHandlerAdapter.java:827) at org.springframework.web.servlet.mvc.method.annotation.RequestMappingHandlerAdapter.handleInternal(RequestMappingHandlerAdapter.java:738) at org.springframework.web.servlet.mvc.method.AbstractHandlerMethodAdapter.handle(AbstractHandlerMethodAdapter.java:85) at org.springframework.boot.actuate.autoconfigure.EndpointWebMvcChildContextConfiguration$CompositeHandlerAdapter.handle(EndpointWebMvcChildContextConfiguration.java:286) at org.springframework.web.servlet.DispatcherServlet.doDispatch(DispatcherServlet.java:967) at org.springframework.web.servlet.DispatcherServlet.doService(DispatcherServlet.java:901) at org.springframework.web.servlet.FrameworkServlet.processRequest(FrameworkServlet.java:970) at org.springframework.web.servlet.FrameworkServlet.doGet(FrameworkServlet.java:861) at javax.servlet.http.HttpServlet.service(HttpServlet.java:635) at org.springframework.web.servlet.FrameworkServlet.service(FrameworkServlet.java:846) at javax.servlet.http.HttpServlet.service(HttpServlet.java:742) at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:231) at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:166) at org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:198) at org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:96) at org.apache.catalina.authenticator.AuthenticatorBase.invoke(AuthenticatorBase.java:478) at org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:140) at org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:80) at org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:87) at org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:342) at org.apache.coyote.http11.Http11Processor.service(Http11Processor.java:799) at org.apache.coyote.AbstractProcessorLight.process(AbstractProcessorLight.java:66) at org.apache.coyote.AbstractProtocol$ConnectionHandler.process(AbstractProtocol.java:868) at org.apache.tomcat.util.net.NioEndpoint$SocketProcessor.doRun(NioEndpoint.java:1457) at org.apache.tomcat.util.net.SocketProcessorBase.run(SocketProcessorBase.java:49) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) at 
org.apache.tomcat.util.threads.TaskThread$WrappingRunnable.run(TaskThread.java:61) at java.lang.Thread.run(Thread.java:748) java.lang.IllegalArgumentException: Negative capacity: -105 at java.nio.Buffer.<init>(Buffer.java:199) at java.nio.CharBuffer.<init>(CharBuffer.java:281) at java.nio.HeapCharBuffer.<init>(HeapCharBuffer.java:86) at java.nio.HeapCharBuffer.slice(HeapCharBuffer.java:103) at org.apache.catalina.connector.OutputBuffer.flushCharBuffer(OutputBuffer.java:763) at org.apache.catalina.connector.OutputBuffer.doFlush(OutputBuffer.java:284) at org.apache.catalina.connector.OutputBuffer.flush(OutputBuffer.java:261) at org.apache.catalina.connector.CoyoteWriter.flush(CoyoteWriter.java:94) at org.apache.catalina.connector.CoyoteWriter.checkError(CoyoteWriter.java:119) at com.netflix.hystrix.contrib.sample.stream.HystrixSampleSseServlet.handleRequest(HystrixSampleSseServlet.java:165) at com.netflix.hystrix.contrib.sample.stream.HystrixSampleSseServlet.doGet(HystrixSampleSseServlet.java:74) at javax.servlet.http.HttpServlet.service(HttpServlet.java:634) at javax.servlet.http.HttpServlet.service(HttpServlet.java:741) at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:231) at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:166) at org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:199) at org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:96) at org.apache.catalina.authenticator.AuthenticatorBase.invoke(AuthenticatorBase.java:490) at org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:139) at org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:92) at org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:74) at org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:343) at org.apache.coyote.http11.Http11Processor.service(Http11Processor.java:408) at org.apache.coyote.AbstractProcessorLight.process(AbstractProcessorLight.java:66) at org.apache.coyote.AbstractProtocol$ConnectionHandler.process(AbstractProtocol.java:834) at org.apache.tomcat.util.net.NioEndpoint$SocketProcessor.doRun(NioEndpoint.java:1417) at org.apache.tomcat.util.net.SocketProcessorBase.run(SocketProcessorBase.java:49) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) at org.apache.tomcat.util.threads.TaskThread$WrappingRunnable.run(TaskThread.java:61) at java.lang.Thread.run(Thread.java:748) me too We're getting this but without boot, just a regular spring-web app using hystrix-core 1.5.18. I am facing the same issue. The exception is thrown while the "/actuator/hystrix.stream" endpoint is called. Please, give a feedback if you found the cause. Does this patch fix it? https://github.com/Netflix/Hystrix/pull/1757#issuecomment-520303929
gharchive/issue
2019-04-01T19:47:23
2025-04-01T06:37:18.137404
{ "authors": [ "IrishkA13", "Nalha", "Shoohei", "henrik242", "realred" ], "repo": "Netflix/Hystrix", "url": "https://github.com/Netflix/Hystrix/issues/1934", "license": "apache-2.0", "license_type": "permissive", "license_source": "bigquery" }
105685066
Only look up whether the request cache is enabled once per command invocation This helps performance and eliminates the chance that the start/end of a command see this value differently NetflixOSS » Hystrix » Hystrix-pull-requests #169 SUCCESS This pull request looks good
gharchive/pull-request
2015-09-09T21:09:13
2025-04-01T06:37:18.139452
{ "authors": [ "cloudbees-pull-request-builder", "mattrjacobs" ], "repo": "Netflix/Hystrix", "url": "https://github.com/Netflix/Hystrix/pull/886", "license": "apache-2.0", "license_type": "permissive", "license_source": "bigquery" }
106669822
Upgrade tests to use JMockit 1.19, which is compatible with newer versions of Java; also fix a CassandraAdminTest failure caused as a regression in 49865ab. Addresses the comment in issue #415 NetflixOSS » Priam » Priam-pull-requests #25 SUCCESS This pull request looks good Priam-pull-requests #187 SUCCESS This pull request looks good
gharchive/pull-request
2015-09-15T23:42:28
2025-04-01T06:37:18.141484
{ "authors": [ "cloudbees-pull-request-builder", "scottmcmaster" ], "repo": "Netflix/Priam", "url": "https://github.com/Netflix/Priam/pull/421", "license": "apache-2.0", "license_type": "permissive", "license_source": "bigquery" }
2156638904
fix: avoid crash when navigation keys used with an empty console using navigation keys (up, down) with an empty console results in a panic with out-of-band error. @mtb0x1 Thanks for this fix. I ran into the same problem.
gharchive/pull-request
2024-02-27T13:47:15
2025-04-01T06:37:18.142453
{ "authors": [ "mtb0x1", "nickodell" ], "repo": "Netflix/bpftop", "url": "https://github.com/Netflix/bpftop/pull/12", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
734836186
go client: No Domain when polling for task When using the Go client with an empty string domain, it creates the GET call below, which I think the Conductor server does not like. Removing domain=& from the call makes polling work. Is this a known bug? 2020/11/02 14:19:23 Sending [ GET ] request to Server ( http://localhost:8080/api/tasks/poll/get_model?workerid=C02&domain=& ): 2020/11/02 14:19:23 Body: 2020/11/02 14:19:23 2020/11/02 14:19:24 Received response from Server ( http://localhost:8080/api ): 2020/11/02 14:19:24 Status: 204 No Content 2020/11/02 14:19:24 Response: 2020/11/02 14:19:24 Go client github.com/netflix/conductor/client/go v0.0.0-20201029170655-6bce3912d3ce @djcass44 Could you please help with this potential issue in the go client? Thank you. @apanicker-nflx I've implemented a possible fix in #1976, are you able to take a look?
gharchive/issue
2020-11-02T21:28:57
2025-04-01T06:37:18.144865
{ "authors": [ "apanicker-nflx", "djcass44", "rmohta" ], "repo": "Netflix/conductor", "url": "https://github.com/Netflix/conductor/issues/1952", "license": "apache-2.0", "license_type": "permissive", "license_source": "bigquery" }
207743496
SampleWorker: "Error when pollig for task" with JsonMappingException Hello, I'm trying the SampleWorker at: [https://github.com/Netflix/conductor/tree/dev/client/src/test/java/com/netflix/conductor/client/sample] I started an instance of kitchensink, and tried to poll task_1 using SampleWorker. However, I got the following errors. I am working on Win 7. Please help. Thanks. [pool-1-thread-1] ERROR com.netflix.conductor.client.task.WorkflowTaskCoordinator - Error when pollig for task java.lang.RuntimeException: com.sun.jersey.api.client.ClientHandlerException: com.fasterxml.jackson.databind.JsonMappingException: Conflicting setter definitions for property "type": com.netflix.conductor.common.metadata.workflow.WorkflowTask#setType(1 params) vs com.netflix.conductor.common.metadata.workflow.WorkflowTask#setType(1 params) at com.netflix.conductor.client.http.ClientBase.handleException(ClientBase.java:189) at com.netflix.conductor.client.http.ClientBase.getForEntity(ClientBase.java:179) at com.netflix.conductor.client.http.TaskClient.poll(TaskClient.java:88) at com.netflix.conductor.client.task.WorkflowTaskCoordinator.pollForTask(WorkflowTaskCoordinator.java:208) at com.netflix.conductor.client.task.WorkflowTaskCoordinator.lambda$2(WorkflowTaskCoordinator.java:173) at java.util.concurrent.Executors$RunnableAdapter.call(Unknown Source) at java.util.concurrent.FutureTask.runAndReset(Unknown Source) at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(Unknown Source) at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(Unknown Source) at java.util.concurrent.ThreadPoolExecutor.runWorker(Unknown Source) at java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source) at java.lang.Thread.run(Unknown Source) Caused by: com.sun.jersey.api.client.ClientHandlerException: com.fasterxml.jackson.databind.JsonMappingException: Conflicting setter definitions for property "type": com.netflix.conductor.common.metadata.workflow.WorkflowTask#setType(1 params) vs com.netflix.conductor.common.metadata.workflow.WorkflowTask#setType(1 params) at com.sun.jersey.api.client.ClientResponse.getEntity(ClientResponse.java:644) at com.sun.jersey.api.client.ClientResponse.getEntity(ClientResponse.java:604) at com.sun.jersey.api.client.WebResource.handle(WebResource.java:698) at com.sun.jersey.api.client.WebResource.access$300(WebResource.java:74) at com.sun.jersey.api.client.WebResource$Builder.get(WebResource.java:514) at com.netflix.conductor.client.http.ClientBase.getForEntity(ClientBase.java:176) ... 
10 more Caused by: com.fasterxml.jackson.databind.JsonMappingException: Conflicting setter definitions for property "type": com.netflix.conductor.common.metadata.workflow.WorkflowTask#setType(1 params) vs com.netflix.conductor.common.metadata.workflow.WorkflowTask#setType(1 params) at com.fasterxml.jackson.databind.deser.DeserializerCache._createAndCache2(DeserializerCache.java:270) at com.fasterxml.jackson.databind.deser.DeserializerCache._createAndCacheValueDeserializer(DeserializerCache.java:245) at com.fasterxml.jackson.databind.deser.DeserializerCache.findValueDeserializer(DeserializerCache.java:143) at com.fasterxml.jackson.databind.DeserializationContext.findContextualValueDeserializer(DeserializationContext.java:406) at com.fasterxml.jackson.databind.deser.std.StdDeserializer.findDeserializer(StdDeserializer.java:882) at com.fasterxml.jackson.databind.deser.BeanDeserializerBase.resolve(BeanDeserializerBase.java:436) at com.fasterxml.jackson.databind.deser.DeserializerCache._createAndCache2(DeserializerCache.java:297) at com.fasterxml.jackson.databind.deser.DeserializerCache._createAndCacheValueDeserializer(DeserializerCache.java:245) at com.fasterxml.jackson.databind.deser.DeserializerCache.findValueDeserializer(DeserializerCache.java:143) at com.fasterxml.jackson.databind.DeserializationContext.findContextualValueDeserializer(DeserializationContext.java:406) at com.fasterxml.jackson.databind.deser.std.CollectionDeserializer.createContextual(CollectionDeserializer.java:164) at com.fasterxml.jackson.databind.deser.std.CollectionDeserializer.createContextual(CollectionDeserializer.java:25) at com.fasterxml.jackson.databind.DeserializationContext.handleSecondaryContextualization(DeserializationContext.java:653) at com.fasterxml.jackson.databind.DeserializationContext.findRootValueDeserializer(DeserializationContext.java:444) at com.fasterxml.jackson.databind.ObjectReader._findRootDeserializer(ObjectReader.java:1566) at com.fasterxml.jackson.databind.ObjectReader._bind(ObjectReader.java:1405) at com.fasterxml.jackson.databind.ObjectReader.readValue(ObjectReader.java:860) at com.fasterxml.jackson.jaxrs.base.ProviderBase.readFrom(ProviderBase.java:811) at com.sun.jersey.api.client.ClientResponse.getEntity(ClientResponse.java:634) ... 15 more Caused by: java.lang.IllegalArgumentException: Conflicting setter definitions for property "type": com.netflix.conductor.common.metadata.workflow.WorkflowTask#setType(1 params) vs com.netflix.conductor.common.metadata.workflow.WorkflowTask#setType(1 params) at com.fasterxml.jackson.databind.introspect.POJOPropertyBuilder.getSetter(POJOPropertyBuilder.java:293) at com.fasterxml.jackson.databind.deser.BeanDeserializerFactory.filterBeanProps(BeanDeserializerFactory.java:583) at com.fasterxml.jackson.databind.deser.BeanDeserializerFactory.addBeanProps(BeanDeserializerFactory.java:479) at com.fasterxml.jackson.databind.deser.BeanDeserializerFactory.buildBeanDeserializer(BeanDeserializerFactory.java:220) at com.fasterxml.jackson.databind.deser.BeanDeserializerFactory.createBeanDeserializer(BeanDeserializerFactory.java:143) at com.fasterxml.jackson.databind.deser.DeserializerCache._createDeserializer2(DeserializerCache.java:409) at com.fasterxml.jackson.databind.deser.DeserializerCache._createDeserializer(DeserializerCache.java:358) at com.fasterxml.jackson.databind.deser.DeserializerCache._createAndCache2(DeserializerCache.java:265) ... 
33 more @niwy The error indicates an incompatibility between the version of the Jackson mapper used by Conductor and your runtime JVM. We recommend using version 2.7.5 of the jackson-* libraries in your classpath. @v1r3n Thanks for replying. I'm using Jackson 2.7.5, and the Java version is 1.8.0_121. java version "1.8.0_121" Java(TM) SE Runtime Environment (build 1.8.0_121-b13) Java HotSpot(TM) 64-Bit Server VM (build 25.121-b13, mixed mode) I tried different versions of Jackson, e.g., 2.6.7, 2.7.4, 2.8.6. I also exported a jar to run on a Linux machine with Java 1.8.0_101. Anyhow, the errors remain. @niwy I have renamed the offending method (it was on the TODO list already). Can you give it a try with the dev branch again? Let me know if the problem persists. @v1r3n It works now. Thanks a lot.
gharchive/issue
2017-02-15T09:01:11
2025-04-01T06:37:18.153059
{ "authors": [ "niwy", "v1r3n" ], "repo": "Netflix/conductor", "url": "https://github.com/Netflix/conductor/issues/83", "license": "apache-2.0", "license_type": "permissive", "license_source": "bigquery" }
1905377746
Timeline enhancements This PR includes many enhancements to the incident timeline, including: Direct link to the timeline for an incident. Users can now browse to .../{org_name}/incidents/{incident_name}/timeline to open the incident view directly to the timeline. The incident details view pane is now expandable by a click and drag on the vertical divider line. Icons in the timeline bubbles quickly indicate the type of event. The incident timeline now has filtering to show/hide different kinds of events, with system events hidden by default. Users can now add custom events to the timeline directly in the UI by hovering between existing events in the timeline and clicking the "Add event" button. Custom events can be edited and/or deleted by the incident commander or reporter. Messages imported from Slack with the reaction event can now be edited and/or deleted by the incident commander or reporter. One thing that stands out from a UX perspective in the timeline is that user actions vs system actions look very similar. I think it'd help to differentiate them more. For example, the icon for the user actions could be the user's profile picture and names could be bolded/clickable with the user popover component we have. A minor thing from the last screenshot -- the delete / edit icons probably fit best positioned all the way right, under the date. That way they won't cover the text or interfere with the content if it is rich (hoverable/clickable) in the future. One thing that stands out from a UX perspective in the timeline is that user actions vs system actions look very similar. I think it'd help to differentiate them more. For example, the icon for the user actions could be the user's profile picture and names could be bolded/clickable with the user popover component we have. Will pull these ideas into a phase 2, which will align the entire Incident edit/view with the new Case UI.
gharchive/pull-request
2023-09-20T16:45:52
2025-04-01T06:37:18.159628
{ "authors": [ "whitdog47", "wssheldon" ], "repo": "Netflix/dispatch", "url": "https://github.com/Netflix/dispatch/pull/3795", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
584574408
link two flows Not sure if I just haven't found the right verbiage in the documentation, or if there are other built-in ways to do this, but I want to be able to prospectively link two flows together. For example, let's say there is FlowA and FlowB, and FlowA, for example, pre-processes data that gets utilized in FlowB. Is there a way I can make sure that I know which data from FlowA was utilized by a certain run in FlowB? One way I guess would be to tag both flows with the same tag or unique ID; does anyone know how to programmatically tag a flow within itself? Should I just combine the two flows into one? Thanks @themantalope Tagging is definitely one approach. How are you currently accessing data from FlowA in your FlowB? You could simply store the relevant run id for FlowA in FlowB and access it using the Client API. We have #159 open which will provide programmatic capabilities for tagging a flow retrospectively. I see, so in the first step of FlowB I could do something like from metaflow import FlowSpec, Flow class FlowB(FlowSpec): @step def start(self): self.last_flow_a_run_id = Flow('FlowA').latest_run.run_id # rest of code etc. And when the data from FlowB is stored, the data in last_flow_a_run_id will get saved to disk, correct? Yep, correct. Great! Really appreciate it!
gharchive/issue
2020-03-19T17:22:10
2025-04-01T06:37:18.164080
{ "authors": [ "savingoyal", "themantalope" ], "repo": "Netflix/metaflow", "url": "https://github.com/Netflix/metaflow/issues/162", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
535749389
Strange behaviour matching request modified in beforePersist Description I think this is a bug but there's always a chance it's a feature and I haven't understood something. I'm testing an authentication flow and I don't want the real auth tokens to be committed to the repo. I've set up a beforePersist middleware to replace real-token with dummy-token in the POST bodies. When recording I make the requests with real-token. Then when running my tests against the recordings I make the requests with dummy-token. I would expect this to work because the real-token was changed to dummy-token when saving the HAR. But it doesn't: the request doesn't match. The really strange thing is that if I run the test against the recordings but using the original real-token that I used when making the recordings it passes, even though that token doesn't appear in the HAR file. How does Polly remember the original token even when I've removed it from the HAR? I've created a minimal test case here https://github.com/tamlyn/polly-test-case to illustrate the problem. Node.js v12.13.1 darwin 17.7.0 npm 6.11.3 yarn 1.19.2 @tamlyn even though you're storing the dummy-token in the HAR file, you're still making the request with the real-token. Polly calculates the request's id based on the request you made (not whats stored in the HAR), so you would need to tell Polly to ignore the token altogether. You can do so via matchRequestsBy. Ah! So the _id is a hash of the request? I thought it was random. Is that documented anywhere? Is the request object as stored in the HAR used for anything? So the _id is a hash of the request Correct Is that documented anywhere? https://netflix.github.io/pollyjs/#/configuration?id=matchrequestsby Is the request object as stored in the HAR used for anything? For completeness with the HAR spec (importing into a HAR tool to inspect for example). In the future, we may leverage it more.
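To make that concrete, here is a hedged sketch of the matchRequestsBy approach: ignore the POST body (where the token lives) when computing the request id. Whether you drop the whole body or only normalize the token field is a judgment call, and the 'authorization' header exclusion is only an assumption about where else the token might appear.

const { Polly } = require('@pollyjs/core');
const NodeHttpAdapter = require('@pollyjs/adapter-node-http');
const FSPersister = require('@pollyjs/persister-fs');

Polly.register(NodeHttpAdapter);
Polly.register(FSPersister);

const polly = new Polly('auth flow', {
  adapters: ['node-http'],
  persister: 'fs',
  matchRequestsBy: {
    // Don't hash the POST body, so real-token vs dummy-token no longer
    // changes the computed request id. A function that strips only the
    // token field would be a more targeted alternative.
    body: false,
    // If the token also travels in a header, exclude that header too.
    headers: { exclude: ['authorization'] },
  },
});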
gharchive/issue
2019-12-10T14:01:29
2025-04-01T06:37:18.170291
{ "authors": [ "jasonmit", "offirgolan", "tamlyn" ], "repo": "Netflix/pollyjs", "url": "https://github.com/Netflix/pollyjs/issues/279", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
209222176
Configuring PoolingHttpClientConnectionManager with Time to Live? Hi all, I am using zuul to route requests to statically configured services according to the request path. The services behind zuul are addressed using DNS names. The DNS records can change rather frequently (e.g., daily). I have seen in SimpleHostRoutingFilter that zuul uses PoolingHttpClientConnectionManager from Apache HttpComponents without configuring the time to live for the connections in the pool (see https://github.com/Netflix/zuul/blob/939ada9dce57709cf6b250b77fabfa37e8d988bf/zuul-simple-webapp/src/main/groovy/filters/route/SimpleHostRoutingFilter.groovy#L97). Even if the DNS records change, the connection pool in zuul is keeping the connections to the old instances open. Depending on the load, it can take many hours until all the old connections are evicted from the connection pool. I could subclass SimpleHostRoutingFilter and override the method newConnectionManager() to return my own instance of PoolingHttpClientConnectionManager with a configured TTL. Is there any other way to solve my problem, and does it make sense to add this feature to zuul? Best, Gary @GJL Hi, I am facing the same behaviour, how did you end up? @avoosh I moved the issue to https://github.com/spring-cloud/spring-cloud-netflix/issues/1720
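For anyone who lands here before following the spring-cloud-netflix issue, a hedged sketch of the TTL half of the idea above. It only shows how a PoolingHttpClientConnectionManager with a time-to-live would be constructed; wiring it in by overriding newConnectionManager() (and that method's exact signature) depends on the Zuul/Spring Cloud version you run, and the pool sizes are illustrative.

import java.util.concurrent.TimeUnit;
import org.apache.http.impl.conn.PoolingHttpClientConnectionManager;

public final class TtlConnectionManagers {

    private TtlConnectionManagers() {}

    // Build a pooling manager whose connections expire after the given TTL,
    // so DNS changes are picked up once the old connections age out.
    public static PoolingHttpClientConnectionManager withTtl(long ttlSeconds) {
        PoolingHttpClientConnectionManager cm =
                new PoolingHttpClientConnectionManager(ttlSeconds, TimeUnit.SECONDS);
        cm.setMaxTotal(200);            // illustrative pool sizes, tune as needed
        cm.setDefaultMaxPerRoute(20);
        return cm;
    }
}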
gharchive/issue
2017-02-21T17:49:49
2025-04-01T06:37:18.174497
{ "authors": [ "GJL", "avoosh" ], "repo": "Netflix/zuul", "url": "https://github.com/Netflix/zuul/issues/307", "license": "apache-2.0", "license_type": "permissive", "license_source": "bigquery" }
1111981662
Add Multi-GPU Training Multi-GPU training was added using the HuggingFace Accelerate package. Installation shell script was also updated to work with new and updated packages. @markusatkinson this is ready for review.
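For readers unfamiliar with Accelerate, the training-loop changes usually boil down to something like the sketch below; the toy model, optimizer and data are placeholders, not the GNN code in this repository. Run it with accelerate launch to spread it across GPUs.

import torch
from torch import nn
from torch.utils.data import DataLoader, TensorDataset
from accelerate import Accelerator

accelerator = Accelerator()  # reads the multi-GPU setup from `accelerate launch`/config

model = nn.Linear(8, 1)                       # stand-in for the actual GNN
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
dataset = TensorDataset(torch.randn(64, 8), torch.randn(64, 1))
loader = DataLoader(dataset, batch_size=16)

# prepare() wraps model, optimizer and dataloader for the available devices/processes.
model, optimizer, loader = accelerator.prepare(model, optimizer, loader)

for x, y in loader:
    optimizer.zero_grad()
    loss = nn.functional.mse_loss(model(x), y)
    accelerator.backward(loss)                # replaces loss.backward()
    optimizer.step()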
gharchive/pull-request
2022-01-23T20:47:18
2025-04-01T06:37:18.203342
{ "authors": [ "InProgress18", "matthewfeickert" ], "repo": "Neubauer-Group/GNN_code", "url": "https://github.com/Neubauer-Group/GNN_code/pull/1", "license": "BSD-3-Clause", "license_type": "permissive", "license_source": "github-api" }
2725712148
Add Linux path to README Added the path for the filters on Linux. Also changed the formatting a bit (code instead of italic) and replaced the forward slashes with backslashes on Windows, since that is the canonical path separator there. Thanks, added
gharchive/pull-request
2024-12-09T01:21:02
2025-04-01T06:37:18.204255
{ "authors": [ "NeverSinkDev", "Xandaros" ], "repo": "NeverSinkDev/NeverSink-PoE2litefilter", "url": "https://github.com/NeverSinkDev/NeverSink-PoE2litefilter/pull/2", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
67153779
Harvested blocks are added when switching accounts 0.6.26-BETA "When I switch between accounts with the "HARVESTED BLOCKS" tab selected, the previous account's harvested blocks get added to the currently selected account's harvested blocks." https://forum.ournem.com/technical-discussion/nem-beta-0-6-26-security-update/msg15902/#msg15902 Can anybody confirm the behavior with version 0.6.28?
gharchive/issue
2015-04-08T15:20:12
2025-04-01T06:37:18.205901
{ "authors": [ "BloodyRookie", "fyahfox" ], "repo": "NewEconomyMovement/NemCommunityClient", "url": "https://github.com/NewEconomyMovement/NemCommunityClient/issues/390", "license": "mit", "license_type": "permissive", "license_source": "bigquery" }
2187907602
CRUD Finish Completed the CRUD for the subjects (mapel) module. Nice!
gharchive/pull-request
2024-03-15T07:42:20
2025-04-01T06:37:18.236031
{ "authors": [ "maulana-99" ], "repo": "Nexthrive/Nexademy-api", "url": "https://github.com/Nexthrive/Nexademy-api/pull/8", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1773210716
02-app - 02-api-reference - 02-file-conventions - 01-metadata - app-icons.mdx For translation contributors: before opening a documentation improvement PR, run pnpm prettier-fix to resolve formatting issues. - Read the documentation contribution guide and make sure you follow the documentation guidelines: https://github.com/Nextjs-kr/Nextjs.ko/blob/main/packages/next/README.md
Progress
[x] pnpm prettier-fix
[x] Draft translation
[x] Check the common style guide
[x] Check best practices
[x] Check the terminology
[x] Spelling check
[x] Resolve reviews
ref #89
gharchive/pull-request
2023-06-25T10:56:42
2025-04-01T06:37:18.240481
{ "authors": [ "sasha1107", "yoo-jimin127" ], "repo": "Nextjs-kr/Nextjs.kr", "url": "https://github.com/Nextjs-kr/Nextjs.kr/pull/256", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
2649707556
ThorZIP: executable file not found in %PATH% I'm trying to run Generic.Scanner.ThorZIP but it's not working. I have uploaded the zip using ThorZIP tool, I tried both zip with included license and unmodified zip downloaded directly from website, but same problem. {"client_time":1731337531,"level":"INFO","message":"Starting query execution for Generic.Scanner.ThorZIP/ThorExec.\n"} {"client_time":1731337531,"level":"DEFAULT","message":"tempfile: removing tempfile C:\\Program Files\\Velociraptor\\Tools\\tmp391399109\n"} {"client_time":1731337531,"level":"DEFAULT","message":"tempfile: removed tempfile C:\\Program Files\\Velociraptor\\Tools\\tmp391399109\n"} {"client_time":1731337532,"level":"DEFAULT","message":"Sleeping 7 Seconds\n"} {"client_time":1731337539,"level":"DEFAULT","message":"URL for thor10.7lite-win-pack_nolic.zip is at https://xxx.azurewebsites.net/file/thor10.7lite-win-pack_nolic.zip and has hash of c1a306af9e9162d14d52374e188a8dc20005752e6c5b580e8316f2323ce7591c\n"} {"client_time":1731337539,"level":"DEFAULT","message":"Fetching https://xxx.azurewebsites.net/file/thor10.7lite-win-pack_nolic.zip\n"} {"client_time":1731337539,"level":"DEFAULT","message":"http_client: Downloading https://xxx.azurewebsites.net/file/thor10.7lite-win-pack_nolic.zip into C:\\Program Files\\Velociraptor\\Tools\\tmp1158147984.tmp\n"} {"client_time":1731337542,"level":"DEFAULT","message":"downloaded hash of C:\\Program Files\\Velociraptor\\Tools\\tmp1158147984.tmp: c1a306af9e9162d14d52374e188a8dc20005752e6c5b580e8316f2323ce7591c, expected c1a306af9e9162d14d52374e188a8dc20005752e6c5b580e8316f2323ce7591c\n"} {"client_time":1731337542,"level":"DEFAULT","message":"copy: Copying file from C:\\Program Files\\Velociraptor\\Tools\\tmp1158147984.tmp into C:\\Program Files\\Velociraptor\\Tools\\thor10.7lite-win-pack_nolic.zip\n"} {"client_time":1731337543,"level":"DEFAULT","message":"tempfile: removing tempfile C:\\Program Files\\Velociraptor\\Tools\\tmp1158147984.tmp\n"} {"client_time":1731337543,"level":"DEFAULT","message":"tempfile: removed tempfile C:\\Program Files\\Velociraptor\\Tools\\tmp1158147984.tmp\n"} {"client_time":1731337543,"level":"DEFAULT","message":"Adding global destructor for C:\\Program Files\\Velociraptor\\Tools\\tmp3319932095\n"} {"client_time":1731337543,"level":"WARN","message":"Materialize of LET Unzip: Expand larger than 1000 rows, VQL will switch to tempfile backing on C:\\Program Files\\Velociraptor\\Tools\\VQL_Unzip_.jsonl1648725754 which will be much slower.\n"} {"client_time":1731337544,"level":"DEFAULT","message":"execve: Running external command [[C:\\Program Files\\Velociraptor\\Tools\\tmp3319932095\\thor64-lite.exe --json -e C:\\Program Files\\Velociraptor\\Tools\\tmp3319932095] []]\n"} {"client_time":1731337544,"level":"DEFAULT","message":"execve: exec: \"[C:\\\\Program Files\\\\Velociraptor\\\\Tools\\\\tmp3319932095\\\\thor64-lite.exe --json -e C:\\\\Program Files\\\\Velociraptor\\\\Tools\\\\tmp3319932095]\": executable file not found in %PATH%\n"} {"client_time":1731337544,"level":"DEFAULT","message":"Generic.Scanner.ThorZIP/ThorExec: Time 0: Generic.Scanner.ThorZIP/ThorExec: Sending response part 0 3 B (1 rows)."} {"client_time":1731337544,"level":"DEFAULT","message":"read_file: Field filename Expecting a path arg type, not types.Null\n"} {"client_time":1731337544,"level":"DEFAULT","message":"Generic.Scanner.ThorZIP/ThorExec: Time 0: Generic.Scanner.ThorZIP/ThorResultsJson: Sending response part 0 12 B (1 rows)."} {"client_time":1731337544,"level":"INFO","message":"Collection 
Generic.Scanner.ThorZIP/ThorExec is done after 13.4206963s\n"} {"client_time":1731337544,"level":"DEFAULT","message":"tempfile: removing tempfile C:\\Program Files\\Velociraptor\\Tools\\VQL_Unzip_.jsonl1648725754\n"} {"client_time":1731337544,"level":"DEFAULT","message":"tempfile: removed tempfile C:\\Program Files\\Velociraptor\\Tools\\VQL_Unzip_.jsonl1648725754\n"} {"client_time":1731337544,"level":"DEFAULT","message":"RemoveDirectory: removing tempdir C:\\Program Files\\Velociraptor\\Tools\\tmp3319932095\n"} {"client_time":1731337545,"level":"DEFAULT","message":"RemoveDirectory: removed tempdir C:\\Program Files\\Velociraptor\\Tools\\tmp3319932095\n"} {"client_time":1731337545,"level":"DEBUG","message":"Query Stats: {\"RowsScanned\":3061,\"PluginsCalled\":18,\"FunctionsCalled\":9,\"ProtocolSearch\":459,\"ScopeCopy\":6167}\n"} Could you send us a screenshot of the directory structure inside of that ZIP file? The binary is probably just inside of a sub folder instead of the root folder. The structure looks okay. As you can see the problem is that it doesn't find the executable after the extraction {"client_time":1731337543,"level":"DEFAULT","message":"Adding global destructor for C:\\Program Files\\Velociraptor\\Tools\\tmp3319932095\n"} {"client_time":1731337543,"level":"WARN","message":"Materialize of LET Unzip: Expand larger than 1000 rows, VQL will switch to tempfile backing on C:\\Program Files\\Velociraptor\\Tools\\VQL_Unzip_.jsonl1648725754 which will be much slower.\n"} {"client_time":1731337544,"level":"DEFAULT","message":"execve: Running external command [[C:\\Program Files\\Velociraptor\\Tools\\tmp3319932095\\thor64-lite.exe --json -e C:\\Program Files\\Velociraptor\\Tools\\tmp3319932095] []]\n"} {"client_time":1731337544,"level":"DEFAULT","message":"execve: exec: \"[C:\\\\Program Files\\\\Velociraptor\\\\Tools\\\\tmp3319932095\\\\thor64-lite.exe --json -e C:\\\\Program Files\\\\Velociraptor\\\\Tools\\\\tmp3319932095]\": executable file not found in %PATH%\n"} It could be caused by: AV / EDR almost full hard disk Could you verify the SHA256 hash of the ZIP file before you upload it to AWS? same issue, Le tme know when you've found the reason. I have fixed it by changing it to: LET Exec <= SELECT * FROM execve(argv=[(Executable[0]).F, "--json", "-e", TmpDir])
gharchive/issue
2024-11-11T15:42:38
2025-04-01T06:37:18.246757
{ "authors": [ "Neo23x0", "gibranalvinda", "simonstegard" ], "repo": "NextronSystems/velociraptor-artifacts-thor", "url": "https://github.com/NextronSystems/velociraptor-artifacts-thor/issues/7", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
2459686333
Bootstrap No Longer Available to Download and Unable to Sync from Block 0 The bootstrap is no longer available to download, and the protocol is unable to sync from Block 0 without getting stuck on a known block (known issue). @VidereLicet please fix this major issue. We no longer have a way to run a full node, making the protocol extremely centralized. I started looking at peer IPs at the point where the sync breaks, and a Hong Kong IP stands out: 47.75.59.40. I blocked that IP and tried downloading the blockchain again; this time the start of the sync did not throw the same kind of orphan before the other hard-crash block, though the client usually recovers after a restart anyway. It might be worth geo-blocking that address range (47.75.0.0/16), or the HK country code, at the network level. On the more technical side of Nexus development: as far as I know, Nexus Tritium can cut off the blockchain from that 2020 fork, so the chain would be much smaller and could start downloading from around 2020. The longer-term strategy Colin has described is to use the old Nexus Bitcoin code base so Tritium can eventually import Bitcoin wallet.dat files into sigchains and use them as collateral for NXS/BTC exchange inside the Tritium GUI, though I personally think Monero would be a better long-term fit. As for the sync issue itself, I am not too worried; I could set up my machine to route the client through a Tor firewall, but I do not trust Tor and prefer a WireGuard firewall. The next release should be the hard-fork release, so it is mostly a question of where to place the hard fork, which needs more discussion.
Either way, I think Nexus will, as Colin said, cut the old blockchain bloat on the old code base. That is where this issue stems from: the legacy Bitcoin-derived code and Nexus blocks from 2015, not the new Tritium code base, just to clarify. So it will get resolved one way or another. Bootstrap is now consistently available.
gharchive/issue
2024-08-11T17:26:59
2025-04-01T06:37:18.315096
{ "authors": [ "NamecoinGithub", "NealHelman", "S4X1" ], "repo": "Nexusoft/LLL-TAO", "url": "https://github.com/Nexusoft/LLL-TAO/issues/170", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
2303533120
Very slow loading behind Cloudflare proxy Checklist Have you pulled and found the error with jc21/nginx-proxy-manager:latest docker image? Yes Are you sure you're not using someone else's docker image? Yes Have you searched for similar issues (both open and closed)? Yes Describe the bug Very slow loading times when behind Cloudflare proxy. Not going through Cloudflare proxy works fine. Issue showed up in 2.11.0, as rolling back to 2.10.4 resolves the problem. Nginx Proxy Manager Version 2.11.0, 2.11.1, 2.11.2 Additional context May be related to #3365? As stated on #3365 running this project behind a proxy isn't really its intended purpose. That said, just like any other website, there shouldn't be a problem. Are you proxying to NPM over HTTP or HTTPS?
gharchive/issue
2024-05-17T20:28:10
2025-04-01T06:37:18.321414
{ "authors": [ "jc21", "julianq" ], "repo": "NginxProxyManager/nginx-proxy-manager", "url": "https://github.com/NginxProxyManager/nginx-proxy-manager/issues/3762", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
370578217
Bundling FFmpeg I just tried ScreenToGif for the first time and was a bit surprised to see that exporting to MP4 wasn't natively supported. I realized that it supports FFmpeg output, but it requires FFmpeg to be installed. I was wondering if there is any limitation to bundling FFmpeg as part of ScreenToGif so users don't have to install it separately? (I guess retain the ability to override the FFmpeg install as an advanced option.) Also noticed that you do have a button to download FFmpeg in the options, but it prompts the user to specify a download location. Could I suggest that if the app is installed, this button by default saves FFmpeg to the installation directory (so it's one step less for the user)? You might also want to do what ShareX has integrated with their software: an auto-download feature. That will maintain a small download size and enable video export for the users that need it. Alternatively, you might point to one of a number of available open FFmpeg front-ends for a video export tool that works with the amazing number of features built into that tool. Some possibilities: FFmpegCatapult: https://github.com/mylesthaiss/FFmpegCatapult SmartFFmpeg: http://freeware.satria.de/SmartFFmpeg/index.php?lang=EN WinFF: www.biggmatt.com/winff/ Related thread: #376 Worrying about the size of the program is completely wrongheaded. Also, ffmpeg encodes to a mind-boggling number of formats, in various containers, so the "MP4" complaint is wrong. I searched my machine and it turned out that there are 71 instances of ffmpeg.exe used by various programs: players, converters, video, audio and tag editors, downloaders, catalogs, servers, etc. Most of these programs come with absolute, hard-coded paths to ffmpeg.exe, and as a result I have not just multiple instances but also different (in most cases outdated) versions of ffmpeg.exe. Besides a huge waste of space, it is also a huge waste of time to update these instances one by one. #563
gharchive/issue
2018-10-16T12:03:50
2025-04-01T06:37:18.365522
{ "authors": [ "NickeManarin", "longzheng", "riverar", "smaragdus", "vatterspun" ], "repo": "NickeManarin/ScreenToGif", "url": "https://github.com/NickeManarin/ScreenToGif/issues/389", "license": "MS-PL", "license_type": "permissive", "license_source": "github-api" }
2747182559
🛑 cedro is down In b602b57, cedro (https://www.cedro.org) was down: HTTP code: 0 Response time: 0 ms Resolved: cedro is back up in f847b4c after 10 minutes.
gharchive/issue
2024-12-18T08:48:12
2025-04-01T06:37:18.375053
{ "authors": [ "NicolasAbihaggle" ], "repo": "NicolasAbihaggle/serviciosestados", "url": "https://github.com/NicolasAbihaggle/serviciosestados/issues/838", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
432989918
Arch version fails to build (includes proposed fix) As of the new version, I'm having issues installing this on Arch Linux via the AUR. I did some digging and discovered that the md5sum changed and the location and name of the .desktop file have also been modified. I was able to get it to build with a few modifications: The md5sum of the file actually downloaded was d47e22482755a2fc73061860c09d41ac - this might need to be updated in the source code. If you put this value into the PKGBUILD after cloning the repo and run makepkg, it works:
source=('https://shadow.tech/linux/shadow-beta.zip')
md5sums=('d47e22482755a2fc73061860c09d41ac')
The .desktop file is actually named shadow-dev.desktop in the new archive, so the PKGBUILD script should be updated to reflect this:
### Edit launcher
sed -e 's/^Categories=.*$/Categories=Games;Game;Utility;Virtualization/g' "${pkgdir}/usr/share/applications/shadow-dev.desktop" > shadow-beta.desktop
chmod g-w shadow-beta.desktop
rm "${pkgdir}/usr/share/applications/shadow-dev.desktop"
mv shadow-beta.desktop "${pkgdir}/usr/share/applications/shadow-beta.desktop"
Please update this so that others can benefit :) AUR package updated, thanks to you :)
gharchive/issue
2019-04-14T15:11:31
2025-04-01T06:37:18.377115
{ "authors": [ "NicolasGuilloux", "dylanmtaylor" ], "repo": "NicolasGuilloux/blade-shadow-beta", "url": "https://github.com/NicolasGuilloux/blade-shadow-beta/issues/31", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
148440725
The "Access the webcam" test is outdated navigator.getUserMedia is deprecated and was replaced by navigator.mediaDevices.getUserMedia Fixed
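For reference, the migration for a feature test looks roughly like this (the constraints object is just an example):

// Old, deprecated callback-based form:
// navigator.getUserMedia({ video: true }, onStream, onError);

// Current, promise-based API:
navigator.mediaDevices.getUserMedia({ video: true })
  .then(function (stream) {
    // Feature works; remember to stop the tracks when done.
    stream.getTracks().forEach(function (track) { track.stop(); });
  })
  .catch(function (err) {
    console.error('getUserMedia failed:', err);
  });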
gharchive/issue
2016-04-14T18:12:10
2025-04-01T06:37:18.381856
{ "authors": [ "LMLB", "NielsLeenheer" ], "repo": "NielsLeenheer/html5test", "url": "https://github.com/NielsLeenheer/html5test/issues/432", "license": "mit", "license_type": "permissive", "license_source": "bigquery" }
424199580
How to disable styles? Hi, I want to disable the CSS styles of the plugin, but I do not see a way to do it. What is the "CSS styles plugin"? What is the "CSS styles plugin"? vue-slider-component: can I disable its styles? What do you want to do: disable the style of a disabled slider, or not use the default style of the slider? Not use the default style of the slider. https://yadi.sk/i/p0XVuOLLauGBAg I use my own styles and I don't need to load the slider's styles and do the double work of overriding its CSS. In version 2.0 it is not easy to rewrite the styles; you can upgrade to version 3.0. You can refer to https://nightcatsama.github.io/vue-slider-component/#/style In version 2.0 it is not easy to rewrite the styles; you can upgrade to version 3.0. You can refer to https://nightcatsama.github.io/vue-slider-component/#/style Yes, version 3 solved my problem, thanks
gharchive/issue
2019-03-22T13:04:36
2025-04-01T06:37:18.413889
{ "authors": [ "NightCatSama", "Sergey-Ssn" ], "repo": "NightCatSama/vue-slider-component", "url": "https://github.com/NightCatSama/vue-slider-component/issues/326", "license": "mit", "license_type": "permissive", "license_source": "bigquery" }
2636608888
[Feature] – Return transitive dependencies list in alphabetical order It would be nice to have this feature for consistency in the results. Yes, we can sort the transitive dependencies alphabetically so the error report is the same each time 👍 You are more than welcome to contribute to this repo by creating a PR. Yes, we can sort the transitive dependencies alphabetically so the error report is the same each time 👍 You are more than welcome, @douglasgondim, to contribute to this repo by creating a PR.
gharchive/issue
2024-11-05T22:35:25
2025-04-01T06:37:18.431448
{ "authors": [ "Nikoloutsos", "douglasgondim", "kostasniks" ], "repo": "Nikoloutsos/explicitDependencyImportCheck", "url": "https://github.com/Nikoloutsos/explicitDependencyImportCheck/issues/2", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
2248679826
Replace PRIMITIVE_COLORS with COLORS Replacement of the PRIMITIVE_COLORS variable with COLORS to make it shorter and more convenient to use. The name "primitive colors" makes no sense, since we have no "non-primitive" colors. It's a breaking change. It should get a 3.0 version because it's a breaking change, or you should provide backward compatibility. Agree, thanks.
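If backward compatibility were preferred over the major-version bump, the usual low-cost option is a deprecated re-export; a sketch in TypeScript with placeholder palette values, not the kit's real ones:

export const COLORS = {
  primary: '#0066ff',    // placeholder values, not the real palette
  secondary: '#1d1d1d',
} as const;

/** @deprecated Use COLORS instead; scheduled for removal in 3.0. */
export const PRIMITIVE_COLORS = COLORS;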
gharchive/pull-request
2024-04-17T16:08:50
2025-04-01T06:37:18.434296
{ "authors": [ "KlonD90", "ukorvl" ], "repo": "NilFoundation/ui-kit", "url": "https://github.com/NilFoundation/ui-kit/pull/278", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
109833121
can't build rss2irc from haskellPackages nix-env -iA nixos.haskellPackages.rss2irc output: Preprocessing executable 'rss2irc' for rss2irc-1.0.6... Feed.hs:24:8: Could not find module ‘Network.URI’ It is a member of the hidden package ‘network-uri-2.6.0.3@netwo_8g2FaihlDOW7yH7DhmRvHt’. Perhaps you need to add ‘network-uri’ to the build-depends in your .cabal file. Use -v to see a list of the files searched for. builder for ‘/nix/store/mw7ai3bngizdcasgajqsdmi22nmyk49s-rss2irc-1.0.6.drv’ failed with exit code 1 It's probably best to report this issue upstream. The package is quite outdated (http://packdeps.haskellers.com/feed?needle=rss2irc) and needs some attention before recent compilers can build it.
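For reference, the usual upstream fix for this class of error (the network/network-uri package split) is a small cabal change along these lines; rss2irc's actual .cabal layout isn't shown here, so treat this as a sketch:

executable rss2irc
  build-depends:
      base        >= 4 && < 5
    , network     >= 2.6
    , network-uri >= 2.6
    -- plus the package's existing dependencies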
gharchive/issue
2015-10-05T16:12:45
2025-04-01T06:37:18.503730
{ "authors": [ "Lassulus", "peti" ], "repo": "NixOS/nixpkgs", "url": "https://github.com/NixOS/nixpkgs/issues/10240", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
821271552
gimp plugin wavelet-sharpen replaces gimp on upgrade Describe the bug Using the current unstable channel, if install gimp in my user environment and then do an upgrade, Nix replaces gimp with the wavelet-sharpen plugin and effectively uninstalls gimp itself from the user environment. This does not happen in the stable branch 20.09. To Reproduce Here is a transcript of what happens: $ which gimp which: no gimp in (/run/wrappers/bin:/home/manu/.nix-profile/bin:/etc/profiles/per-user/manu/bin:/nix/var/nix/profiles/default/bin:/run/current-system/sw/bin) $ nix-env -iA nixpkgs.gimp installing 'gimp-2.10.22' $ which gimp /home/manu/.nix-profile/bin/gimp $ nix-env -q | grep gimp gimp-2.10.22 $ nix-env --upgrade upgrading 'gimp-2.10.22' to 'gimp-2.10.22-plugin-wavelet-sharpen-0.1.2' $ which gimp which: no gimp in (/run/wrappers/bin:/home/manu/.nix-profile/bin:/etc/profiles/per-user/manu/bin:/nix/var/nix/profiles/default/bin:/run/current-system/sw/bin) $ nix-env -q | grep gimp gimp-2.10.22-plugin-wavelet-sharpen-0.1.2 $ nix-env --uninstall gimp uninstalling 'gimp-2.10.22-plugin-wavelet-sharpen-0.1.2' Expected behavior I expect gimp to be upgraded. Apparently, the fact that that gimp-2.10.22-plugin-wavelet-sharpen-0.1.2 contains numbers in the middle is the source of the problem, as if 2.10.22-plugin-wavelet-sharpen-0.1.2 was a version number, and higher than 2.10.22. Notify maintainers The gimp package is maintained by @jtojnar, I don't know about the wavelet-sharpen plugin. Metadata system: "x86_64-linux" host os: Linux 5.4.99, NixOS, 20.09.3246.d4189f68fdb (Nightingale) multi-user?: yes sandbox: yes version: nix-env (Nix) 2.3.10 channels(manu): "nixpkgs-21.05pre273332.5df05c902cd" channels(root): "nixos-20.09.3343.8d82c865b41" nixpkgs: /home/manu/.nix-defexpr/channels/nixpkgs Maintainer information: # a list of nixpkgs attributes affected by the problem attribute: gimp # a list of nixos modules affected by the problem module: Ugh, I can’t wait for nix-env to be abandoned. This is a regression caused by https://github.com/NixOS/nixpkgs/commit/ef5475235ce4f3d1405dca5653cbe3599f0ff637 cc @erictapen On NixOS you probably shouldn't use nix-env at all (use configuration.nix or home-manager instead) and documentation should not encourage to use it imo: https://github.com/NixOS/nixos-homepage/issues/672 I'm all for using the declarative approach to handle user packages, and actually my current setup consists mostly in defining an environment with buildEnv in ~/.config/nixpkgs/config.nix with the packages I want. However I find the stateful system of nix-env to be a useful middle-ground between full declarative and using nix-shell for short-term usage of specific packages. As a side note, Gimp ended up installed through nix-env instead of config*.nix because of space constraints. On a system with a few gigabytes of storage allocated for /nix/store, upgrading a whole system with several big packages (e.g. Gimp, TeXlive, Haskell and other things) becomes a problem because you must have two versions of each present at the same time before you can switch to the new generation and garbage-collect the old one. At least, with nix-env, you can uninstall those, collect garbage, upgrade and then install them back without having to hack on your declarative configuration. As for Home Manager, I admit I am reluctant to make the switch from my years-old tuned and portable home configuration as long as the first words of its documentation are a big warning on the fact that it may break things. 
Ugh, I can’t wait for nix-env to be abandoned. To be fair, we would need to also get rid of this behaviour: nix-repl> builtins.parseDrvName "gimp-2.10.22-plugin-wavelet-sharpen-0.1.2" { name = "gimp"; version = "2.10.22-plugin-wavelet-sharpen-0.1.2"; } Yes, I would kill parseDrvName too, if I could.
gharchive/issue
2021-03-03T16:29:09
2025-04-01T06:37:18.514244
{ "authors": [ "jtojnar", "symphorien", "veprbl", "xunam" ], "repo": "NixOS/nixpkgs", "url": "https://github.com/NixOS/nixpkgs/issues/114995", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
989750919
Q: How can I write the output of a command to a file using nixlang? I want to create a file with the stdout of the command openssl rand -hex 64 and provide it to services.discourse.secretKeyBaseFile. How can I do that in a reproducible way, assuming the current way is to do a build with services.discourse.secretKeyBaseFile = "/run/keys/secret_key_base" and after the build manually run openssl rand -hex 64 > /run/keys/secret_key_base && chown discourse:discourse /run/keys/secret_key_base && chmod 440 /run/keys/secret_key_base ? If you use nixlang I think the secret will end up in the Nix store (not so secret). I would write a systemd service for your openssl commands that creates the /run/keys/secret_key_base file at runtime, and makes the discourse service wait for that file. @bjornfor I would rather handle it through nixlang, where the theory is: services.discourse.secretKeyBaseFile = pkgs.writeFile "/run/keys/secret_key_base" '' ${generateTheString} ''; where I don't know how I could produce the ${generateTheString}. Probably something like reading /dev/random? [kreyren@leonid:~]$ head -n1 /dev/random (binary output) runCommand and friends can write data to the nix store, from a shell script. See here: https://nixos.org/manual/nixpkgs/stable/#trivial-builder-runCommand @legendofmiracles I want that outside of the Nix store, because it's a secret. Which I assume is not possible? You could use nix-instantiate --eval --strict to get Nix values without necessarily writing them to the store. It's what NixOps uses for its keys (secrets) feature. It even has resources to generate an SSH key and store it in the NixOps state, or do the same for an arbitrary command on the deployer's machine. Sadly, we don't have generated docs for NixOps 2 at the moment, but you can find the option descriptions for these resources here: https://github.com/NixOS/nixops/tree/master/nix
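A minimal sketch of the runtime systemd-service approach suggested above; the unit name, the discourse.service dependency and the discourse:discourse ownership are assumptions to adapt to your setup:

{ pkgs, ... }:
{
  services.discourse.secretKeyBaseFile = "/run/keys/secret_key_base";

  systemd.services.discourse-secret-key = {
    description = "Generate Discourse secret_key_base at runtime";
    wantedBy = [ "multi-user.target" ];
    before = [ "discourse.service" ];      # assumed unit name of the Discourse service
    requiredBy = [ "discourse.service" ];
    serviceConfig = {
      Type = "oneshot";
      RemainAfterExit = true;
    };
    script = ''
      # Only generate the key once; /run/keys is a ramfs, so this stays out of the store.
      if [ ! -s /run/keys/secret_key_base ]; then
        ${pkgs.openssl}/bin/openssl rand -hex 64 > /run/keys/secret_key_base
        chown discourse:discourse /run/keys/secret_key_base
        chmod 440 /run/keys/secret_key_base
      fi
    '';
  };
}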
gharchive/issue
2021-09-07T08:53:31
2025-04-01T06:37:18.519851
{ "authors": [ "Kreyren", "bjornfor", "legendofmiracles", "roberth" ], "repo": "NixOS/nixpkgs", "url": "https://github.com/NixOS/nixpkgs/issues/136972", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1184699524
Packaging request: normcap Project description OCR-powered screenshot tool to capture text instead of images. Metadata homepage URL: https://dynobo.github.io/normcap/ source URL: https://github.com/dynobo/normcap license: GPL3 platforms: unix example package: AUR (PKGBUILD) I started working on this but I'm stuck at how to package with qt This seems to be a poetry-based project, meaning you should set format="pyproject" and add poetry-core as a nativeBuildInput. If adding the python qt libraries is not sufficient, then try using the hooks for Qt applications. https://github.com/nix-community/nur-combined/blob/6bddae47680482383b5769dd3aa7d82b88e6cbc8/repos/renesat/pkgs/normcap/default.nix#L42 I'll try to make a PR with this, ty Is it possible that normcap causes issues in recent NixOS? error: builder for '/nix/store/zk4imnx8zyibcghr5lriq31rwsyzcpvn-normcap-0.5.4.drv' failed with exit code 1; last 10 log lines: > =========================== short test summary info ============================ > FAILED tests/test_app.py::test_get_application - RuntimeError: Please destroy the QApplication singleton before creating a n... > FAILED tests/tests_gui/test_loading_indicator.py::test_radius - AttributeError: 'LoadingIndicator' object has no attribute 'timer' > FAILED tests/tests_gui/test_loading_indicator.py::test_opacities - AttributeError: 'LoadingIndicator' object has no attribute 'timer' > FAILED tests/tests_gui/test_loading_indicator.py::test_show_starts_timer - AttributeError: 'LoadingIndicator' object has no attribute 'timer' > FAILED tests/tests_gui/test_loading_indicator.py::test_hide_stops_timer - AttributeError: 'LoadingIndicator' object has no attribute 'timer' > FAILED tests/tests_gui/test_loading_indicator.py::test_frame_count_progresses - AttributeError: 'LoadingIndicator' object has no attribute 'timer' > FAILED tests/tests_gui/test_loading_indicator.py::test_circles_are_rendered - AttributeError: 'LoadingIndicator' object has no attribute 'timer' > ==== 7 failed, 297 passed, 15 skipped, 21 deselected, 13 warnings in 5.69s ===== > /nix/store/9wnvhjyxjykwn5y06xc9a2h8rs5fbfia-stdenv-linux/setup: line 1579: pop_var_context: head of shell_variables not a function context For full logs, run 'nix log /nix/store/zk4imnx8zyibcghr5lriq31rwsyzcpvn-normcap-0.5.4.drv'. error: 1 dependencies of derivation '/nix/store/2pvf5yvj6qi68k6hzlp6s3pjab7m09m5-home-manager-path.drv' failed to build error: 1 dependencies of derivation '/nix/store/mja27h5nbrcn4m82rb2mc972sxr625bp-home-manager-generation.drv' failed to build error: 1 dependencies of derivation '/nix/store/2wxil3bbrlk02vkg8qvwgr56q7i936q7-user-environment.drv' failed to build error: 1 dependencies of derivation '/nix/store/wib7xmij8pai67c6mvdvcplzqkl797n5-etc.drv' failed to build error: 1 dependencies of derivation '/nix/store/lbcml8mj88hiv0s2glv86ks2gm5lbr29-nixos-system-nixos-24.05.20240421.6143fc5.drv' failed to build > nix-shell -p nix-info --run "nix-info -m" - system: `"x86_64-linux"` - host os: `Linux 6.6.21, NixOS, 24.05 (Uakari), 24.05.20240312.0ad13a6` - multi-user?: `yes` - sandbox: `yes` - version: `nix-env (Nix) 2.18.1` - channels(root): `"nixos-23.11, unstable"` - nixpkgs: `/nix/store/k5l01g2zwhysjyl5zjvg5zxnj0lyxpp1-source` Fixed by https://github.com/NixOS/nixpkgs/pull/305822 which reaches unstable in a few fays
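A rough outline of the packaging shape suggested in the thread; the dependency list (pyside6) and the license attribute are assumptions for illustration, and the merged PR linked above is the authoritative version:

{ lib, fetchFromGitHub, python3Packages }:

python3Packages.buildPythonApplication rec {
  pname = "normcap";
  version = "0.5.4";
  format = "pyproject";          # poetry-based project

  src = fetchFromGitHub {
    owner = "dynobo";
    repo = "normcap";
    rev = "v${version}";
    hash = lib.fakeHash;         # placeholder; fill in the real hash
  };

  nativeBuildInputs = [ python3Packages.poetry-core ];

  # Runtime deps are assumptions read off pyproject.toml; the Qt side
  # (PySide6 plus the Qt wrapping hooks) is the part that needs real care.
  propagatedBuildInputs = with python3Packages; [ pyside6 ];

  meta = with lib; {
    description = "OCR-powered screenshot tool to capture text instead of images";
    homepage = "https://dynobo.github.io/normcap/";
    license = licenses.gpl3Plus;   # or gpl3Only; check upstream
  };
}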
gharchive/issue
2022-03-29T11:12:26
2025-04-01T06:37:18.526590
{ "authors": [ "I-Want-ToBelieve", "cafkafk", "koppor", "pbsds" ], "repo": "NixOS/nixpkgs", "url": "https://github.com/NixOS/nixpkgs/issues/166228", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1429948521
paperless-ngx pdf import not working Describe the bug Using unstable paperless-ngx-1.9.2 (on a 22.05 system) I am no longer able to import documents from the web interface as they fail with a MissingDependencyError in unpaper. Sadly parts of the error log appear to be missing, I am not sure how to get the full output. Steps To Reproduce Steps to reproduce the behavior: Configure service.paperless with package = pkgs.unstable.paperless-ngx Import document from the web interface Expected behavior Document should be processed and show up in the document list Log Oct 31 15:24:42 nixos paperless-ngx[38670]: [2022-10-31 14:24:42,168] [ERROR] [paperless.consumer] Error while consuming document censored-document-name.pdf: MissingDependencyError: Ran program '/nix/store/s9qywrdw15ihkxw5fwjya47551fs58vh-unpaper-7.0.0/bin/unpaper' but it exited with an error: Oct 31 15:24:42 nixos paperless-ngx[38670]: Traceback (most recent call last): Oct 31 15:24:42 nixos paperless-ngx[38670]: File "/nix/store/dk6m8qsyichiyh4b3v5d5b7kqjjnswgd-python3.10-ocrmypdf-13.7.0/lib/python3.10/site-packages/ocrmypdf/subprocess/__init__.py", line 157, in get_version Oct 31 15:24:42 nixos paperless-ngx[38670]: proc = run( Oct 31 15:24:42 nixos paperless-ngx[38670]: File "/nix/store/dk6m8qsyichiyh4b3v5d5b7kqjjnswgd-python3.10-ocrmypdf-13.7.0/lib/python3.10/site-packages/ocrmypdf/subprocess/__init__.py", line 58, in run Oct 31 15:24:42 nixos paperless-ngx[38670]: proc = subprocess_run(args, env=env, check=check, **kwargs) Oct 31 15:24:42 nixos paperless-ngx[38670]: File "/nix/store/fkcl1wzq3106qqgl84bhgk1lp56q6bzg-python3-3.10.7/lib/python3.10/subprocess.py", line 524, in run Oct 31 15:24:42 nixos paperless-ngx[38670]: raise CalledProcessError(retcode, process.args, Oct 31 15:24:42 nixos paperless-ngx[38670]: subprocess.CalledProcessError: Command '['/nix/store/s9qywrdw15ihkxw5fwjya47551fs58vh-unpaper-7.0.0/bin/unpaper', '--version']' died with <Signals.SIGSYS: 31>. 
Oct 31 15:24:42 nixos paperless-ngx[38670]: The above exception was the direct cause of the following exception: Oct 31 15:24:42 nixos paperless-ngx[38670]: Traceback (most recent call last): Oct 31 15:24:42 nixos paperless-ngx[38670]: File "/nix/store/za5yxgmxgmf8rdlhni9xg3w7mcwnf9bm-paperless-ngx-1.9.2/lib/paperless-ngx/src/paperless_tesseract/parsers.py", line 277, in parse Oct 31 15:24:42 nixos paperless-ngx[38670]: ocrmypdf.ocr(**args) Oct 31 15:24:42 nixos paperless-ngx[38670]: File "/nix/store/dk6m8qsyichiyh4b3v5d5b7kqjjnswgd-python3.10-ocrmypdf-13.7.0/lib/python3.10/site-packages/ocrmypdf/api.py", line 339, in ocr Oct 31 15:24:42 nixos paperless-ngx[38670]: check_options(options, plugin_manager) Oct 31 15:24:42 nixos paperless-ngx[38670]: File "/nix/store/dk6m8qsyichiyh4b3v5d5b7kqjjnswgd-python3.10-ocrmypdf-13.7.0/lib/python3.10/site-packages/ocrmypdf/_validation.py", line 245, in check_options Oct 31 15:24:42 nixos paperless-ngx[38670]: _check_plugin_invariant_options(options) Oct 31 15:24:42 nixos paperless-ngx[38670]: File "/nix/store/dk6m8qsyichiyh4b3v5d5b7kqjjnswgd-python3.10-ocrmypdf-13.7.0/lib/python3.10/site-packages/ocrmypdf/_validation.py", line 232, in _check_plugin_invariant_options Oct 31 15:24:42 nixos paperless-ngx[38670]: check_options_preprocessing(options) Oct 31 15:24:42 nixos paperless-ngx[38670]: File "/nix/store/dk6m8qsyichiyh4b3v5d5b7kqjjnswgd-python3.10-ocrmypdf-13.7.0/lib/python3.10/site-packages/ocrmypdf/_validation.py", line 132, in check_options_preprocessing Oct 31 15:24:42 nixos paperless-ngx[38670]: check_external_program( Oct 31 15:24:42 nixos paperless-ngx[38670]: File "/nix/store/dk6m8qsyichiyh4b3v5d5b7kqjjnswgd-python3.10-ocrmypdf-13.7.0/lib/python3.10/site-packages/ocrmypdf/subprocess/__init__.py", line 331, in check_external_program Oct 31 15:24:42 nixos paperless-ngx[38670]: found_version = version_checker() Oct 31 15:24:42 nixos paperless-ngx[38670]: File "/nix/store/dk6m8qsyichiyh4b3v5d5b7kqjjnswgd-python3.10-ocrmypdf-13.7.0/lib/python3.10/site-packages/ocrmypdf/_exec/unpaper.py", line 69, in version Oct 31 15:24:42 nixos paperless-ngx[38670]: return get_version('/nix/store/s9qywrdw15ihkxw5fwjya47551fs58vh-unpaper-7.0.0/bin/unpaper') Oct 31 15:24:42 nixos paperless-ngx[38670]: File "/nix/store/dk6m8qsyichiyh4b3v5d5b7kqjjnswgd-python3.10-ocrmypdf-13.7.0/lib/python3.10/site-packages/ocrmypdf/subprocess/__init__.py", line 173, in get_version Oct 31 15:24:42 nixos paperless-ngx[38670]: raise MissingDependencyError( Oct 31 15:24:42 nixos paperless-ngx[38670]: ocrmypdf.exceptions.MissingDependencyError: Ran program '/nix/store/s9qywrdw15ihkxw5fwjya47551fs58vh-unpaper-7.0.0/bin/unpaper' but it exited with an error: Oct 31 15:24:42 nixos paperless-ngx[38670]: The above exception was the direct cause of the following exception: Oct 31 15:24:42 nixos paperless-ngx[38670]: Traceback (most recent call last): Oct 31 15:24:42 nixos paperless-ngx[38670]: File "/nix/store/za5yxgmxgmf8rdlhni9xg3w7mcwnf9bm-paperless-ngx-1.9.2/lib/paperless-ngx/src/documents/consumer.py", line 320, in try_consume_file Oct 31 15:24:42 nixos paperless-ngx[38670]: document_parser.parse(self.path, mime_type, self.filename) Oct 31 15:24:42 nixos paperless-ngx[38670]: File "/nix/store/za5yxgmxgmf8rdlhni9xg3w7mcwnf9bm-paperless-ngx-1.9.2/lib/paperless-ngx/src/paperless_tesseract/parsers.py", line 333, in parse Oct 31 15:24:42 nixos paperless-ngx[38670]: raise ParseError(f"{e.__class__.__name__}: {str(e)}") from e Oct 31 15:24:42 nixos paperless-ngx[38670]: 
documents.parsers.ParseError: MissingDependencyError: Ran program '/nix/store/s9qywrdw15ihkxw5fwjya47551fs58vh-unpaper-7.0.0/bin/unpaper' but it exited with an error: Oct 31 15:24:42 nixos paperless-ngx[38670]: 14:24:42 [Q] INFO Process-1:4 stopped doing work Oct 31 15:24:42 nixos paperless-ngx[38596]: 14:24:42 [Q] ERROR Failed [censored-document-name.pdf] - censored-document-name.pdf: Error while consuming document censored-document-name.pdf: MissingDependencyError: Ran program '/nix/store/s9qywrdw15ihkxw5fwjya47551fs58vh-unpaper-7.0.0/bin/unpaper' but it exited with an error: Oct 31 15:24:42 nixos paperless-ngx[38596]: : Traceback (most recent call last): Oct 31 15:24:42 nixos paperless-ngx[38596]: File "/nix/store/dk6m8qsyichiyh4b3v5d5b7kqjjnswgd-python3.10-ocrmypdf-13.7.0/lib/python3.10/site-packages/ocrmypdf/subprocess/__init__.py", line 157, in get_version Oct 31 15:24:42 nixos paperless-ngx[38596]: proc = run( Oct 31 15:24:42 nixos paperless-ngx[38596]: File "/nix/store/dk6m8qsyichiyh4b3v5d5b7kqjjnswgd-python3.10-ocrmypdf-13.7.0/lib/python3.10/site-packages/ocrmypdf/subprocess/__init__.py", line 58, in run Oct 31 15:24:42 nixos paperless-ngx[38596]: proc = subprocess_run(args, env=env, check=check, **kwargs) Oct 31 15:24:42 nixos paperless-ngx[38596]: File "/nix/store/fkcl1wzq3106qqgl84bhgk1lp56q6bzg-python3-3.10.7/lib/python3.10/subprocess.py", line 524, in run Oct 31 15:24:42 nixos paperless-ngx[38596]: raise CalledProcessError(retcode, process.args, Oct 31 15:24:42 nixos paperless-ngx[38596]: subprocess.CalledProcessError: Command '['/nix/store/s9qywrdw15ihkxw5fwjya47551fs58vh-unpaper-7.0.0/bin/unpaper', '--version']' died with <Signals.SIGSYS: 31>. Oct 31 15:24:42 nixos paperless-ngx[38596]: The above exception was the direct cause of the following exception: Oct 31 15:24:42 nixos paperless-ngx[38596]: Traceback (most recent call last): Oct 31 15:24:42 nixos paperless-ngx[38596]: File "/nix/store/za5yxgmxgmf8rdlhni9xg3w7mcwnf9bm-paperless-ngx-1.9.2/lib/paperless-ngx/src/paperless_tesseract/parsers.py", line 277, in parse Oct 31 15:24:42 nixos paperless-ngx[38596]: ocrmypdf.ocr(**args) Oct 31 15:24:42 nixos paperless-ngx[38596]: File "/nix/store/dk6m8qsyichiyh4b3v5d5b7kqjjnswgd-python3.10-ocrmypdf-13.7.0/lib/python3.10/site-packages/ocrmypdf/api.py", line 339, in ocr Oct 31 15:24:42 nixos paperless-ngx[38596]: check_options(options, plugin_manager) Oct 31 15:24:42 nixos paperless-ngx[38596]: File "/nix/store/dk6m8qsyichiyh4b3v5d5b7kqjjnswgd-python3.10-ocrmypdf-13.7.0/lib/python3.10/site-packages/ocrmypdf/_validation.py", line 245, in check_options Oct 31 15:24:42 nixos paperless-ngx[38596]: _check_plugin_invariant_options(options) Oct 31 15:24:42 nixos paperless-ngx[38596]: File "/nix/store/dk6m8qsyichiyh4b3v5d5b7kqjjnswgd-python3.10-ocrmypdf-13.7.0/lib/python3.10/site-packages/ocrmypdf/_validation.py", line 232, in _check_plugin_invariant_options Oct 31 15:24:42 nixos paperless-ngx[38596]: check_options_preprocessing(options) Oct 31 15:24:42 nixos paperless-ngx[38596]: File "/nix/store/dk6m8qsyichiyh4b3v5d5b7kqjjnswgd-python3.10-ocrmypdf-13.7.0/lib/python3.10/site-packages/ocrmypdf/_validation.py", line 132, in check_options_preprocessing Oct 31 15:24:42 nixos paperless-ngx[38596]: check_external_program( Oct 31 15:24:42 nixos paperless-ngx[38596]: File "/nix/store/dk6m8qsyichiyh4b3v5d5b7kqjjnswgd-python3.10-ocrmypdf-13.7.0/lib/python3.10/site-packages/ocrmypdf/subprocess/__init__.py", line 331, in check_external_program Oct 31 15:24:42 nixos paperless-ngx[38596]: found_version 
= version_checker() Oct 31 15:24:42 nixos paperless-ngx[38596]: File "/nix/store/dk6m8qsyichiyh4b3v5d5b7kqjjnswgd-python3.10-ocrmypdf-13.7.0/lib/python3.10/site-packages/ocrmypdf/_exec/unpaper.py", line 69, in version Oct 31 15:24:42 nixos paperless-ngx[38596]: return get_version('/nix/store/s9qywrdw15ihkxw5fwjya47551fs58vh-unpaper-7.0.0/bin/unpaper') Oct 31 15:24:42 nixos paperless-ngx[38596]: File "/nix/store/dk6m8qsyichiyh4b3v5d5b7kqjjnswgd-python3.10-ocrmypdf-13.7.0/lib/python3.10/site-packages/ocrmypdf/subprocess/__init__.py", line 173, in get_version Oct 31 15:24:42 nixos paperless-ngx[38596]: raise MissingDependencyError( Oct 31 15:24:42 nixos paperless-ngx[38596]: ocrmypdf.exceptions.MissingDependencyError: Ran program '/nix/store/s9qywrdw15ihkxw5fwjya47551fs58vh-unpaper-7.0.0/bin/unpaper' but it exited with an error: Oct 31 15:24:42 nixos paperless-ngx[38596]: The above exception was the direct cause of the following exception: Oct 31 15:24:42 nixos paperless-ngx[38596]: Traceback (most recent call last): Oct 31 15:24:42 nixos paperless-ngx[38596]: File "/nix/store/28535xrr4xv02r0xlq6b5k9gqy96d9cq-python3.10-asgiref-3.5.2/lib/python3.10/site-packages/asgiref/sync.py", line 280, in main_wrap Oct 31 15:24:42 nixos paperless-ngx[38596]: raise exc_info[1] Oct 31 15:24:42 nixos paperless-ngx[38596]: File "/nix/store/za5yxgmxgmf8rdlhni9xg3w7mcwnf9bm-paperless-ngx-1.9.2/lib/paperless-ngx/src/documents/consumer.py", line 320, in try_consume_file Oct 31 15:24:42 nixos paperless-ngx[38596]: document_parser.parse(self.path, mime_type, self.filename) Oct 31 15:24:42 nixos paperless-ngx[38596]: File "/nix/store/za5yxgmxgmf8rdlhni9xg3w7mcwnf9bm-paperless-ngx-1.9.2/lib/paperless-ngx/src/paperless_tesseract/parsers.py", line 333, in parse Oct 31 15:24:42 nixos paperless-ngx[38596]: raise ParseError(f"{e.__class__.__name__}: {str(e)}") from e Oct 31 15:24:42 nixos paperless-ngx[38596]: documents.parsers.ParseError: MissingDependencyError: Ran program '/nix/store/s9qywrdw15ihkxw5fwjya47551fs58vh-unpaper-7.0.0/bin/unpaper' but it exited with an error: Oct 31 15:24:42 nixos paperless-ngx[38596]: The above exception was the direct cause of the following exception: Oct 31 15:24:42 nixos paperless-ngx[38596]: Traceback (most recent call last): Oct 31 15:24:42 nixos paperless-ngx[38596]: File "/nix/store/0lg74rgck6skkf4p14ivqsk1nb60d5ck-python3.10-django-q-1.3.9/lib/python3.10/site-packages/django_q/cluster.py", line 454, in worker Oct 31 15:24:42 nixos paperless-ngx[38596]: res = f(*task["args"], **task["kwargs"]) Oct 31 15:24:42 nixos paperless-ngx[38596]: File "/nix/store/za5yxgmxgmf8rdlhni9xg3w7mcwnf9bm-paperless-ngx-1.9.2/lib/paperless-ngx/src/documents/tasks.py", line 154, in consume_file Oct 31 15:24:42 nixos paperless-ngx[38596]: document = Consumer().try_consume_file( Oct 31 15:24:42 nixos paperless-ngx[38596]: File "/nix/store/za5yxgmxgmf8rdlhni9xg3w7mcwnf9bm-paperless-ngx-1.9.2/lib/paperless-ngx/src/documents/consumer.py", line 339, in try_consume_file Oct 31 15:24:42 nixos paperless-ngx[38596]: self._fail( Oct 31 15:24:42 nixos paperless-ngx[38596]: File "/nix/store/za5yxgmxgmf8rdlhni9xg3w7mcwnf9bm-paperless-ngx-1.9.2/lib/paperless-ngx/src/documents/consumer.py", line 90, in _fail Oct 31 15:24:42 nixos paperless-ngx[38596]: raise ConsumerError(f"{self.filename}: {log_message or message}") from exception Oct 31 15:24:42 nixos paperless-ngx[38596]: documents.consumer.ConsumerError: censored-document-name.pdf: Error while consuming document censored-document-name.pdf: MissingDependencyError: Ran 
program '/nix/store/s9qywrdw15ihkxw5fwjya47551fs58vh-unpaper-7.0.0/bin/unpaper' but it exited with an error: Oct 31 15:24:43 nixos paperless-ngx[38594]: 14:24:43 [Q] INFO recycled worker Process-1:4 Oct 31 15:24:43 nixos paperless-ngx[38685]: 14:24:43 [Q] INFO Process-1:5 ready for work at 38685 Notify maintainers @lukegb, @gador, @erikarvstedt Metadata [user@system:~]$ nix-shell -p nix-info --run "nix-info -m" - system: `"x86_64-linux"` - host os: `Linux 5.15.74, NixOS, 22.05 (Quokka), 22.05.3900.26eb67abc9a` - multi-user?: `yes` - sandbox: `yes` - version: `nix-env (Nix) 2.8.1` - channels(hendrik): `""` - channels(root): `"nixos-22.05, nixos-hardware, nixpkgs-unstable"` - nixpkgs: `/nix/var/nix/profiles/per-user/root/channels/nixos` I just tested it on the current master as well as nixos-unstable. Both work just fine. I suspect it has something to do with you runnung 22.05 and using a package from the unstable branch. I've seen weird errors when mixing those (e.g. with pgadmin here). @hvraven is this still an issue? Swapping out the executable with one from a different release is not supported.
gharchive/issue
2022-10-31T14:33:00
2025-04-01T06:37:18.535045
{ "authors": [ "Atemu", "gador", "hvraven" ], "repo": "NixOS/nixpkgs", "url": "https://github.com/NixOS/nixpkgs/issues/198787", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
194755925
Cjdns can't create keys in /etc Issue description With cjdns enabled, the service complains that it can't create the necessary keys in /etc: /nix/store/9ms47qxfd0qrxn4s689m9pbdrds57d59-unit-script/bin/cjdns-pre-start: line 9: /etc/cjdns.keys: Read-only file system Steps to reproduce services.cjdns.enable = true; Technical details System: 17.03pre96925.1c50bdd (Gorilla) Nix version: 1.11.4 Nixpkgs version: 17.03pre96925.1c50bdd This seems to be the relevant part in nixpkgs/nixos/modules/services/networking/cjdns.nix:
preStart = if cfg.confFile != null then "" else ''
  [ -e /etc/cjdns.keys ] && source /etc/cjdns.keys
  if [ -z "$CJDNS_PRIVATE_KEY" ]; then
    shopt -s lastpipe
    ${pkg}/bin/makekeys | { read private ipv6 public; }
    umask 0077
    echo "CJDNS_PRIVATE_KEY=$private" >> /etc/cjdns.keys
    echo -e "CJDNS_IPV6=$ipv6\nCJDNS_PUBLIC_KEY=$public" > /etc/cjdns.public
    chmod 600 /etc/cjdns.keys
    chmod 444 /etc/cjdns.public
  fi
  if [ -z "$CJDNS_ADMIN_PASSWORD" ]; then
    echo "CJDNS_ADMIN_PASSWORD=$(tr -dc A-Za-z0-9 </dev/urandom | head -c 96)" \
      >> /etc/cjdns.keys
  fi
'';
What would be the NixOS way here? Yes, it's because the "hardening" is too zealous; it needs to let preStart run as root (or not run with ProtectSystem = full or whatever). @unlmtd can you try something like services.cjdns.serviceConfig.PermissionsStartOnly = true? nixos-rebuild says that option doesn't exist Agh, sorry, that should be systemd.services.cjdns.serviceConfig That didn't do it, I'm getting the same error. Hm, I guess it doesn't pertain to fs isolation options; then you need to change ProtectSystem = "full" to true (or false). ProtectSystem = "full" prevents writing the keys (/etc is read-only). pre-start should be refactored to a separate systemd service without that restriction, and cjdns run with that restriction in another service. This should be fixed by a0338afe5faa9f9e403e2caa52e4a8b60c272be9. A better fix is at https://github.com/joachifm/nixpkgs/tree/cjdns-init-keys but is not very well tested.
gharchive/issue
2016-12-10T09:31:04
2025-04-01T06:37:18.541937
{ "authors": [ "joachifm", "tohl", "unlmtd" ], "repo": "NixOS/nixpkgs", "url": "https://github.com/NixOS/nixpkgs/issues/21038", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
230982325
With mlocate normal users cannot access /var/cache Issue description After switching from the default locate package to mlocate, normal users cannot access /var/cache anymore: $ locate vertical-dark-opaque.tint2rc locate: can not open `/var/cache/locatedb': Permission denied $ ls -l /var | grep cache drwxr-x--- 4 root mlocate 4096 May 21 12:01 cache My configuration.nix has the following: services.locate = { enable = true; prunePaths = [ "/tmp" "/var/cache" "/var/lock" "/var/run" "/var/spool" ]; interval = "hourly"; locate = pkgs.mlocate; }; Technical details System: 17.09pre107265.0afb6d789c (Hummingbird) Nix version: nix-env (Nix) 1.11.9 Nixpkgs version: "17.09pre107265.0afb6d789c" Sandboxing enabled: build-use-sandbox = false cc @pngwjpgh @andyjscott More information: # systemctl status update-locatedb ● update-locatedb.service - Update Locate Database Loaded: loaded (/nix/store/hvppx691rlm50drqmy3bbxgbihqacd20-unit-update-locatedb.service/update-locatedb.service; linked; vendor preset: enabled) Active: failed (Result: exit-code) since Wed 2017-05-24 06:00:01 BRT; 57min ago Process: 13268 ExecStart=/nix/store/mhwn1prj1vbn9lqqq2rxin2b8761y569-unit-script/bin/update-locatedb-start (code=exited, status=1/FAILURE) Main PID: 13268 (code=exited, status=1/FAILURE) May 24 06:00:00 jrm.no-ip.org systemd[1]: Started Update Locate Database. May 24 06:00:01 jrm.no-ip.org update-locatedb-start[13268]: /nix/store/j96vg63fm5v7kxfg308q99jasx40f05h-mlocate-0.26/bin/updatedb: unrecognized option '--localuser=nobody' May 24 06:00:01 jrm.no-ip.org systemd[1]: update-locatedb.service: Main process exited, code=exited, status=1/FAILURE May 24 06:00:01 jrm.no-ip.org systemd[1]: update-locatedb.service: Unit entered failed state. May 24 06:00:01 jrm.no-ip.org systemd[1]: update-locatedb.service: Failed with result 'exit-code'. This is caused by install -m ${if isMLocate then "0750" else "0755"} -o root -g ${if isMLocate then "mlocate" else "root"} -d $(dirname ${cfg.output}) This line should be replaced by mkdir -m 0755 -p /var/cache and the cfg.output option should be removed since there is really no reason to make that path configurable. It looks like localuser = null; should also be set for mlocate due to the failure message: ... mlocate-0.26/bin/updatedb: unrecognized option '--localuser=nobody' Currently a warning is shown but it should probably be an error as it breaks the update service.
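Until the module itself is fixed, the user-side configuration matching the advice above would look roughly like this (a sketch; option names as of the release in the report):

services.locate = {
  enable = true;
  locate = pkgs.mlocate;
  # mlocate's updatedb has no --localuser flag (it drops privileges itself),
  # so the module's localuser option must be disabled:
  localuser = null;
  interval = "hourly";
  prunePaths = [ "/tmp" "/var/cache" "/var/lock" "/var/run" "/var/spool" ];
};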
gharchive/issue
2017-05-24T09:50:15
2025-04-01T06:37:18.547634
{ "authors": [ "andyjscott", "edolstra", "romildo" ], "repo": "NixOS/nixpkgs", "url": "https://github.com/NixOS/nixpkgs/issues/26052", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
2267857644
Package request: ondsel Project description Metadata homepage URL: https://ondsel.com/ source URL: license: mit, bsd, gpl2+ , ... platforms: unix, linux, darwin, ... Add a :+1: reaction to issues you find important. duplicate of #284524
gharchive/issue
2024-04-28T22:43:06
2025-04-01T06:37:18.550769
{ "authors": [ "eclairevoyant", "mwalid207" ], "repo": "NixOS/nixpkgs", "url": "https://github.com/NixOS/nixpkgs/issues/307574", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }