Dataset columns:
- id: string (lengths 4 to 10)
- text: string (lengths 4 to 2.14M)
- source: string (2 distinct classes)
- created: timestamp[s] (ranging 2001-05-16 21:05:09 to 2025-01-01 03:38:30)
- added: string date (ranging 2025-04-01 04:05:38 to 2025-04-01 07:14:06)
- metadata: dict
198453988
Feature request - Support different env configs Read different .env configs according to current command (start / test / build). Read .env.dev when npm start and npm test Read .env.prod when npm run build By default (if custom config does not exist) read env variables from .env file. dotenv is using for config parsing. Not sure about npm test - what config file should be accepted. But according to dotenv FAQ Should I commit my .env file? No. We strongly recommend against committing your .env file to version control. It should only include environment-specific values such as database passwords or API keys. Your production database should have a different password than your development database. Should I have multiple .env files? No. We strongly recommend against having a "main" .env file and an "environment" .env file like .env.test. Your config should vary between deploys, and you should not be sharing values between environments. Provide simple PR with feature implementation 👍 Nice improvement. It would be very useful! How can i use the variables which from .env file in my ES6 code? @dioxide Adding Custom Environment Variables These environment variables will be defined for you on process.env. For example, having an environment variable named REACT_APP_SECRET_CODE will be exposed in your JS as process.env.REACT_APP_SECRET_CODE, in addition to process.env.NODE_ENV. Update issues according to latest PR updates What .env* files are used? .env - Default .env.development, .env.test, .env.production - Environment-specific settings. .env.local - Local overrides. This file is loaded for all environments except test. .env.development.local, .env.test.local, .env.production.local - Local overrides of environment-specific settings. Files priority (file is skipped if does not exist): npm test - .env.test.local, env.test, .env.local, .env npm run build - .env.production.local, env.production, .env.local, .env npm start - .env.development.local, env.development, .env.local, .env Priority from left to right. Can you confirm that once this feature is built-in create-react-app we won't need https://www.npmjs.com/package/react-app-env anymore ? Fixed in https://github.com/facebookincubator/create-react-app/issues/1344. Will be out in next release. @cadichris Yes. I will mark that package as deprecated Please help beta test the new version that includes this change! https://github.com/facebookincubator/create-react-app/issues/2172
gharchive/issue
2017-01-03T11:17:16
2025-04-01T04:34:13.738558
{ "authors": [ "bozheville", "cadichris", "dioxide", "dmaslov", "gaearon", "tuchk4" ], "repo": "facebookincubator/create-react-app", "url": "https://github.com/facebookincubator/create-react-app/issues/1343", "license": "mit", "license_type": "permissive", "license_source": "bigquery" }
210222563
Error in R It looks like this: "Error in compileCode(f, code, language = language, verbose = verbose) : Compilation ERROR, function(s)/method(s) not created! Warning message: running command 'make -f "C:/PROGRA~1/R/R-33~1.2/etc/i386/Makeconf" -f "C:/PROGRA~1/R/R-33~1.2/share/make/winshlib.mk" SHLIB_LDFLAGS='$(SHLIB_CXXLDFLAGS)' SHLIB_LD='$(SHLIB_CXXLD)' SHLIB="file2f455284a4.dll" OBJECTS="file2f455284a4.o"' had status 127 " Thanks for the report! Can you let me know what version of Windows and R you're using. And also verify that you've installed RTools? I use R version 3.3.2 (2016-10-31) and windows 10 pro 32bit. I installed RTools and I created prophet model. Now I have another error -"Error in UseMethod("predict") : no applicable method for 'predict' applied to an object of class "character"". My code : df0=data.frame(ds=(seq.Date(make_date(2017,01,01),make_date(2017,02,01),by = 1)),y = c(1:32)) pr0 =prophet(df0) make_future_dataframe(pr0,periods = 10) future0=make_future_dataframe(pr0,periods = 10) forecast0 <- predict(pr0, future0) Hmm this does not replicate for me, can you give me the output of > class(prr0) > pr0 Just to see what class was returned? I think it is a model of prophet. class(pr0) returned: "list" "prophet" And pr0 returned list of list ($growth, $changepoints and etc.) Does forecast0 <- prophet:::predict.prophet(pr0, future0) work? Yes, it works! thank you @bletham is this an S3 object issue? @seanjtaylor I'm not sure exactly sure what could have happened here. @marishadorosh can you paste the output of library(prophet) methods(predict) And could you try restarting R and see if the issue persists? (a lame suggestion I know). Fix confirmed in #94, but post if you have further issues with this!
gharchive/issue
2017-02-25T09:04:25
2025-04-01T04:34:13.758544
{ "authors": [ "bletham", "marishadorosh", "seanjtaylor" ], "repo": "facebookincubator/prophet", "url": "https://github.com/facebookincubator/prophet/issues/16", "license": "bsd-3-clause", "license_type": "permissive", "license_source": "bigquery" }
2134394130
Support reading Iceberg equality delete files (Design) Description In https://github.com/facebookincubator/velox/pull/7847we introduced IcebergSplitReader and the support of reading positional delete files. In this doc we will discuss the implementation of reading equality delete files. Iceberg Equality Deletes Overview A general introduction of equality delete files can be found at https://iceberg.apache.org/spec/#equality-delete-files. Some key takeaways: An equality delete file can contain multiple fields(could be sub-fields), and the values for the fields in the same row are in AND relationship. E.g. The following equality delete file equality_ids=[1, 3] 1: id | 2: category | 3: name -------|-------------|--------- 3 | NULL | Grizzly means: - A row is deleted if (id = 3 AND name = 'Grizzly') is true. Or - A row is selected if (id <> 3 OR name <> 'Grizzly') is true The equality delete field value could be NULL, which means a row is deleted if that field is NULL. equality_ids=[2] 1: id | 2: category | 3: name -------|-------------|--------- 3 | NULL | Grizzly The expression specifies: - A row is deleted if category IS NULL. Or - A row is selected if category IS NOT NULL An equality delete file could contain multiple rows. equality_ids=[1, 3] 1: id | 2: category | 3: name -------|-------------|--------- 3 | NULL | Grizzly 5 | Bear | Polar means: - A row is deleted if (id = 3 AND name = 'Grizzly') OR (id = 5 AND name = 'Polar') is true. Or - A row is selected if (id <> 3 OR name <> 'Grizzly') AND (id <> 5 OR name <> 'Polar') is true A split can contain multiple equality or positional delete files, and a row is deleted if any row expression in these delete files is true. E.g. a split may come with 3 delete files: Equality delete file 1 equality_ids=[1, 3] 1: id | 2: category | 3: name -------|-------------|--------- 3 | NULL | Grizzly Equality delete file 2 equality_ids=[3] `1: id | 2: category | 3: name -------|-------------|--------- 1 | NULL | Polar Positional delete file 1 100 101 means - a row is deleted iff (id = 3 AND name = 'Grizzly') OR (name = 'Polar') OR (row_in_file = 100) OR (row_in_file = 101) - a row is selected iff (id <> 3 OR name <> 'Grizzly') AND (name <> 'Polar') AND(row_in_file <>100) AND (row_in_file <> 100) A split can contain many equality and positional delete files. Design considerations Build Hash Tables or Filters/FilterFunctions? The equality delete files can be interpreted as logical expressions and become the remaining filters that can be evaluated after all rows in a batch is read out into the result vectors. Or alternatively, they can be used to construct a number of hash tables that will be probed against after all rows are read into the result vectors. Suppose the equality delete file contains the following information: equality_ids=[2, 3] 1: id | 2: category | 3: name -------|-------------|--------- 1 | Bear | Grizzly 3 | Bear | Brown It means - a row is deleted iff (category = 'Bear' AND name = 'Grizzly') OR (category = 'Bear' AND name = 'Polar') - a row is selected iff (category <> 'Bear' OR name <> 'Grizzly') AND (category <> 'Bear' OR name <> 'Polar') To build the hash tables, we will need to concatenate the hash values of column 2 and 3 together, and the hash table will contain two hash values for 'Bear##Grizzly' and 'Bear##Brown'. Then in the matching phase, the hash values of column 2 and 3 for all the rows in the output RowVectors would be calculated and concatenated before probing the hash table. 
If it's not a match, it means this row was definitely not deleted; if it is a match, then the row needs to be compared with the original delete values to confirm if it's really deleted. Only when all the values are the same the row shall be confirmed to have been removed. Note that the final comparison is necessary, because there is still a very small possibility that hash probe collision could happen, especially when there are many columns involved. Note that creating hash tables on single columns is not correct without additional processing. For example, suppose the base file is as follows: 1: id | 2: category | 3: name -------|-------------|--------- 1 | Bear | Grizzly 2 | Bear | Brown 3 | Bear | Polar 4 | Dog | Brown If we build one hash table on the second column "category" that contains {'Bear'}, and another hash table on name that contains {'Grizzly', 'Brown'}, then probing the category hash table to exclude rows with category = 'Bear' would incorrectly remove row 3, probing the name hash table to exclude 'Grizzly' and 'Brown' would incorrectly remove row 4. Taking logical AND or OR on the two probe results is also incorrect. Now let's take one step back, and build the hashtables on single values. So we have hashtable A on "category" with one value 'Bear', and another hash table B on "name" with value "Grizzly" and another hash table C on "name" with value "Brown", then a row would pass if (category <> 'Bear' OR name <> 'Grizzly') AND (category <> 'Bear' OR name <> 'Polar') by probing hash table A twice, and hash table B and C once, then compute the logical ORs and ANDs. However, this is no difference than just comparing the values and no hash tables are actually needed. The other way is to compile these delete values into logical expressions that can be executed as the remaining filter functions, or even domain filters that can be pushed down to the base file reader. This can be more efficient than taking the hash table approach. Firstly, filter pushdown can eliminate a lot of decoding costs; Secondly, computing and concatenating the hash values for all rows are very expensive. In fact, it is much slower than just performing simple comparisons on single column values. The latter could be efficiently done by SIMD operations, while the hash value computation cannot be efficiently implemented using SIMD. And lastly, the values need to be compared anyways even for the hash table approach. We should also notice that if we convert them into logical expressions as remaining filters, the existing Velox expression evaluation implementation can automatically choose the best evaulation strategy, e.g. whether to build hash tables, or do efficient SIMD logical comparisons when it sees fit. This is much more flexible than building fixed hash tables in the connector DataSources. In many cases, the ExpressionEvaluator can choose more efficient way to evaluate the equivalent expressions. Today, it's already very easy to construct the logical expressions, and for the equality delete to work, there is no additional code needed beyond the expression constructions what's so ever. The existing Velox data source and readers can already handle them, so the implementation would be fairly simple. Plus, we can additionally improve existing filter function / expression evaluation implementations that can potentially benefit other components of the driver pipeline in the future. So we propose to choose the remaining filter path and just convert the equality delete files into filters and filter functions. 
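To make the delete-file semantics above concrete, here is a minimal self-contained C++ sketch (not Velox code; the types and names are invented for illustration) of the predicate an equality delete file induces: a row is kept only if, for every delete row, at least one of the equality-id fields differs, with a NULL delete value matching a NULL data value.

```cpp
#include <cassert>
#include <cstdint>
#include <optional>
#include <vector>

// Stand-in for a single field value; std::nullopt models SQL NULL.
using FieldValue = std::optional<int64_t>;
// One row restricted to the delete file's equality-id columns.
using KeyRow = std::vector<FieldValue>;

// A delete-file field "matches" a data-file field when both are NULL
// or both hold the same value (a NULL delete value removes rows whose
// field IS NULL).
bool fieldMatches(const FieldValue& deleteVal, const FieldValue& dataVal) {
  if (!deleteVal.has_value()) {
    return !dataVal.has_value();
  }
  return dataVal.has_value() && *dataVal == *deleteVal;
}

// Conjunction-of-disjunctions form: the row survives iff for EVERY delete
// row at least one field differs, i.e. NOT (all fields match some delete row).
bool rowSurvives(const KeyRow& dataRow, const std::vector<KeyRow>& deleteRows) {
  for (const auto& deleteRow : deleteRows) {
    bool allMatch = true;
    for (size_t i = 0; i < deleteRow.size(); ++i) {
      if (!fieldMatches(deleteRow[i], dataRow[i])) {
        allMatch = false;
        break;
      }
    }
    if (allMatch) {
      return false; // this delete row removes the data row
    }
  }
  return true;
}

int main() {
  // Delete file with two equality-id columns and rows (3, 100) and (5, 200);
  // the values are integers purely to keep the sketch short.
  std::vector<KeyRow> deletes = {{3, 100}, {5, 200}};
  assert(rowSurvives({3, 200}, deletes));  // second field differs -> kept
  assert(!rowSurvives({3, 100}, deletes)); // matches first delete row -> removed
  return 0;
}
```

This is exactly the expression shape that the remaining-filter approach compiles, which is why no extra hash probe or value re-verification step is needed in that path.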
Where and How to open the equality delete files Query engines like Presto or Spark usually have some central coordinators, where the distributed plan and splits are created. The splits would then be sent to the workers and executed there using the Velox stack. A query may issue many splits for each table, and each of them may include many (may be up to hundreds) delete files. We have the choice to open the delete files in the coordinator, and/or in the workers. There are some basic design considerations here: Native workers running Velox need to have the ability to read both equality delete and positional delete files. Although it's possible for Prestissimo to open the (equality) delete files on the coordinator(s), we cannot assume other engines can process the equality delete files internally by themselves. The Iceberg splits come with a list of delete file paths and they could be positional or equality or both. With the normal scheduling implementations in most engines, the splits would be directly sent to the workers for local executions. An engine would need a fairly large amount of special handling to break this procedure and open the delete files, update the distributed query plan, creating filters that can be pushed down, or even change the scan into a join, etc. before sending out the splits. This makes the engines logic much more complex and the integration with Velox much harder. Opening files is a very expensive operation. Opening all delete files on the coordinator may become the bottleneck of the system. Even though we could cache the parsed expressions from each distinct delete file, opening hundreds of them on a single or a small number of coordinator nodes is still not practical. Based on these considerations, I think we need to implement reading equality deletes in Velox. However it doesn't mean we cannot open some of the equality delete files on the coordinator for optimization purpose. But that optimization should not be mandatory for the engines built on top of Velox. Performance Considerations We want to push down the filters as much, and as deep as possible. By pushing down the filters to the readers or even decoder's level, we can efficiently avoid the costs of decoding skipped rows, or even save some decompression costs. This savings could be huge if the selectivity rate is very small. We shall notice that some of the equality delete files and all positional delete files could be converted to TupleDomain filters or initial row numbers that can be pushed to the readers. In order to achieve this, we will need to extract the parts that can be pushed down, and guarantee the rest parts are evaluated or tested correctly. We want to avoid opening the delete files as much as possible A split may include hundreds of delete files, and a worker could receive many splits with the same set of delete files. Ideally, each delete file should be opened only once on one worker. This is because 1) opening files is expensive 2) expression compilation, or building hashtables that can be later probed are also not cheap. There're a couple of ways to achieve this Building a hash table for each HiveDataSource, or a long living cache on the compiled expressions on each node Convert the scan with equality delete files to a broadcast join, with the delete files becoming one data source, and the base file becoming another data source. 
This shows good improvements in Presto, but it also misses the opportunities of filter pushdown for reading the base file, more efficient evaluation comparing to hash joins Cross file filters and filter functions merge ExpressionEvaluator's ability to extract common sub-expressions logical expression simplifications Simpler plan shape, less data movement No additional data size limit for broadcast joins We want to reduce the amount of expression evaluation as much as possible. We have shown that the equality delete files can be interpreted as some conjunctive logical expressions. However, logical expression evaluations are also expensive when the expression contains many terms. We notice that Velox can already extract common sub expressions and flatten adjacent logical expressions, but the more general logical expression simplifications is still not implemented. Nonetheless, there are some ways to simplify the expressions for some simple cases for Iceberg. We will discuss them later. Design EqualityDeleteFileReader We will be introducing the EqualityDeleteFileReader class, and each reader is responsible for opening one equality delete file. The content will be read in batches, and for each batch, the logical expressions will be built and merged with existing remainingFilter in the HiveDataSource. The equality delete file schema is not fixed and can only be known at query run time. The equality Ids and the base file schema are used together to get the output row type and build the ScanSpec for the equality delete file. The equality Ids are the same as the id in Velox TypeWithId, and therefore we can directly use dwio::common::typeutils::buildSelectedType() to get the delete file schema. Note that this Id is not necessarily from a primitive type column, but could also be a sub-field from a complex type column. For example, deleting from an ARRAY[INTEGER] column c where c[i]=5 can also be expressed as an equality delete file. The field Ids for this column is 0: root 1: ARRAY 2: INTEGER Therefore the equality id for this predicate is 2, and the content of the equality delete file is value 5. Once we read the delete values, we can build the ExprSet and add it to the existing remainingFilterExprSet_ in HiveDataSource. Then the expresionEvaluator_ in HiveDataSource will evaluate them after all relevant vectors are loaded. There are two ways to add the newly created ExprSet to the existing remainingFilterExprSet: By conjuncting with the Expr in remainingFilterExprSet_ By adding the Expr to the array in remainingFilterExprSet_ Note that the current HiveDataSource assumes remainingFilterExprSet_ has only one Expr, and the owned SimpleExpressionEvaluator only evaluates the first Expr in an ExprSet. There're a couple of facts that we discover: SimpleExpressionEvaluator is only used in TableScan and HiveConnector The remainingFilterExprSet_ would always be a special kind of ExprSet that it only contains logical expressions. While I think SimpleExpressionEvaluator should indeed evaluate all Exprs in the passed in ExprSet, I think we can alternative create a new LogicalExpressionEvaluator, in which we can have special logical expression evaluation improvements in the future. Then it seems that adding the new Expr to remainingFilterExprSet_ as an array element is the most clean and simple way. Extraction of domain filters When the equality delete file only has one field, we can extract it as a domain filter. 
Such filter can be pushed down to the readers and decoders, where performance savings could happen. In this case we will create a NOT IN filter for it. This is done in connector::hive::iceberg::FilterUtil::createNotInFilter(), which in turn would call into the utility functions in common:Filter.h/cpp. The values will be de-duplicated and nulls will be treated separately. Velox can optimize it into different kinds of filter, e.g. a range filter when there is only one value. Note that we need to verify the field is not a sub-field, since Velox currently doesn't support pushing down filters to sub-fields. This restriction will be removed once Velox supports sub-field filter pushdowns. An equality delete file with multiple fields cannot be pushed down as domain filters at this moment, no matter if there's a single row or multiple rows. E.g. this delete file can be interpreted as id <> 3 || name <> 'Grizzly'. Currently Velox does not support pushing down disjunctives but we may do it in the future. equality_ids=[1, 3] 1: id | 2: category | 3: name -------|-------------|--------- 3 | NULL | Grizzly Domain Filter Merge A split may come with multiple equality delete files. Some of them may have the same schema. If they all have the same single field, the extracted domain filters will be deduped and merged with the existing one. E.g. Equality delete file 1 equality_ids=[2] 2: category --------------- mouse Equality delete file 2 equality_ids=[2] 2: category --------------- bear mouse The domain filter built from these 2 files will be category NOT IN {'bear', 'mouse'} This is using the mergeWith api in the Filter class. Remaining Filter Function Merge If the equality delete files have the same schema but not the single field, For example Equality delete file 1 equality_ids=[1, 3] 1: id | 2: category | 3: name -------|-------------|--------- 3 | NULL | Winnie Equality delete file 2 equality_ids=[1, 3] 1: id | 2: category | 3: name -------|-------------|--------- 4 | NULL | Micky 3 | NULL | Winnie This will create 2 Expr in the final ExprSet: - (`id <> 3 || name <> 'Winnie') - (`id <> 3 || name <> 'Winnie') && (`id <> 4 || name <> 'Micky') Today Velox supports common sub-expressions recognition in the ExpressionEvaluator, and such expression would be evaluated only once. In this example (`id <> 3 || name <> 'Winnie') evaluation result would be cached internally and does not need to be evaluated twice. Logical Expression Simplification As far as I understand, Velox can do logical expression flattening, but still can't automatically simplify the logical expression. For example, the expression a AND (b AND (c AND d)) would be flattened as AND(a,b,c,d), but a AND (a OR b) cannot be automatically simplified to a, therefore to evaluate a AND (a OR b), a and b will both be evaluated, and one AND and one OR operation need to be performed. While we hope to improve logical expression simplification in the future, we can still do some simple improvements for Iceberg now. An Iceberg split can come with multiple equality delete files and their schemas could have overlaps. 
For example Equality delete file 1 equality_ids=[1, 2, 3] 1: id | 2: category | 3: name -------|-------------|--------- 1 | mouse | Micky 2 | mouse | Minnie 3 | bear | Winnie 4 | bear | Betty Equality delete file 2 equality_ids=[2] 2: category --------------- mouse Equality delete file 3 equality_ids=[2, 3] 2: category | 3: name ----------------|------------- bear | Winnie We see that equality delete file 2 is on the category column and would remove all tuples with value mouse. This means that the first two rows in equality delete file 1 are already contained and doesn’t need to be read or compiled. Similarly, the single row in file 3 contains row 3 in file 1, therefore row 3 in file 1 doesn’t need to be read or compiled. The simplified delete files are like the follows: equality_ids=[1, 2, 3] 1: id | 2: category | 3: name -------|-------------|--------- 4 | bear | Betty and equality_ids=[2] 2: category --------------- mouse and equality_ids=[2, 3] 2: category | 3: name ----------------|------------- bear | Winnie With this simplification, the resulted expression would be simpler and the evaluation cost will be reduced. When the delete file only has one field, the domain filter built from it can be used as a filter when reading other equality delete files whose fields include this one. In the above example, category <> 'mouse' can be pushed to file 1, whose row 1 and 2 would be filtered out. This not only helps final expression evaluation, but also improve the read performance for reading file 1. If the delete file has more than 1 field, the situation is more complex. In the above example, file 3 would be compiled to category <> 'bear' OR name <> 'Winnie, but it cannot be pushed to file 1 nor the base file directly because it's a disjunctive expression. So far Velox only supports domain filters in conjunctive expressions. So for now we will only use single field equality delete files to do the simplifications. For this, we will go over the equality ids from all equality delete files and pick all single field ones to read first. Then the filters will be pushed to the other equality file readers. In the future, we can even implement disjunctive expression push downs. For example category <> 'bear' OR name <> 'Winnie can be pushed to the SelectiveColumnReaders, with the category and name columns as a ColumnGroup. This will save the cost of having to read all values out before applying the filter function as a remaining filter, and the selectivity vector can be reused among them. Moreover, the reduction of rows from applying this filter directly on this ColumnGroup would benefit the reading of other columns later. Expression Caching We know that a unique HiveDataSource object is created for a unique TableScan operator, and the splits received by a HiveDataSource instance belong to the same query and same table. Additionally for Iceberg splits, they must be reading the same snapshot of an Iceberg table. When the HiveDataSource receives a new Iceberg split with some equality delete files, it would create a new IcebergSplitReader, which would open the delete files. If the equality delete file can be interpreted into some domain filters or filter functions, the scanSpec_ and remainingFilterExprSet_ in HIveDataSource may need to be updated. Currently, the Iceberg library selects the qualified data and delete files based on partitions and snapshot Ids or transaction sequence numbers. 
For a single transaction, the snapshot is fixed, and all delete files from the same partition would go with the base data files when the splits are enumerated. So we can assume for now that all splits received from the same partition are the same for a single HiveDataSource. However, the delete files for different partitions could be different, and the splits from multiple partitions could arrive out of order. If we updated the scanSpec_ and remainingFilterExprSet_ for previous partition, we will need to restore them back to the original before applying the current set of delete files. As the first implementation, we will make a copy of these objects in the IcebergSplitReader and restore them back when the IcebergSplitReader is destructed. In some user's workloads, the deletions are quite frequent, and the number of delete files coming with a split for a subsequent SELECT query can be many. For all splits in a partition, the delete files may be the same. We don't want to repeatedly read such equality delete files for every split a HiveDataSource needs to handle. One way of overcoming this is to build an expression cache. There are 2 levels of the caching ideas: A hash table in HiveDataSource A process wide cache for all Iceberg scans. In 1, the key of the hash table is <partition, snapshotId> and the values are the compiled filters and expressions. In 2, the key of the cache is <table, partition, snapshotId> and the values are the compiled filters and expressions. To avoid excessive contentions, we can divide the cache into multiple levels. The implementation will be adjusted with more experiments and observations of the customer workloads in the future. If the Iceberg library changes its TableScan or FileScan in the future and can additionally prune the delete files based on each individual base data files, we will need to change the cache keys and add the information for the base file. We will work on caching in the future when we understands the workloads better. LogicalExpressionEvaluator Improvements Currently the remaining filter is evaluated in HiveDataSource::evaluateRemainingFilter() vector_size_t HiveDataSource::evaluateRemainingFilter(RowVectorPtr& rowVector) { … expressionEvaluator_->evaluate( remainingFilterExprSet_.get(), filterRows_, *rowVector, filterResult_); auto res = exec::processFilterResults( filterResult_, filterRows_, filterEvalCtx_, pool_); return res; } This code evaluates the remainingFilterExprSet_ as a general expression instead of a special logical expression, and would put the result of the Expr's in remainingFilterExprSet_ in a FlatVector as bool type, then processFilterResults() would perform logical AND/OR on these vectors. This incurs additional memory copies. Moreover, it contains special handling of nulls, while the logical expressions would NOT produce NULLs at all, so this part of cost can be saved as well. Our newly introduced LogicalExpressionEvaluator will have its own evaluate() implementation that is more performant for logical expressions. Testing Prestissimo End To End Tests In addition to unit tests we will have TPCH, TPCDS built-in tests in Presto IcebergExternalWorkerQueryRunner Microbenchmarks We will build microbenchmarks in Velox and Presto and cover both delete files. cc @tdcmeehan @nmahadevuni Agree that this must be done in worker/Velox not in coordinator We need to support multi-column ID pushdown, so build a hash table with multi-key support is needed. The remaining filter method only works for single column key. 
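For the single-column versus multi-column key discussion above, here is a minimal standalone sketch of the multi-key hash-table alternative (plain C++ with the standard library only; makeKey and the container choice are invented for illustration and are not Velox's mutation or HashTable interfaces): concatenate the key columns into a composite key and treat a probe hit as "deleted". Because the sketch stores full keys rather than only their hashes, the value re-verification step described in the design section above is not needed here.

```cpp
#include <iostream>
#include <string>
#include <unordered_set>
#include <vector>

// Composite key over the delete file's equality-id columns. A real
// implementation would hash the raw column values; a delimited string
// is used here only to keep the sketch short.
std::string makeKey(const std::vector<std::string>& fields) {
  std::string key;
  for (const auto& f : fields) {
    key += f;
    key += '\x1f'; // unit separator, to avoid accidental collisions
  }
  return key;
}

int main() {
  // Delete file with equality_ids = [category, name].
  std::unordered_set<std::string> deletedKeys;
  deletedKeys.insert(makeKey({"Bear", "Grizzly"}));
  deletedKeys.insert(makeKey({"Bear", "Brown"}));

  // Base-file rows restricted to the same columns.
  std::vector<std::vector<std::string>> rows = {
      {"Bear", "Grizzly"}, {"Bear", "Polar"}, {"Dog", "Brown"}};

  for (const auto& row : rows) {
    bool deleted = deletedKeys.count(makeKey(row)) > 0;
    std::cout << row[0] << "/" << row[1]
              << (deleted ? " -> deleted\n" : " -> kept\n");
  }
  return 0;
}
```

With the composite key, the Bear/Polar and Dog/Brown rows from the earlier example are correctly kept, which single-column hash tables alone could not guarantee.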
@tdcmeehan Thanks for the explanation, so from the point of view of execution engine that can be used in both batch and streaming cases, equality deletion is used more frequently in row oriented formats with keys, e.g. Avro? And Avro only has single column key, so the priority is not high to support multi column key deletion. However I think as a general execution engine, we need to get it right (at least at design level) in the first place, so that if someone comes with a row-oriented format with multi-column keys, we can cope with it without major change. In this sense I still think investment in proper support of multi-column keys is worth doing. Also another point is in big company sometimes the producer of the data is not aware of the consumers. It's not uncommon that streaming data is stored in data warehouse and later read by a different tool. So on read side we need to be as flexible as possible. @Yuhta I don't think it's necessarily correlated to the underlying file format, it's really whether or not the system generating the deletes wants or is capable of performing a table scan. This is because positional deletes require the position, which requires a table scan to determine the row that is to be deleted. I think this applies equally to any file format. To be clear, I believe it is beneficial and useful for Velox to support reading equality deletes as it is consistent with Velox's mission to be a pervasive execution layer in query engines. I am simply adding some context on when one typically sees equality delete files in real world settings. We need to support multi-column ID pushdown, so build a hash table with multi-key support is needed. The remaining filter method only works for single column key. I can work on adding a common interface for multi-key hash table in mutation pushdown. @Yuhta The remaining filter approach already supports multi-column expressions, e.g. a <> 1 OR b <> 1. My draft PR https://github.com/facebookincubator/velox/pull/8728 already works for that case and there is a test for multiple column delete values. And as I explained in "Build Hash Tables or Filters/FilterFunctions?" section, I believe the remaining filter approach is more advantageous than building additional mutation hash tables in all dimensions including performance, implementation easiness, and code cleanliness. I don't think you need to add any new interface for multi-key hash table if you just go with the remaining filter way. Also, correct me if I understood this wrong: I think the "Disjunctive(OR) predicate pushdown" I mentioned is different than your "multi-column ID pushdown". The essential point is to push the OR predicates to the ColumnReaders and decoders as domain filters, so the benefit of filter pushdown can be honored. E.g. In a predicate like (a = 1 OR b = 1) AND (c = 1), today we can only push down c = 1, but my idea is that we can also push (a = 1 OR b = 1) down to the ColumnReader level in the future. Of course this needs fundamental changes in the current reader implementations. It was an idea emerged while I was working on Iceberg, and I don't think any other engines have it. I want to try it in the future but not the near future. Whereas the "multi-column ID pushdown" you mentioned seems to be AFTER all data is read since you mentioned you wanted to build hash tables in mutation object, thus won't have the benefit of filter pushdown. Thanks @tdcmeehan for the background introduction. 
While not being able to do a scan WAS the reason why some engines produce equality delete files, the performance benefit will be another major reason why users want to use equality deletes in the future. I believe equality deletes WILL out-perform the positional deletes after we implement it in Prestissimo, because 1) we will save a big scan and semi join in the delete query. 2) we can push down some of the equality delete as domain filters to Velox Column readers, plus some other optimizations that are specific to equality delete files only. Given the fact that many engines scan is not as efficient, making the "merge" happen in Velox/Presissimo will have better performance. Also the dynamic filter support is limited nowadays, but for equality delete, we can definitely pushdown domain filters while not worrying about the data size etc. So we will use equality delete in TPCDS publication and implement equality delete in Prestissimo in the next step. @yingsu00 For single column key we can push it down to decoders, but I don't think you can do the same for multi-column keys due to the correlation between columns. Putting huge list of ORs in remaining filter would just destroy the performance. So For single key, we merge a filter to the corresponding column, which will be pushdown to decoder level For multi-key, we need to build the hash table and filter after we read all the key columns @yingsu00 For single column key we can push it down to decoders, but I don't think you can do the same for multi-column keys due to the correlation between columns. Putting huge list of ORs in remaining filter would just destroy the performance. So For single-column key, we merge a filter to the corresponding column, which will be pushdown to decoder level For multi-column key, we need to build the hash table and filter after we read all the key columns (or can you show me how do you push down OR filter on multiple columns?) @Yuhta Thanks for your questions. This is a very preliminary idea now. It's essentially to pushdown the expression evaluation into ColumnReaders for some special expressions like logical expressions with non-overlapping columns. e.g. for expression (b=1 OR c=1) AND a=1, right now we will have a domain filter a=1 which will be pushed down to the ColumnReader, and all rows passing a=1 would be decoded and extracted into a b vector and a c vector, then either build a hash table directly or utilize existing expression evaluation framework to evaluate (b=1 OR c=1). Note that this part is done relative less efficiently now, since the expression evaluation is aimed for all general cases, and calculating hash values for multiple columns is expensive. Also all rows, even those don't satisfy (b=1 OR c=1), would have to be decompressed, decoded and copied out in order to evaluate (b=1 OR c=1). Now, if we can create a ColumnReader to read a first(as is done today), and a GroupColumnReader that contains ColumnReader for b and c, then after a was read, the GroupColumnReader's read() function would be The inner ColumnReader for b would produce a bitmap for b=1, and the inner ColumnReader for c would produce a bitmap for c=1, note that it doesn't extract or copy the data out at this moment. If we push down the filter to the encoded data, we don't even need to decode the data now. The GroupColumnReader can directly OR the two bitmaps and produce the rowset passing (b=1 OR c=1). Unlike the hash table or general expression evaluation, this doesn't need to allocate new memory and can be done really fast. 
If b or c are not required to be extracted, we are done. Otherwise extract the data with the ORed bitmap. This may need to decode more data than step 1, but now the data is all in memory and extracting them now may still be a lot faster than reading them all and filter later. This has multiple benefits: Filter b=1 and c=1 can be pushed down on column b and c respectively, thus avoiding the cost to decode unnecessary data It may benefit reading the rest columns other than a,b,c, if any, if b=1 OR c=1 can remove a lot of rows. So I think it would benefit the performance generally, especially after we push the filters to ENCODED data. Even if we don't push the filters to ENCODED data, it may still be faster in a lot of cases. You said " Putting huge list of ORs in remaining filter would just destroy the performance", I think you meant the case that the number of values in the expression is much smaller than the number of row to be read, such that the bitmap may be larger than the hash table. But remember, you have to extract and copy all rows first, which is mostly larger than the bitmap itself. For each batch, we read at most 10,000 rows, and the bitmap for it would just be around 1KB for each column. Actually, the RowSet itself for each ColumnReader nowadays may be larger than that. Even your hash table may be larger than several KBs itself. And most importantly, calculating hash values on all relevant columns for all rows may be much more expensive. But I agree that in some cases building hash table or evaluating the remaining filter afterwards may be faster, e.g. when the equality delete file contains many many columns. So I think the execution strategy should be self-adapted at run time. And the criteria and policy shall be determined on extensive performance testing. Anyways, this idea is preliminary and needs a lot of refinement and generalization. I may try prototyping something next year, but not in this Iceberg work. But your feedback and discussion are very welcome! The inner ColumnReader for b would produce a bitmap for b=1, and the inner ColumnReader for c would produce a bitmap for c=1, note that it doesn't extract or copy the data out at this moment. If we push down the filter to the encoded data, we don't even need to decode the data now. At this point you already go through all the key data and pay almost the price of reading them. The saving from selective reading is mainly by skipping, in this case you cannot skip reading b according to the values you read in a, so it will not be much different from using a hash table, and the framework would be much more complex. The saving we are aiming for is on payload data, so pushing down to key column readers does not seem worth doing it except for single column. For the first implementation I would suggest we do the single column case only, since that is the thing everyone is agreed on, and covers most of the real world use cases. At this point you already go through all the key data and pay almost the full price of reading them (decompressing and decoding, Not quite. The GroupColumnReader does need to decompress the whole ColumnChunk, but it does NOT necessarily need to decode all data, if we can evaluate the filter on ENCODED data. you cannot skip reading c according to the values you read in b, so it will not be much different from using a hash table I think there will be a difference, but I don't have time to try it now so we can forget it. 
Whether to try it or not also depends on the portion of the remaining filter or hash table mutation cost in the whole query. If it's already fast then no need to do it. We will send a PR for IcebergReadBenchmark and it will cover the equality delete case. Then we will have more insights on where the time is spent. Also I don't see we can use this to speed up mutability. The method does not work well with tree of logic expression. How do you push down (b = 1 OR c = 1) AND (b = 2 OR c = 2)? The naive way is to have two GroupColumnReaders, one for (b = 1 OR c = 1) and the other for (b = 2 OR c = 2). Thus we'll have to read b and c twice, but we can skip some rows for (b = 2 OR c = 2), and also use filter on ENCODED data to avoid decoding all data. Then the improved version is to Get all distinct single columns and the predicates on them. In this example, b=1, b=2, c=1, c=2. Note that we don't need to care how they are joined together While reading a column, apply all relevant filters on the encoded data. In this example, we can apply b=1 and b=2 on b. The comparisons can be done with b loaded in some registers, so we don't need to load b twice. Then each of them will produce a SelectivityVector. Then perform the logical computations to get the final SelectivityVector, and extract the values if necessary. In this approach, each row uses 4 comparisons and 3 logical ops. This approach can also be applied to the LogicalExpressionEvaluator. The current general ExprSet evaluation can recogonize common expressions. If the expression is (b = 1 OR c = 1) AND (b = 1 OR c = 2), then b=1 is common and will only be executed once. But for (b = 1 OR c = 1) AND (b = 2 OR c = 2) there is no common expression, and both b and c would be read twice. I agree this approach has big limitations, it only works for logical expressions, and doesn't work for complex functions and those involve multi-columns like x+y > 1. But it for Iceberg it's good. Now let's consider the hash table approach for (b = 1 OR c = 1) AND (b = 2 OR c = 2), for this we'll need to build 4 hash tables b = 1, c =1, b = 2, c = 2, and the probe results will need to be ANDed and ORed to get the final result. To build the hash tables, you'll need to apply hash function on these 4 values(actually 2 distinct, 1 and 2, but you don't know if they're distinct). And to verify if a row satisfies, you will need to hash 2 values and use 4 probes. You can say if b=1 then don't need to probe b=2, but this if check is a perf killer. Alternatively we can convert it to disjunctive of conjunctives: (b=1 AND c=2) OR(c=1 AND b=2), then you need one hash table on b##c that contains two values 1##2 and 2##1. This looks better, and building the hash table requires 4 hash and some bit shift, and probing requires 2 hash and some bit shift for every row. If it's a match, you will need to compare again if the values really match. Let's be generous and don't cost the hash table build. Then for each row, it needs 2 hashes, 1 probe, and 2 comparisons. It may look faster than 4 comparisons and 3 logical ops, but hash is many times slower than comparison, and probe is also costly. While using the above approach, or the improved expression evaluation, the 4 comparisons and 3 logical ops can be done in a very simple loop and all in simple arrays. So for this simple case the hash table will be slower. But when there are many many values, e.g. (b = 1 OR c = 1) AND (b = 2 OR c = 2) AND (b = 3 OR c = 3) ... 
AND (b = 1,000,000 OR c = 2,000,000), I would expect the hash table approach be faster. For the first implementation I would suggest we do the single column case only, since that is the thing everyone is agreed on, and covers most of the real world use cases. Yes this is exactly what was done in the PR https://github.com/facebookincubator/velox/pull/8728. It only pushes single column filters and the rest are evaluated as remaining filters. All rest optimizations are not included in this PR. Your review is much appreciated. I'll ping you when the tests pass. Actually this particular optimization(push down disjunctives) is the last thing I may want to try, since it requires lots of code change and thus bigger risk. The other mentioned optimizations, e.g. logical expression simplifications and caching, will be tried first, if necessary. So we agree on this. if we can evaluate the filter on ENCODED data. That's the only way to achieve it. However I am not sure if the gain worth it, it might be still slower than just copying the key columns out and probing a hash table. (b = 1 OR c = 1) AND (b = 1 OR c = 2) Sorry I made a mistake, the relevant expression should be (b = 1 AND c = 1) OR (b = 2 AND c = 2). In this case we can use a single hash table supporting 2 column keys, but with the remaining filter approach we can only get a compromised filter on each column, and the filtering expression evaluation is still not avoidable. Yes this is exactly what was done in the PR https://github.com/facebookincubator/velox/pull/8728. It only pushes single column filters and the rest are evaluated as remaining filters. I would suggest we passing all these key information separately from presto_cpp into Velox. Then we can decide in Velox, whether we want to build a hash table using these keys, or we just convert them into remaining filter expression.
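As a footnote to the exchange above, here is a minimal sketch (plain C++; not Velox's SelectivityVector or SIMD paths) of the "evaluate each distinct single-column comparison once, then combine the resulting bitmaps with cheap logical ops" idea, applied to the (b = 1 AND c = 1) OR (b = 2 AND c = 2) example.

```cpp
#include <cstdint>
#include <iostream>
#include <vector>

// Evaluate one single-column comparison over a batch, producing a bitmap
// (one byte per row for simplicity; a real reader would use packed bits
// and SIMD, and could evaluate on encoded data).
std::vector<uint8_t> eq(const std::vector<int64_t>& col, int64_t value) {
  std::vector<uint8_t> bits(col.size());
  for (size_t i = 0; i < col.size(); ++i) {
    bits[i] = col[i] == value;
  }
  return bits;
}

int main() {
  // A small batch of the two key columns b and c.
  std::vector<int64_t> b = {1, 2, 1, 3};
  std::vector<int64_t> c = {1, 2, 2, 1};

  // Each distinct single-column predicate is evaluated exactly once ...
  auto b1 = eq(b, 1), b2 = eq(b, 2), c1 = eq(c, 1), c2 = eq(c, 2);

  // ... then combined per row with cheap logical ops. The delete condition
  // is (b = 1 AND c = 1) OR (b = 2 AND c = 2); a row is kept when it does
  // not hold.
  for (size_t i = 0; i < b.size(); ++i) {
    bool deleted = (b1[i] && c1[i]) || (b2[i] && c2[i]);
    std::cout << "row " << i << (deleted ? ": deleted\n" : ": kept\n");
  }
  return 0;
}
```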
gharchive/issue
2024-02-14T13:47:26
2025-04-01T04:34:13.824709
{ "authors": [ "Yuhta", "tdcmeehan", "yingsu00" ], "repo": "facebookincubator/velox", "url": "https://github.com/facebookincubator/velox/issues/8748", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
2496110964
Add sessionTimezone and adjustTimestampToTimezone to DWRF reader and writer options As suggested here This PR refactors the DWRF Reader and Writer related APIs so that we can get sessionTimezone and adjustTimestampToTimezone in DWRF's ColumnWriter and ColumnReader. No functionality change. @Yuhta The new commit resolves the conflict with the main branch. Please review it again if necessary. @wypb Would you rebase this PR so we can merge it? @wypb Would you rebase this PR so we can merge it? @mbasmanova I have synced the latest code, thanks. @wypb Can you rebase to the latest main so we can merge it? Hi @Yuhta I have synced the latest code. The code looks good now, can you resolve the conflicts? The code looks good now, can you resolve the conflicts? @Yuhta Done, thank you for your review. Hi @xiaoxmeng Addressed all the comments, can you help review again? Thanks! @wypb There are some build errors. @wypb There are some build errors. @Yuhta I've already fixed it, thank you for your reply.
gharchive/pull-request
2024-08-30T03:19:31
2025-04-01T04:34:13.833152
{ "authors": [ "Yuhta", "mbasmanova", "wypb" ], "repo": "facebookincubator/velox", "url": "https://github.com/facebookincubator/velox/pull/10895", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
1647245159
Add bit_xor Spark aggregate function Add bit_xor Spark aggregate function: https://spark.apache.org/docs/3.3.1/api/sql/#bit_xor. Extract common bitwise aggregate logic into lib/BitwiseAggregateBase.h Add SparkAggregationFuzzerTest.cpp to run fuzzer test. Fixes #4462 Hi @mbasmanova I move bitwise shared code into functions/lib for now. If we want to extract shared aggregate code into functions/lib/aggregates, we have to create a new static library like velox_functions_aggregate_common, and rename current static library name under functions/prestosql such as velox_aggregates to velox_functions_aggregates or velox_functions_prestosql_aggregates to make library naming more reasonable. It would better open another PR to do this code refactor if you like. Hi @mbasmanova Looks good to me but CI is failing. Please, take a look. CI failed due to network problem, I will trigger again when network be good. Let me run a 1 hour fuzzer test. @Yohahaha Import latest main to fix the builds - arrow urls have changed. This 1 hour fuzzer test is OK. ./velox/exec/tests/spark_aggregation_fuzzer_test --seed 123 --duration_sec 3600 --logtostderr=1 --minloglevel=0 --repro_persist_path=/tmp/aggregate_fuzzer_repro --num_batches=1 I0404 00:20:49.380654 1456174 AggregationFuzzer.cpp:604] ==============================> Done with iteration 42726 I0404 00:20:49.380683 1456174 AggregationFuzzer.cpp:1049] Total functions tested: 1 I0404 00:20:49.380685 1456174 AggregationFuzzer.cpp:1050] Total masked aggregations: 7615 (17.82%) I0404 00:20:49.380694 1456174 AggregationFuzzer.cpp:1052] Total global aggregations: 3851 (9.01%) I0404 00:20:49.380698 1456174 AggregationFuzzer.cpp:1054] Total group-by aggregations: 34650 (81.10%) I0404 00:20:49.380702 1456174 AggregationFuzzer.cpp:1056] Total distinct aggregations: 0 (0.00%) I0404 00:20:49.380707 1456174 AggregationFuzzer.cpp:1058] Total window expressions: 4226 (9.89%) I0404 00:20:49.380709 1456174 AggregationFuzzer.cpp:1060] Total aggregations verified against DuckDB: 30482 (71.34%) I0404 00:20:49.380713 1456174 AggregationFuzzer.cpp:1062] Total failed aggregations: 0 (0.00%) But I got a error of Window. 
./velox/exec/tests/spark_aggregation_fuzzer_test --seed 1848296956 --duration_sec 10 --logtostderr=1 --minloglevel=0 --repro_persist_path=/tmp/aggregate_fuzzer_repro I0404 09:55:42.165649 3411656 AggregationFuzzer.cpp:251] Total functions: 1 (4 signatures) I0404 09:55:42.165830 3411656 AggregationFuzzer.cpp:253] Functions with at least one supported signature: 1 (100.00%) I0404 09:55:42.165841 3411656 AggregationFuzzer.cpp:257] Functions with no supported signature: 0 (0.00%) I0404 09:55:42.165845 3411656 AggregationFuzzer.cpp:259] Supported function signatures: 4 (100.00%) I0404 09:55:42.165848 3411656 AggregationFuzzer.cpp:263] Unsupported function signatures: 0 (0.00%) I0404 09:55:42.165864 3411656 AggregationFuzzer.cpp:532] ==============================> Started iteration 0 (seed: 1848296956) I0404 09:55:42.175367 3411656 AggregationFuzzer.cpp:624] Executing query plan: -- Window[partition by [p0] order by [s0 ASC NULLS LAST, s1 ASC NULLS LAST, s2 ASC NULLS LAST, s3 ASC NULLS LAST, s4 ASC NULLS LAST] w0 := bit_xor(ROW["c0"]) RANGE between UNBOUNDED PRECEDING and CURRENT ROW] -> c0:INTEGER, p0:BOOLEAN, s0:VARBINARY, s1:DATE, s2:TINYINT, s3:INTEGER, s4:VARCHAR, row_number:BIGINT, w0:INTEGER -- Values[1000 rows in 10 vectors] -> c0:INTEGER, p0:BOOLEAN, s0:VARBINARY, s1:DATE, s2:TINYINT, s3:INTEGER, s4:VARCHAR, row_number:BIGINT I0404 09:55:42.192199 3411672 Task.cpp:825] All drivers (1) finished for task test_cursor 1 after running for 17 ms. I0404 09:55:42.192220 3411672 Task.cpp:1474] Terminating task test_cursor 1 with state Finished after running for 17 ms. I0404 09:55:42.194061 3411656 AggregationFuzzer.cpp:639] [ROW ROW<c0:INTEGER,p0:BOOLEAN,s0:VARBINARY,s1:DATE,s2:TINYINT,s3:INTEGER,s4:VARCHAR,row_number:BIGINT,w0:INTEGER>: 1000 elements, no nulls] I0404 09:55:42.194113 3411656 AggregationFuzzer.cpp:893] SELECT c0, p0, s0, s1, s2, s3, s4, row_number, bit_xor(c0) OVER (partition by p0 order by s0, s1, s2, s3, s4) FROM tmp ../velox/exec/tests/utils/QueryAssertions.cpp:1076: Failure Failed Expected 1000, got 1000 1 extra rows, 1 missing rows 1 of extra rows: 1100464885 | false | "thK4&Wn:Alavd)^7'sLNL(L5v7*\\DnfcJSZ$~H&_q.bLD!PuAL+>!|PLiSryw8$" | "909831-02-23" | 4 | 638228802 | null | 570 | 458356396 1 of missing rows: 1100464885 | false | "thK4&Wn:Alavd)^7'sLNL(L5v7*\\DnfcJSZ$~H&_q.bLD!PuAL+>!|PLiSryw8$" | "909831-02-23" | 4 | 638228802 | null | 570 | 1461150393 Unexpected results E0404 09:55:42.450125 3411656 Exceptions.h:68] Line: ../velox/exec/tests/AggregationFuzzer.cpp:961, Function:verifyWindow, Expression: assertEqualResults(expectedResult.value(), {resultOrError.result}) Velox and DuckDB results don't match, Source: RUNTIME, ErrorCode: INVALID_STATE I0404 09:55:42.451520 3411656 AggregationFuzzer.cpp:452] Persisted input: /tmp/aggregate_fuzzer_repro/velox_vector_ojhYVB and plan: /tmp/aggregate_fuzzer_repro/velox_plan_9RJKWC terminate called after throwing an instance of 'facebook::velox::VeloxRuntimeError' what(): Exception: VeloxRuntimeError Error Source: RUNTIME Error Code: INVALID_STATE Reason: Velox and DuckDB results don't match Retriable: False Expression: assertEqualResults(expectedResult.value(), {resultOrError.result}) Function: verifyWindow File: ../velox/exec/tests/AggregationFuzzer.cpp Line: 961 I would find the reason and try to fix it. cc @mbasmanova But I got a error of Window. 
./velox/exec/tests/spark_aggregation_fuzzer_test --seed 1848296956 --duration_sec 10 --logtostderr=1 --minloglevel=0 --repro_persist_path=/tmp/aggregate_fuzzer_repro As #4502 discussed, aggregate function's result for peer rows are same, I also verified this behavior in DuckDB. But the error of above fuzzer test is DuckDB in Velox for peer rows return different results. DuckDB results: 2035041034 false "rD~FT-{@{oAoaXBj`wk1[mw,fI0YnHwU<VGU,9d0O`Zx" null 47 755690227 "Nq~w3lCr}69\"K%t&nYm|6i4?zMILjmCZZqS9z`,G`GGd~>`^JFm`]AjY;!S/!QDk" 872 377528396 1100464885 false "thK4&Wn:Alavd)^7'sLNL(L5v7*\\DnfcJSZ$~H&_q.bLD!PuAL+>!|PLiSryw8$" "909831-02-23" 4 638228802 null 570 1461150393 1279693845 false "thK4&Wn:Alavd)^7'sLNL(L5v7*\\DnfcJSZ$~H&_q.bLD!PuAL+>!|PLiSryw8$" "909831-02-23" 4 638228802 null 585 458356396 Velox results: 467: {2035041034, false, rD~FT-{@{oAoaXBj`wk1[mw,fI0YnHwU<VGU,9d0O`Zx, null, 47, 755690227, Nq~w3lCr}69"K%t&nYm|6i4?zMILjmCZZqS9z`,G`GGd~>`^JFm`]AjY;!S/!QDk, 872, 377528396} 468: {1100464885, false, thK4&Wn:Alavd)^7'sLNL(L5v7*\DnfcJSZ$~H&_q.bLD!PuAL+>!|PLiSryw8$, 909831-02-23, 4, 638228802, null, 570, 458356396} 469: {1279693845, false, thK4&Wn:Alavd)^7'sLNL(L5v7*\DnfcJSZ$~H&_q.bLD!PuAL+>!|PLiSryw8$, 909831-02-23, 4, 638228802, null, 585, 458356396} Penultimate value of row is pre-generated row number, let me use illustrate the correct computation procedure: row 570 and 585 have same order by keys, so they are peer rows. compute aggregate functions for peer rows: 1100464885(row 570) bit_xor 1279693845(row 585) = 231823072 compute peer rows' agg result with previous non-peer row once: 377528396(row 872) bit_xor 231823072 = 458356396 apply above result to each peer row I'm not sure how DuckDB in Velox works, @mbasmanova could you help take a look on this? @Yohahaha It looks like a bug in the fuzzer. Velox plan specifies "RANGE between UNBOUNDED PRECEDING and CURRENT ROW" frame: -- Window[partition by [p0] order by [s0 ASC NULLS LAST, s1 ASC NULLS LAST, s2 ASC NULLS LAST, s3 ASC NULLS LAST, s4 ASC NULLS LAST] w0 := bit_xor(ROW["c0"]) RANGE between UNBOUNDED PRECEDING and CURRENT ROW] -> While DuckDB SQL doesn't specify the frame. In Presto, "RANGE between UNBOUNDED PRECEDING and CURRENT ROW" is the default frame, but it could be that in DuckDB the default is "ROWS ...". I looked at DuckDB docs, but could't find any mention of what the default frame is. SELECT c0, p0, s0, s1, s2, s3, s4, row_number, bit_xor(c0) OVER (partition by p0 order by s0, s1, s2, s3, s4) FROM tmp https://prestodb.io/docs/current/functions/window.html https://duckdb.org/docs/sql/window_functions.html I suggest to modify the fuzzer to add explicit frame clause to DuckDB SQL. This would be somewhere in makeDuckWindowSql in velox/exec/tests/AggregationFuzzer.cpp CC: @aditi-pandit Hi @mbasmanova DuckDB window default frame is RANGE, I test it manually. I still get same error when specify DuckDB SQL as SELECT c0, p0, s0, s1, s2, s3, s4, row_number, bit_xor(c0) OVER (partition by p0 order by s0, s1, s2, s3, s4 range between UNBOUNDED PRECEDING and current row) FROM tmp Maybe it is a bug in DuckDB. Consider creating an issue in https://github.com/duckdb/duckdb Maybe it is a bug in DuckDB. Consider creating an issue in https://github.com/duckdb/duckdb I got it. I think this PR is complete. @Yohahaha What should we do about the fuzzer failure? Would you like to follow-up with DuckDB folks to confirm whether there is a bug in DuckDB and perhaps get a fix? 
Would you like to follow-up with DuckDB folks to confirm whether there is a bug in DuckDB and perhaps get a fix? Sure, I would try to reproduce in DuckDB side and follow it. What should we do about the fuzzer failure? Could we separate window fuzzer from aggregate fuzzer? add a WindowFuzzerTest? Could we separate window fuzzer from aggregate fuzzer? add a WindowFuzzerTest? How would that help? Hi @mbasmanova I see CI failed in Facebook Internal Linter, how can I find error log? @Yohahaha Its not possible to share those signals at this time - fyi in this case though these failures are internal to meta. @Yohahaha Its not possible to share those signals at this time - fyi in this case though these failures are internal to meta. Dose this CI failure blocking merge? if yes, how can I fix it? CI failed due to Check function signatures, but this PR should not cause below problems. Signature removed: [root['to_big_endian_32'], root['from_big_endian_32'], root['to_big_endian_64'], root['from_big_endian_64'], root['from_base64url']] Found differences: {'dictionary_item_removed': [root['to_big_endian_32'], root['from_big_endian_32'], root['to_big_endian_64'], root['from_big_endian_64'], root['from_base64url']]} @kgpai Could you help to take a look? Hi @Yohahaha , Just rebase from main and push , the error should go away. I will have a fix out for that soon. Dose this CI failure blocking merge? if yes, how can I fix it? No someone at Meta will work on that and merge it - no need to worry about that.
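As a side note, the peer-row computation described earlier in this thread can be reproduced with a few lines of standalone C++ (the values are copied from the thread; this is only an arithmetic check, not the Velox window operator).

```cpp
#include <cstdint>
#include <iostream>

int main() {
  // Rows 570 and 585 share the same order-by keys, so they are peers and
  // receive the same frame result; row 872 is the last preceding non-peer row.
  int64_t row570 = 1100464885;
  int64_t row585 = 1279693845;
  int64_t prior = 377528396; // running bit_xor up to and including row 872

  int64_t peers = row570 ^ row585; // 231823072
  int64_t frame = prior ^ peers;   // 458356396, applied to both peer rows

  std::cout << peers << " " << frame << "\n";
  return 0;
}
```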
gharchive/pull-request
2023-03-30T09:26:50
2025-04-01T04:34:13.852016
{ "authors": [ "Yohahaha", "kgpai", "mbasmanova" ], "repo": "facebookincubator/velox", "url": "https://github.com/facebookincubator/velox/pull/4467", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
1676139733
help,AttributeError: 'NoneType' object has no attribute 'decode' os: ubuntu1~18.04 log in to the server without display remotely using ssh I meet error Python 3.8.13 (default, Oct 21 2022, 23:50:54) [GCC 11.2.0] :: Anaconda, Inc. on linux Type "help", "copyright", "credits" or "license" for more information. from animated_drawings import render render.start('./examples/config/mvc/interactive_window_example.yaml') /home/ubuntu/miniconda3/envs/animated_drawings/lib/python3.8/site-packages/glfw/init.py:912: GLFWError: (65544) b'X11: The DISPLAY environment variable is missing' warnings.warn(message, GLFWError) /home/ubuntu/miniconda3/envs/animated_drawings/lib/python3.8/site-packages/glfw/init.py:912: GLFWError: (65537) b'The GLFW library is not initialized' warnings.warn(message, GLFWError) Traceback (most recent call last): File "", line 1, in File "/home/ubuntu/AnimatedDrawings/animated_drawings/render.py", line 17, in start view = View.create_view(cfg.view) File "/home/ubuntu/AnimatedDrawings/animated_drawings/view/view.py", line 47, in create_view return WindowView(view_cfg) File "/home/ubuntu/AnimatedDrawings/animated_drawings/view/window_view.py", line 34, in init self._create_window(*cfg.window_dimensions) # pyright: ignore[reportGeneralTypeIssues] File "/home/ubuntu/AnimatedDrawings/animated_drawings/view/window_view.py", line 126, in _create_window logging.info(f'OpenGL Version: {GL.glGetString(GL.GL_VERSION).decode()}') # pyright: ignore[reportGeneralTypeIssues] AttributeError: 'NoneType' object has no attribute 'decode' Try following the steps in this comment: https://github.com/facebookresearch/AnimatedDrawings/issues/99#issue-1669192538 Try following the steps in this comment: #99 (comment) Run the following commands: sudo apt-get install libosmesa6-dev freeglut3-dev sudo apt-get install libglfw3-dev libgles2-mesa-dev sudo apt-get install libosmesa6 export PYOPENGL_PLATFORM=osmesa conda install -c conda-forge libstdcxx-ng conda install -c conda-forge libstdcxx-ng=12 export DISPLAY=":1" or export DISPLAY=":0" Still failed, I have executed all steps from scratch Does headless rendering with mesa work? Same promblem. How to solve it? Hmm, I got this problem too, after following all steps in #99 hey , and now the problem has been resolved ? hey , and now the problem has been resolved ? @tianruci open file ./examples/annotations_to_animation.py add one line code : 'view': {'USE_MESA':True}, # use MESA for headless server in here # create mvc config mvc_cfg = { 'view': {'USE_MESA':True}, # use MESA for headless server 'scene': {'ANIMATED_CHARACTERS': [animated_drawing_dict]}, # add the character to the scene ... }
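Putting the workaround from this thread together, a minimal headless-rendering sketch (assumes the MESA/OSMesa packages listed above are installed; the config path is a placeholder):

```python
# Minimal sketch of the headless workaround discussed above.
# Assumes the MESA/OSMesa packages from this thread are installed;
# the config path is a placeholder, not a real file in the repo.
import os

os.environ["PYOPENGL_PLATFORM"] = "osmesa"  # render off-screen instead of opening an X11 window

from animated_drawings import render

# When building the mvc config programmatically (as in examples/annotations_to_animation.py),
# the fix above is equivalent to adding: mvc_cfg['view'] = {'USE_MESA': True}
render.start("path/to/your_mvc_config.yaml")
```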
gharchive/issue
2023-04-20T07:14:34
2025-04-01T04:34:13.864316
{ "authors": [ "ARDUJS", "JiepengTan", "hjessmith", "howardgriffin", "httzipdev", "tianruci" ], "repo": "facebookresearch/AnimatedDrawings", "url": "https://github.com/facebookresearch/AnimatedDrawings/issues/128", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1210817686
Embedding tables smaller than number of unique tokens The number of rows in an embedding table should be the same as the number of unique categorical tokens for that specific feature. However, I see that the embedding tables are smaller than the number of tokens. Here are the numbers of unique tokens and the embedding table sizes per feature (for the Kaggle dataset).

|        | C1 | C2 | C3 | C4 | C5 | C6 | C7 | C8 | C9 | C10 | C11 | C12 | C13 | C14 | C15 | C16 | C17 | C18 | C19 | C20 | C21 | C22 | C23 | C24 | C25 | C26 |
| -- | -- | -- | -- | -- | -- | -- | -- | -- | -- | -- | -- | -- | -- | -- | -- | -- | -- | -- | -- | -- | -- | -- | -- | -- | -- | -- |
| unique | 1460 | 583 | 10131226 | 2202607 | 305 | 23 | 12517 | 633 | 3 | 93145 | 5683 | 8351592 | 3194 | 27 | 14992 | 5461305 | 10 | 5652 | 2172 | 3 | 7046546 | 17 | 15 | 286180 | 104 | 142571 |
| ln_emb | 1459 | 583 | 6373320 | 1977439 | 305 | 24 | 12513 | 633 | 3 | 92719 | 5681 | 5666264 | 3193 | 27 | 14986 | 4209367 | 10 | 5652 | 2173 | 4 | 5058596 | 18 | 15 | 282062 | 105 | 141594 |
| diff | 1 | 0 | 3757906 | 225168 | 0 | -1 | 4 | 0 | 0 | 426 | 2 | 2685328 | 1 | 0 | 6 | 1251938 | 0 | 0 | -1 | -1 | 1987950 | -1 | 0 | 4118 | -1 | 977 |

As the DLRM paper (https://arxiv.org/pdf/1906.00091.pdf) section 5.1 mentions that the model size is 540M parameters, this suggests that the values in ln_emb are lower than expected. I am leaving max-ind-range at its default value, which is -1. Can you explain how this is possible? I am using the following command line.
python dlrm_s_pytorch.py --arch-sparse-feature-size=16 --arch-mlp-bot="13-512-256-64-16" --arch-mlp-top="512-256-1" --data-generation=dataset --data-set=kaggle --raw-data-file=kaggle/train.txt --processed-data-file=kaggle/kaggleAdDisplayChallenge_processed.npz --loss-function=bce --round-targets=True --learning-rate=0.1 --mini-batch-size=1024 --print-freq=1024 --print-time --test-mini-batch-size=16384 --test-freq=10240 --use-gpu --test-num-workers=16 --mlperf-logging --nepochs 4
How exactly are you counting the unique indices (categorical tokens)? Is this across training, validation and test sets? @mnaumovfb: I counted the number of unique tokens by loading the dataset (train.txt: 7 days of data) into a pandas dataframe and counting unique values for each column. This is for the entire dataset (all 7 days). This many tokens would indeed lead to 540M parameters (considering embedding vector length 16) for the Kaggle dataset. I understand that ln_emb in the DLRM code only reflects the unique tokens from the train dataset (first 6 days of data). But this leads to a model with only around 380M parameters. How is the DLRM paper reporting that the Kaggle model has 540M parameters then? The model is built only using the train dataset, right? First, the text file representing the data set has empty spaces/tabs in it for missing features. These features are transformed into token "0". This probably takes care of the "-1" discrepancies in your table. Second, in our particular case we take the training set and actually use the first 6 days for training, while splitting the last 7th day into validation and test sets. So, it's possible that not all of the tokens you see in the train.txt file might be counted towards the # of embeddings. That's why in some cases # of unique tokens > # of vectors in the embedding table. @mnaumovfb In that case, can you recheck the model size for the Kaggle dataset reported in the paper? It should not be 540 million.
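For reference, a small sketch of the counting approach described above (the column naming and the tab-separated Criteo format are assumptions for illustration, not taken from this thread):

```python
# Sketch of how the "unique" row in the table above could be produced:
# load the Criteo Kaggle train.txt (tab-separated: label + 13 dense + 26 categorical
# columns) and count distinct values per categorical column. Column names are assumed.
import pandas as pd

cols = ["label"] + [f"I{i}" for i in range(1, 14)] + [f"C{i}" for i in range(1, 27)]
df = pd.read_csv("kaggle/train.txt", sep="\t", names=cols)

unique_counts = {c: df[c].nunique() for c in cols if c.startswith("C")}
print(unique_counts)

# Note: as pointed out in the thread, missing values are mapped to a "0" token by the
# DLRM preprocessing, and only the first 6 days are used to build the embedding tables,
# so these raw counts will not match ln_emb exactly.
```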
gharchive/issue
2022-04-21T10:15:56
2025-04-01T04:34:13.923885
{ "authors": [ "gopikrishnajha", "mnaumovfb" ], "repo": "facebookresearch/dlrm", "url": "https://github.com/facebookresearch/dlrm/issues/233", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1040793233
Minor cleanup to "cleaning APIs" and "batching APIs" Summary:
- Mark batching/unbatching as experimental API
- fillna -> fill_null
- dropna -> drop_null

This is to be consistent with the PyArrow API (plus, today's behavior only drops null but not nan). Also note that the semantics of df.drop_duplicates(subset) are similar to the following SQL aggregation: SELECT col1, col2, ARBITRARY(col3), ARBITRARY(col4) FROM ... GROUP BY col1, col2, where col1, col2 are in subset and col3, col4 are the remaining columns.
Differential Revision: D32051423
Landed as https://github.com/facebookresearch/torcharrow/commit/b9638e6b5c262081b3cb98e0ade663e33b34605a
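As an illustration of the drop_duplicates(subset) semantics above, here is a small pandas analogy (pandas keeps the first row per group, which is one valid "arbitrary" choice; this is not the TorchArrow API):

```python
# Pandas analogy for the SQL shown above: keep one arbitrary (here: first) row
# per (col1, col2) group, carrying along the remaining columns.
import pandas as pd

df = pd.DataFrame({
    "col1": [1, 1, 2],
    "col2": ["a", "a", "b"],
    "col3": [10, 11, 12],
    "col4": ["x", "y", "z"],
})

deduped = df.drop_duplicates(subset=["col1", "col2"])
# Equivalent grouping view of the same idea:
grouped = df.groupby(["col1", "col2"], as_index=False).first()
print(deduped)
print(grouped)
```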
gharchive/pull-request
2021-11-01T05:48:01
2025-04-01T04:34:13.982312
{ "authors": [ "wenleix" ], "repo": "facebookresearch/torcharrow", "url": "https://github.com/facebookresearch/torcharrow/pull/47", "license": "BSD-3-Clause", "license_type": "permissive", "license_source": "github-api" }
427605525
Include ownProps into state mapper Better matches the redux connect() api. Thanks @dakota! Will merge and release when home later tonight 🎉
gharchive/pull-request
2019-04-01T09:37:29
2025-04-01T04:34:14.014510
{ "authors": [ "dakota", "fahad19" ], "repo": "fahad19/proppy", "url": "https://github.com/fahad19/proppy/pull/41", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
2358230242
feat: wrap tracebacks in JSON for easy detection Right now, multi-line tracebacks are spread out and ingested independently into the underlying logging system. This makes it hard to retrieve the entirety of a traceback without some hacky heuristics. To enable easy searching and retrieval of tracebacks, we catch all uncaught exceptions and log them as JSON; this way the whole traceback is in one single JSON line, which can be extracted by looking at the traceback key. An alternative would be to print the traceback in unicode_escape encoding and then decode the string during log retrieval. This would remove the need for the JSON wrapping. Signed-off-by: squat lserven@gmail.com weird, mypy is wrong here: https://github.com/fal-ai/fal/actions/runs/9567480026/job/26375314286?pr=245#step:4:96 traceback.format_exception accepts any Exception and the value and tb arguments are optional: https://docs.python.org/3/library/traceback.html#traceback.format_exception In my manual tests the tracebacks are logged perfectly in the desired format. Adding a # type: ignore for the tests to pass. Weirdly, mypy doesn't complain in my local editor.
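A minimal sketch of the approach described above (not the actual fal implementation; the "traceback" key name follows the description):

```python
# Sketch: log uncaught exceptions as a single JSON line so the whole traceback
# can be found under one "traceback" key. Not the actual fal code.
import json
import sys
import traceback


def _json_excepthook(exc_type, exc_value, exc_tb):
    # The three-argument form of format_exception works on all Python 3 versions;
    # on 3.10+ you may also pass just the exception instance.
    record = {
        "level": "error",
        "traceback": "".join(traceback.format_exception(exc_type, exc_value, exc_tb)),
    }
    print(json.dumps(record), file=sys.stderr)


sys.excepthook = _json_excepthook
```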
gharchive/pull-request
2024-06-17T20:48:23
2025-04-01T04:34:14.079204
{ "authors": [ "squat" ], "repo": "fal-ai/fal", "url": "https://github.com/fal-ai/fal/pull/245", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
705169986
Fork using Vanara libs supporting lots of .NET versions Not really an issue, just an offer. I have forked the code and changed out the underlying COM classes for those in Vanara.PInvoke.FirewallApi. This lib supports .NET 2-4.x, .NET Core 2.0-3.1, and .NET Std 2.0. I then added those same supported options to the WindowsFirewallHelper assembly. If you're interested, I'm happy to help add in and support in your master. Hey David; happy to see you here; been using your TaskScheduler package for ages now. Thanks for the port. However, forgive me for not seeing how these changes actually solve any issue or bring something new to the table. This library is on NetStandard 2 now and therefore is useable by NetCore and other libraries targeting NetStandard. .Net2 support is interesting, especially since this library should in theory support Windows XP, however, I was thinking of removing this feature and all links to the legacy API anyway. All this along with the fact that this change unnecessary adds two new dependencies to the project (Vanara.PInvoke.FirewallApi and Vanara.PInvoke.Shared) while the whole ComPinvoke code for this library is quite small, makes me hesitant. Did you try it on NetCore2? Since CustomMarshaller and therefore EnumeratorToEnumVariantMarshaler is missing in NetCore2 and I can see that Vanara.PInvoke.FirewallApi actually uses CustomMarshaller for enumeration. Can Vanara helps with #32 is somehow? However, forgive me for not seeing how these changes actually solve any issue or bring something new to the table. You're right, nothing much new here. In fact, I started the port mostly so that I wouldn't have to write a ton of unit tests for my FirewallApi COM extraction. I have got this working earlier .NET Core versions. You can find the implementation of EnumeratorToEnumVariantMarshaler here which you could bring right into your code base. It does simplify all the collection COM classes. Feel free to just steal that part. You may also want to look at some of my optimizations in your helper classes. There were quite a few cases where a lot of code could be wrapped into a single Linq call. Can Vanara helps with #32 is somehow? Vanara does have multi-version support for the EventLog class in Vanara.Compatibility.EventLog which you could use similar to what I did for my TaskScheduler package. I was thinking of removing this feature and all links to the legacy API anyway. I tried doing this for TaskScheduler and had significant backlash last year. There were many still using both the legacy interface on XP and .NET 2.0. Can you make this work on remote machines also? Or just local? I have searched and cannot figure out how to use on remote. Allow me to keep this issue open so I can take another deeper look into the changes and the Vanara library. Especially the EventLog package. Can you make this work on remote machines also? Or just local? I have searched and cannot figure out how to use on remote. Please consider opening a new issue about this. As far as I know, managing remote machines' firewall rules is possible through WMI and this library was only a COM wrapper since now. Adding this feature requires a lot of abstraction and obviously new pieces of code to works well. I will keep the new issue open as a feature request. Allow me to keep this issue open so I can take another deeper look into the changes and the Vanara library. Especially the EventLog package. Can you make this work on remote machines also? Or just local? 
I have searched and cannot figure out how to use on remote. Please consider opening a new issue about this. As far as I know, managing remote machines' firewall rules is possible through WMI and this library was only a COM wrapper since now. Adding this feature requires a lot of abstraction and obviously new pieces of code to works well. I will keep the new issue open as a feature request. Moving to new issue
gharchive/issue
2020-09-20T19:28:21
2025-04-01T04:34:14.088718
{ "authors": [ "SCLD-AFrey", "dahall", "falahati" ], "repo": "falahati/WindowsFirewallHelper", "url": "https://github.com/falahati/WindowsFirewallHelper/issues/41", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
2622047137
rollback Save somewhere, probably on the server, the currently running version of the app and the previous one. When the user decides to roll back, find the corresponding distfile on the remote host (if it is not present, stop here), then run the project install using that version and restart the services. Or, maybe simpler: list all available files on the host and let the user choose the one to roll back to (yes, this). added in 738abdeabf07b0738eb5c13ecb67a509cbcb5ee7
gharchive/issue
2024-10-29T18:36:41
2025-04-01T04:34:14.090148
{ "authors": [ "Tobi-De" ], "repo": "falcopackages/fujin", "url": "https://github.com/falcopackages/fujin/issues/9", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
2120994999
update(falcosidekick): update README.md What type of PR is this? Uncomment one (or more) /kind <> lines: /kind bug /kind cleanup /kind design /kind documentation /kind failing-test /kind feature If this PR will release a new chart version please make sure to also uncomment the following line: /kind chart-release Any specific area of the project related to this PR? Uncomment one (or more) /area <> lines: /area falco-chart /area falco-exporter-chart /area falcosidekick-chart /area event-generator-chart /area k8s-metacollector What this PR does / why we need it: Update the README.md file for falcosidekick chart. Which issue(s) this PR fixes: Fixes # Special notes for your reviewer: Checklist [x] Chart Version bumped [x] Variables are documented in the README.md [x] CHANGELOG.md updated [APPROVALNOTIFIER] This PR is NOT APPROVED This pull-request has been approved by: alacuku Once this PR has been reviewed and has the lgtm label, please assign cpanato for approval. For more information see the Kubernetes Code Review Process. The full list of commands accepted by this bot can be found here. Needs approval from an approver in each of these files: OWNERS Approvers can indicate their approval by writing /approve in a comment Approvers can cancel approval by writing /approve cancel in a comment LGTM label has been added. Git tree hash: e60cb897ff25a1dc7508a58d9b5814f1fbc404be [APPROVALNOTIFIER] This PR is APPROVED This pull-request has been approved by: alacuku, Issif The full list of commands accepted by this bot can be found here. The pull request process is described here Needs approval from an approver in each of these files: OWNERS [Issif] Approvers can indicate their approval by writing /approve in a comment Approvers can cancel approval by writing /approve cancel in a comment
gharchive/pull-request
2024-02-06T14:58:52
2025-04-01T04:34:14.102991
{ "authors": [ "alacuku", "poiana" ], "repo": "falcosecurity/charts", "url": "https://github.com/falcosecurity/charts/pull/615", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
1300723043
update(docs): add me to owners. What type of PR is this? /kind documentation Any specific area of the project related to this PR? /area docs What this PR does / why we need it: As privately discussed with @dwindsor, i am opening this PR to propose myself as a driverkit maintainer. In the latest 2-3 months, i have been as active as possible, bringing to driverkit: arm64 support improved CI with integration tests lots of bug fixes lots of small and not-so-small refactorings just (but imo this is the best part :D) ideas to play around I am going to publicly propose myself during the next community call, then leave this open for a public discussion, for a week or so. Which issue(s) this PR fixes: Fixes # Special notes for your reviewer: Does this PR introduce a user-facing change?: update(docs): add fededp to owners. [APPROVALNOTIFIER] This PR is NOT APPROVED This pull-request has been approved by: FedeDP To complete the pull request process, please assign fntlnz after the PR has been reviewed. You can assign the PR to them by writing /assign @fntlnz in a comment when ready. The full list of commands accepted by this bot can be found here. Needs approval from an approver in each of these files: OWNERS Approvers can indicate their approval by writing /approve in a comment Approvers can cancel approval by writing /approve cancel in a comment [APPROVALNOTIFIER] This PR is APPROVED This pull-request has been approved by: FedeDP, leodido The full list of commands accepted by this bot can be found here. The pull request process is described here Needs approval from an approver in each of these files: OWNERS [leodido] Approvers can indicate their approval by writing /approve in a comment Approvers can cancel approval by writing /approve cancel in a comment LGTM label has been added. Git tree hash: 9358ae782b0a3dc9eda1307ab17436bcbb825e2f :( @leodido i talked with David; as far as i know, he tried to contact you! /unhold
gharchive/pull-request
2022-07-11T13:35:08
2025-04-01T04:34:14.113284
{ "authors": [ "FedeDP", "poiana" ], "repo": "falcosecurity/driverkit", "url": "https://github.com/falcosecurity/driverkit/pull/181", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
1816015690
wip: docs: finalize the definition for Falco What type of PR is this? Uncomment one (or more) /kind <> lines: /kind bug /kind cleanup /kind design /kind user-interface /kind content /kind translation Any specific area of the project related to this PR? Uncomment one (or more) /area <> lines: /area blog /area documentation /area videos What this PR does / why we need it: See #1003 Which issue(s) this PR fixes: Fixes # Special notes for your reviewer: [APPROVALNOTIFIER] This PR is APPROVED This pull-request has been approved by: leogr The full list of commands accepted by this bot can be found here. The pull request process is described here Needs approval from an approver in each of these files: OWNERS [leogr] Approvers can indicate their approval by writing /approve in a comment Approvers can cancel approval by writing /approve cancel in a comment [APPROVALNOTIFIER] This PR is APPROVED This pull-request has been approved by: Issif, leogr The full list of commands accepted by this bot can be found here. The pull request process is described here Needs approval from an approver in each of these files: OWNERS [Issif,leogr] Approvers can indicate their approval by writing /approve in a comment Approvers can cancel approval by writing /approve cancel in a comment
gharchive/pull-request
2023-07-21T15:38:26
2025-04-01T04:34:14.122829
{ "authors": [ "leogr", "poiana" ], "repo": "falcosecurity/falco-website", "url": "https://github.com/falcosecurity/falco-website/pull/1069", "license": "CC-BY-4.0", "license_type": "permissive", "license_source": "github-api" }
2559700052
fix(docs): remove references to broken -o variant What type of PR is this? Uncomment one (or more) /kind <> lines: /kind bug Any specific area of the project related to this PR? Uncomment one (or more) /area <> lines: /area blog /area documentation What this PR does / why we need it: Sadly, the -o parsing that allows you to say something like -o 'append_output[]={"match": {"source": "syscall"}, "extra_fields": ["evt.hostname"], "extra_output": "on CPU %evt.cpu"}' has a bug and I noticed after release 🤦 . I'm fixing the issue but will need to remove references to it from the docs until a patch release. Which issue(s) this PR fixes: Fixes # Special notes for your reviewer: [APPROVALNOTIFIER] This PR is APPROVED This pull-request has been approved by: LucaGuerra The full list of commands accepted by this bot can be found here. The pull request process is described here Needs approval from an approver in each of these files: content/OWNERS [LucaGuerra] Approvers can indicate their approval by writing /approve in a comment Approvers can cancel approval by writing /approve cancel in a comment LGTM label has been added. Git tree hash: 628f1bbc5b5b06f31ceb794d6c9c48e41a406a80 [APPROVALNOTIFIER] This PR is APPROVED This pull-request has been approved by: leogr, LucaGuerra The full list of commands accepted by this bot can be found here. The pull request process is described here Needs approval from an approver in each of these files: content/OWNERS [LucaGuerra,leogr] Approvers can indicate their approval by writing /approve in a comment Approvers can cancel approval by writing /approve cancel in a comment
gharchive/pull-request
2024-10-01T16:22:27
2025-04-01T04:34:14.131860
{ "authors": [ "LucaGuerra", "poiana" ], "repo": "falcosecurity/falco-website", "url": "https://github.com/falcosecurity/falco-website/pull/1391", "license": "CC-BY-4.0", "license_type": "permissive", "license_source": "github-api" }
1020140465
Rule update more user known macros What type of PR is this? /kind rule-update Any specific area of the project related to this PR? /area rules What this PR does / why we need it: we use user_known_* macros extensively to adapt falco rules to our environment. however, certain rule conditions do not have such a macro. NONE This is exactly what I need to remove all the containerd Modify Shell Configuration File warnings. Thanks for your pull request. Before we can look at it, you'll need to add a 'DCO signoff' to your commits. :memo: Please follow instructions in the contributing guide to update your commits with the DCO Full details of the Developer Certificate of Origin can be found at developercertificate.org. The list of commits missing DCO signoff: bbd6c9d https://github.com/falcosecurity/falco/pull/1750#pullrequestreview-788812109 Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository. I understand the commands that are listed here. [APPROVALNOTIFIER] This PR is NOT APPROVED This pull-request has been approved by: kunfoo To complete the pull request process, please ask for approval from kaizhe after the PR has been reviewed. The full list of commands accepted by this bot can be found here. Needs approval from an approver in each of these files: rules/OWNERS Approvers can indicate their approval by writing /approve in a comment Approvers can cancel approval by writing /approve cancel in a comment Thanks for your pull request. Before we can look at it, you'll need to add a 'DCO signoff' to your commits. :memo: Please follow instructions in the contributing guide to update your commits with the DCO Full details of the Developer Certificate of Origin can be found at developercertificate.org. The list of commits missing DCO signoff: bbd6c9d https://github.com/falcosecurity/falco/pull/1750#pullrequestreview-788812109 b7d2147 Merge branch 'rule_update_more_user_known_macros' of github.com:kunfoo/falco into rule_update_more_user_known_macros Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository. I understand the commands that are listed here. @kunfoo: Adding label do-not-merge/contains-merge-commits because PR contains merge commits, which are not allowed in this repository. Use git rebase to reapply your commits on top of the target branch. Detailed instructions for doing so can be found here. Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository. Hey @kunfoo, what this the current state of this? If we still want this, you may need to remove the merge commit and rebase your branch. That's a requirement for this repo. /milestone 0.32.0 Issues go stale after 90d of inactivity. Mark the issue as fresh with /remove-lifecycle stale. Stale issues rot after an additional 30d of inactivity and eventually close. If this issue is safe to close now please do so with /close. Provide feedback via https://github.com/falcosecurity/community. /lifecycle stale LGTM label has been added. Git tree hash: 3ba04dbba9e29bbe079a6e2f5d7e56a46303323f /milestone 0.33.0 Stale issues rot after 30d of inactivity. Mark the issue as fresh with /remove-lifecycle rotten. 
Rotten issues close after an additional 30d of inactivity. If this issue is safe to close now please do so with /close. Provide feedback via https://github.com/falcosecurity/community. /lifecycle rotten Rotten issues close after 30d of inactivity. Reopen the issue with /reopen. Mark the issue as fresh with /remove-lifecycle rotten. Provide feedback via https://github.com/falcosecurity/community. /close @poiana: Closed this PR. In response to this: Rotten issues close after 30d of inactivity. Reopen the issue with /reopen. Mark the issue as fresh with /remove-lifecycle rotten. Provide feedback via https://github.com/falcosecurity/community. /close Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
gharchive/pull-request
2021-10-07T14:53:59
2025-04-01T04:34:14.152865
{ "authors": [ "jasondellaluce", "kunfoo", "leogr", "mabushey", "poiana" ], "repo": "falcosecurity/falco", "url": "https://github.com/falcosecurity/falco/pull/1750", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
265571498
No migrations?! You should have at least a 0001_initial.py migration, did you forget to push it?! Is the convention to push the migration(s)? I was planning to let people run the initial migration themselves with manage.py. Ah yes, right! When you pull down the project you should only have to run "manage.py migrate", but not "makemigrations" :) Ah OK, I'll look into it, thanks! Fixed by https://github.com/fallen/Pytition/commit/575d7ead2edc55f56445d3227ecc915ccdd76bfd Thanks!
gharchive/issue
2017-10-15T13:18:26
2025-04-01T04:34:14.156160
{ "authors": [ "fallen", "jherve" ], "repo": "fallen/Pytition", "url": "https://github.com/fallen/Pytition/issues/5", "license": "bsd-3-clause", "license_type": "permissive", "license_source": "bigquery" }
143064764
Library of Haskell functions in ES2015 (similar to this library) I recently published a JS library with some similar functionality to this library, if not as comprehensive, and in ES2015. So it's still experimental and incomplete, but it may be of interest to people: maryamyriameliamurphies.js Thank you. It looks interesting, and worth a look, and I'm going to blame you if I forevermore hear the word 'Applicative' in an Irish brogue. JFTR, this is not a library, but a specification, although there are a number of related libraries nearby. "Blame" or "thank"? 🍀 Closing this as I'm not entirely sure what to do with this? :sheep: I'd like to think we could add it to the implementations file, but I'm not exactly clear on the API the data types provide. I'd also like it if we could come together on a similar API for most things. @sjsyrek thoughts? @joneshf I wrote maryamyriameliamurphies to imitate Haskell functions, so the API is really just a bunch of functions (listed in the README). I would definitely like to make the documentation clearer, though, if it's confusing. Maybe what's strange in a JS context is that I'm not exporting the data types themselves but only the operations over them? In that sense, my API could be incompatible with the Fantasy Land spec, since I'm adhering to the Haskell way of doing things and not just implementing algebraic data types in general.
gharchive/issue
2016-03-23T20:00:33
2025-04-01T04:34:14.182014
{ "authors": [ "CrossEye", "SimonRichardson", "joneshf", "sjsyrek" ], "repo": "fantasyland/fantasy-land", "url": "https://github.com/fantasyland/fantasy-land/issues/129", "license": "mit", "license_type": "permissive", "license_source": "bigquery" }
523295900
Gui dev - initial configuration This is very cool. And it looks cool too. So all the settings are disabled by default. Is some kind of initial preset, as an INI file or something like that, planned? Most likely everything will be disabled by default. The user ticks the checkboxes and exports a preset themselves. Something like that. All that's left is to live to see the day when at least the mockup gets finished. Because after yesterday it keeps throwing errors...
gharchive/pull-request
2019-11-15T07:09:30
2025-04-01T04:34:14.183455
{ "authors": [ "farag2", "scriptingstudio" ], "repo": "farag2/Windows-10-Setup-Script", "url": "https://github.com/farag2/Windows-10-Setup-Script/pull/17", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
237765896
An invalid JSON request still returns a result. Enter any invalid JSON and send the request, and you still get a normal response. For example, an invalid JSON such as {"query":{"match2_all":{}}} returns the same result as {"query":{"match_all":{}}}. The expected behaviour is to return the error given by ES as-is, e.g.: {"error":{"root_cause":[{"type":"parsing_exception","reason":"no [query] registered for [match2_all]","line":4,"col":23}],"type":"parsing_exception","reason":"no [query] registered for [match2_all]","line":4,"col":23},"status":400} hi! You selected the POST query mode; GET mode by default does not send the search statement from the textarea. If GET doesn't carry it, how do you search? The search request is a GET, and it has to carry a body... because queries can be very complex~ At the moment the HTTP GET request does not carry a body, which is inconsistent with ES's behaviour~ Kibana supports it~ Then I'll change it in the next version. For this version please use the POST query mode; I kept it consistent with the head plugin. Feel free to raise more questions... let's improve it together
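For reference, a small sketch of a search request that sends a GET with a JSON body, which is what Elasticsearch expects (URL and index name are placeholders):

```python
# Sketch: Elasticsearch's _search endpoint accepts GET requests with a JSON body.
# URL and index are placeholders; `requests` forwards the json= body even for GET.
import requests

query = {"query": {"match_all": {}}}
resp = requests.get("http://localhost:9200/my-index/_search", json=query)
print(resp.status_code)
print(resp.json())

# An invalid query such as {"query": {"match2_all": {}}} should surface ES's own
# parsing_exception in resp.json() rather than being silently replaced.
```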
gharchive/issue
2017-06-22T08:17:14
2025-04-01T04:34:14.196069
{ "authors": [ "farmerx", "wclssdn" ], "repo": "farmerx/ElasticHD", "url": "https://github.com/farmerx/ElasticHD/issues/11", "license": "mit", "license_type": "permissive", "license_source": "bigquery" }
1848967662
Add agent field Description The Agent field provides information about who made the respective log entry. This PR adds that field to the data we ingest from the Log Entries API Type of change [ ] Bug fix [x] New feature [ ] Breaking change Related issues nil Migration notes nil Extra info nil I have read the CLA Document and I hereby sign the CLA recheck FAIL test/pagerduty.test.ts ● Test suite failed to run test/pagerduty.test.ts:27:11 - error TS2741: Property 'agent' is missing in type '{ id: string; type: string; summary: string; self: string; html_url: string; created_at: string; incident: { id: string; type: string; summary: string; self: string; html_url: string; }; service: { id: string; type: string; summary: string; self: string; html_url: string; }; }' but required in type 'LogEntry'. 27 const logEntry: LogEntry = { ~~~~~~~~ src/pagerduty.ts:79:12 79 readonly agent: PagerdutyObject; ~~~~~ 'agent' is declared here. https://github.com/faros-ai/airbyte-connectors/actions/runs/5861517768/job/15898371992?pr=1110#step:6:1787 Nice one @patrobinson 👏 Is this one good to go @tovbinm?
gharchive/pull-request
2023-08-14T04:18:04
2025-04-01T04:34:14.200348
{ "authors": [ "catkins", "patrobinson", "tovbinm" ], "repo": "faros-ai/airbyte-connectors", "url": "https://github.com/faros-ai/airbyte-connectors/pull/1110", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
1397758693
Implement an in-house component and remove the third-party dependency. The current third-party component only supports simple structure conversion and does not support composite type conversion. Typically, when converting between a DTO and a DO, the DTO is a flat structure while the DO is a composite structure, and conversion in both directions needs to be supported. For example:
type TaskDO struct {
	Id     int
	Client ClientVO
	Status eumTaskType.Enum
	Data   collections.Dictionary[string, string]
}
type TaskDTO struct {
	Id         int
	ClientId   int64
	ClientIp   string
	ClientName string
	Status     eumTaskType.Enum
	Data       collections.Dictionary[string, string]
}
type ClientVO struct {
	Id   int64
	Ip   string
	Name string
}
As you can see, TaskDO is a composite structure that contains a Client field, and ClientVO is a struct type. TaskDTO contains three primitive fields: ClientId, ClientIp, ClientName. During conversion they should map to each other as follows:
TaskDO.Client.Id corresponds to TaskDTO.ClientId
TaskDO.Client.Ip corresponds to TaskDTO.ClientIp
TaskDO.Client.Name corresponds to TaskDTO.ClientName
Besides single-object conversion, converting slices also needs to be supported.
Right now this feature is missing, so values have to be assigned manually:
func (repository taskGroupRepository) toListTaskEO(lstPO collections.List[model.TaskPO]) collections.List[vo.TaskEO] {
	var lst collections.List[vo.TaskEO]
	lstPO.Select(&lst, func(item model.TaskPO) any {
		eo := mapper.Single[vo.TaskEO](&item)
		eo.Client.Id = item.ClientId
		eo.Client.Ip = item.ClientIp
		eo.Client.Name = item.ClientName
		return eo
	})
	return lst
}
nice, @gelin9527 finish
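To make the requested mapping concrete, here is a language-agnostic sketch of the prefix-based flatten idea, written in Python purely for illustration (it is not the mapper API):

```python
# Illustration of the DTO<->DO mapping described above: nested fields like
# Client.Id map to flat fields like ClientId by joining the path segments.
# This is a concept sketch, not the farseer-go mapper implementation.
def flatten(obj: dict, prefix: str = "") -> dict:
    flat = {}
    for key, value in obj.items():
        name = f"{prefix}{key}"
        if isinstance(value, dict):
            flat.update(flatten(value, prefix=name))  # Client + Id -> ClientId
        else:
            flat[name] = value
    return flat


task_do = {"Id": 1, "Client": {"Id": 100, "Ip": "10.0.0.1", "Name": "worker-1"}}
print(flatten(task_do))
# {'Id': 1, 'ClientId': 100, 'ClientIp': '10.0.0.1', 'ClientName': 'worker-1'}
```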
gharchive/issue
2022-10-05T13:02:09
2025-04-01T04:34:14.203476
{ "authors": [ "steden" ], "repo": "farseer-go/mapper", "url": "https://github.com/farseer-go/mapper/issues/1", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1429901950
When the value is 0, the form field is not assigned. When the value is 0, the form field is not assigned. It is caused by this spot in the fs-form component.
gharchive/issue
2022-10-31T14:02:23
2025-04-01T04:34:14.207631
{ "authors": [ "lidongcheng88" ], "repo": "fast-crud/fast-crud", "url": "https://github.com/fast-crud/fast-crud/issues/108", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
669755248
AttributeError: type object 'Tensor' has no attribute 'as_subclass' I am having trouble using fastai2; the problem is in line 240 of the torch_core.py file: if not hasattr(torch,'as_subclass'): setattr(torch, 'as_subclass', torch.Tensor.as_subclass) Do you have PyTorch 1.6 installed? torch.Tensor.as_subclass is introduced in 1.6 (https://github.com/pytorch/pytorch/releases). The latest fastai2 requires PyTorch 1.6, see the forum post. Do you have PyTorch 1.6 installed? torch.Tensor.as_subclass is introduced in 1.6 (https://github.com/pytorch/pytorch/releases). The latest fastai2 requires PyTorch 1.6, see the forum post. Oh thanks, it worked! Did not know that!
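A quick, hedged way to check for the missing API before importing fastai2 (just a sketch, not part of fastai2 itself):

```python
# Sketch: verify the installed PyTorch is new enough for fastai2's use of
# Tensor.as_subclass (added in PyTorch 1.6).
import torch

print(torch.__version__)
if not hasattr(torch.Tensor, "as_subclass"):
    raise RuntimeError(
        "PyTorch >= 1.6 is required: torch.Tensor.as_subclass is missing "
        f"(found {torch.__version__}). Upgrade with `pip install -U torch`."
    )
```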
gharchive/issue
2020-07-31T12:15:33
2025-04-01T04:34:14.220380
{ "authors": [ "davanstrien", "xettrisomeman" ], "repo": "fastai/fastai2", "url": "https://github.com/fastai/fastai2/issues/444", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
591381246
fixes #228 and adds comment at end of notebook I basically copied some sections from fastbook chapter 11, I hope that is not a problem 😅 Thanks! Though I will rework all of that in the process of rewriting it. You should rather check done tutorials ;) Alright! So I should focus on tutorials that have a green checkmark here? Yup
gharchive/pull-request
2020-03-31T19:53:07
2025-04-01T04:34:14.222275
{ "authors": [ "lgvaz", "sgugger" ], "repo": "fastai/fastai2", "url": "https://github.com/fastai/fastai2/pull/229", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
1603290662
No type hints for id in /user routes Maybe a feature request, not a bug. FastAPI-Users routes look like: @router.get( "/{id}", response_model=user_schema, dependencies=[Depends(get_current_superuser)], name="users:user", responses={ status.HTTP_401_UNAUTHORIZED: { "description": "Missing token or inactive user.", }, status.HTTP_403_FORBIDDEN: { "description": "Not a superuser.", }, status.HTTP_404_NOT_FOUND: { "description": "The user does not exist.", }, }, ) async def get_user(user=Depends(get_user_or_404)): return user_schema.from_orm(user) Where get_user_or_404 is: async def get_user_or_404( id: Any, user_manager: BaseUserManager[models.UP, models.ID] = Depends(get_user_manager), ) -> models.UP: try: parsed_id = user_manager.parse_id(id) return await user_manager.get(parsed_id) except (exceptions.UserNotExists, exceptions.InvalidID) as e: raise HTTPException(status_code=status.HTTP_404_NOT_FOUND) from e I understand, of course, why id is typed as Any, but the inability to set a type for the id breaks various consumers of the OpenAPI spec (for instance, fuzzers like schemathesis). It would be super nice to parameterize the dependency such that when the ORM is initialized the type can be passed to this dependency. Hmm, I see. Could it be acceptable to define it as a string? Since it comes from the URL, it's quite sensible to always consider as a string. Then, it's passed to the parser that'll transform it to the right type. It would be acceptable to define it as a string, for me anyway
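Until the id type can be parameterized in the library, one possible workaround (a sketch assuming UUID-based IDs; module and dependency names are illustrative) is to register an extra route with a concretely typed path parameter:

```python
# Sketch: register an extra route whose `id` path parameter has a concrete type
# (UUID here) so OpenAPI consumers such as schemathesis see it; reuses the user
# manager from your existing fastapi-users setup. Names below are assumptions.
import uuid

from fastapi import APIRouter, Depends, HTTPException, status
from fastapi_users import exceptions

from myapp.users import get_user_manager  # your app's configured dependency (assumed)

router = APIRouter()


@router.get("/users/{id}", name="users:typed_user")
async def get_user_typed(id: uuid.UUID, user_manager=Depends(get_user_manager)):
    try:
        return await user_manager.get(id)  # id is already a UUID, no parse_id needed
    except exceptions.UserNotExists as e:
        raise HTTPException(status_code=status.HTTP_404_NOT_FOUND) from e
```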
gharchive/issue
2023-02-28T15:29:16
2025-04-01T04:34:14.225502
{ "authors": [ "frankie567", "gegnew" ], "repo": "fastapi-users/fastapi-users", "url": "https://github.com/fastapi-users/fastapi-users/issues/1166", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
920377179
could we have some clarification on prefixes please ? the 1R; , 0X; xx; seems like a very solid module, good job! Documentation is on my list (along with a couple of other minor items), but I've got to find the time to do it. As far as the topic prefixes go, they are loosely-matched 3-character values used to specify MQTT QoS and retain settings as follows: 1st character: Determines the QoS level at which to subscribe to the corresponding topic. Use 0 for QoS0 or 1 for QoS1. Use any other character (I recommend x) if you do not wish to subscribe to the topic. (This is useful for topics that you only need to publish to). MQTT QoS is explained here. Note that SimplMQTT does not support QoS2. 2nd character: Set to R (or r) to enable message retention for the corresponding topic. Use any other character (I recommend x) to disable message retention. MQTT message retention is explained here. 3rd character: This is just a separator between the MQTT settings and actual Topic field. It can be any character, but I recommend (and use) a semicolon. Examples: 0R;awesome_topic = subscribe to awesome_topic at QoS0 and enable message retention for publishing 1x;another_topic = subscribe to another_topic at QoS1 with no message retention for publishing xR;publish_me = do not subscribe to the publish_me topic, enable message retention for publishing Hope that helps!
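For completeness, the prefix format described above boils down to a tiny parser; here is a sketch in Python, purely illustrative since the actual module is written in SIMPL+:

```python
# Illustrative parser for the 3-character topic prefix described above:
# char 1 = QoS (0/1, anything else = don't subscribe), char 2 = retain flag (R/r),
# char 3 = separator, remainder = the MQTT topic. Not the actual SIMPL+ code.
def parse_topic(prefixed: str):
    qos_char, retain_char, _sep = prefixed[0], prefixed[1], prefixed[2]
    topic = prefixed[3:]
    subscribe_qos = int(qos_char) if qos_char in "01" else None  # None = publish-only
    retain = retain_char in "Rr"
    return topic, subscribe_qos, retain


print(parse_topic("0R;awesome_topic"))   # ('awesome_topic', 0, True)
print(parse_topic("1x;another_topic"))   # ('another_topic', 1, False)
print(parse_topic("xR;publish_me"))      # ('publish_me', None, True)
```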
gharchive/issue
2021-06-14T12:46:45
2025-04-01T04:34:14.231594
{ "authors": [ "PeterScream", "fasteddy516" ], "repo": "fasteddy516/SimplMQTT", "url": "https://github.com/fasteddy516/SimplMQTT/issues/12", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
2652008427
chore: node test migration and ts fix Checklist [ ] run npm run test and npm run benchmark [ ] tests and/or benchmarks are included [ ] documentation is changed or added [ ] commit message and code follows the Developer's Certification of Origin and the Code of conduct I think .taprc needs to be removed as well? yep done oh well, not sure why tests are passing now 🤷
gharchive/pull-request
2024-11-12T12:10:37
2025-04-01T04:34:14.425275
{ "authors": [ "simoneb" ], "repo": "fastify/one-line-logger", "url": "https://github.com/fastify/one-line-logger/pull/51", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
252885684
Wrong documentation of "latest_testflight_build_number" action New Issue Checklist [x] Updated fastlane to the latest version [x] I read the Contribution Guidelines [x] I read docs.fastlane.tools [x] I searched for existing GitHub issues Issue Description The documentation about the action `` contains a mistake for the return value type. It states that the return value is an Integer: +--------------------------------------------------------------------------+ | latest_testflight_build_number Return Value | +--------------------------------------------------------------------------+ | Integer representation of the latest build number uploaded to TestFlight | +--------------------------------------------------------------------------+ While the return value is actually a String. Environment 🚫 fastlane environment 🚫 Stack Key Value OS 10.12.6 Ruby 2.3.2 Bundler? false Git git version 2.13.5 (Apple Git-94) Installation Source ~/.rbenv/versions/2.3.2/bin/fastlane Host Mac OS X 10.12.6 (16G29) Ruby Lib Dir ~/.rbenv/versions/2.3.2/lib OpenSSL Version OpenSSL 1.0.2j 26 Sep 2016 Is contained false Is homebrew false Is installed via Fabric.app false Xcode Path ~/Downloads/Xcode-beta.app/Contents/Developer/ Xcode Version 9.0 System Locale Error No Locale with UTF8 found 🚫 fastlane gems Gem Version Update-Status fastlane 2.54.1 ✅ Up-To-Date Loaded fastlane plugins: Plugin Version Update-Status fastlane-plugin-badge 0.8.2 ✅ Up-To-Date Loaded gems Gem Version did_you_mean 1.0.0 slack-notifier 1.5.1 CFPropertyList 2.3.5 claide 1.0.2 colored2 3.1.2 nanaimo 0.2.3 xcodeproj 1.5.1 rouge 2.0.7 xcpretty 0.2.8 terminal-notifier 1.8.0 multipart-post 2.0.0 word_wrap 1.0.0 tty-screen 0.5.0 babosa 1.0.2 colored 1.2 highline 1.7.8 commander-fastlane 4.4.5 excon 0.58.0 unf_ext 0.0.7.4 unf 0.1.4 domain_name 0.5.20170404 http-cookie 1.0.3 faraday-cookie_jar 0.0.6 fastimage 2.1.0 gh_inspector 1.0.3 mini_magick 4.5.1 multi_json 1.12.1 multi_xml 0.6.0 rubyzip 1.2.1 security 0.1.3 xcpretty-travis-formatter 0.0.4 dotenv 2.2.1 bundler 1.13.6 faraday_middleware 0.12.2 json 2.1.0 io-console 0.4.5 plist 3.3.0 faraday 0.13.1 little-plugger 1.1.4 logging 2.2.2 jwt 1.5.6 memoist 0.16.0 os 0.9.6 signet 0.7.3 googleauth 0.5.3 uber 0.1.0 declarative 0.0.9 declarative-option 0.1.0 representable 3.0.4 retriable 3.1.1 mime-types-data 3.2016.0521 mime-types 3.1 httpclient 2.8.3 google-api-client 0.13.1 unicode-display_width 1.3.0 terminal-table 1.8.0 curb 0.9.3 badge 0.8.4 fastlane-plugin-badge 0.8.2 generated on: 2017-08-25 @kevin-hirsch Thanks for the heads up on this! Any chance you might be interested in submitting a patch to fix this? We are always looking for awesome changes submitted by our contributors, and this would be an impactful change to help folks use the latest_testflight_build_number action. Thanks again! 🚀
gharchive/issue
2017-08-25T12:25:38
2025-04-01T04:34:14.453313
{ "authors": [ "kevin-hirsch", "mpirri" ], "repo": "fastlane/fastlane", "url": "https://github.com/fastlane/fastlane/issues/10150", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
262772127
latest_testflight_build_number doesn't do what's described New Issue Checklist [*] Updated fastlane to the latest version [*] I read the Contribution Guidelines [*] I read docs.fastlane.tools [*] I searched for existing GitHub issues Issue Description According to latest_testflight_build_number docs: "Fetches most recent build number from TestFlight" and "Provides a way to have increment_build_number be based on the latest build you uploaded to iTC." However, it only seems to return the latest build number for the latest app version. I have two versions of the app: 1.0.0 and 1.0.1. Version 1.0.1 has two builds: 34 and 35. Version 1.0.0 has builds 0...33 and also 36 and 37. Here's what latest_testflight_build_number prints out: Fetching the latest build number for version 1.0.1 Latest upload is build number: 35 Expected value: 37 ✅ fastlane environment ✅ Stack Key Value OS 10.12.6 Ruby 2.2.4 Bundler? false Git git version 2.12.2 Installation Source ~/.fastlane/bin/bundle/bin/fastlane Host Mac OS X 10.12.6 (16G29) Ruby Lib Dir ~/.fastlane/bin/bundle/lib OpenSSL Version OpenSSL 1.0.2g 1 Mar 2016 Is contained false Is homebrew true Is installed via Fabric.app false Xcode Path /Applications/Xcode.app/Contents/Developer/ Xcode Version 9.0 System Locale Variable Value LANG en_US.UTF-8 ✅ LC_ALL en_US.UTF-8 ✅ LANGUAGE en_US.UTF-8 ✅ fastlane files: `./fastlane/Fastfile` # Customise this file, documentation can be found here: # https://github.com/fastlane/fastlane/tree/master/fastlane/docs # All available actions: https://docs.fastlane.tools/actions # can also be listed using the `fastlane actions` command # Change the syntax highlighting to Ruby # All lines starting with a # are ignored when running `fastlane` # If you want to automatically update fastlane if a new version is available: # update_fastlane # This is the minimum version number required. 
# Update this, if you use features of a newer version fastlane_version "2.23.0" default_platform :ios platform :ios do before_all do ENV["SLACK_URL"] = "https://hooks.slack.com/services/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" end desc "Runs linting (and eventually static analysis)" lane :analyze do xcodebuild( workspace: "RaffleHunt.xcworkspace", scheme: "RaffleHunt", configuration: "Debug", sdk: 'iphonesimulator', destination: 'platform=iOS Simulator,OS=9.3,name=iPhone 6', analyze: true ) end desc "Runs all the tests" lane :test do scan( scheme: "RaffleHunt", slack_channel: "@andreygordeev" ) end desc "Submit a new Beta Build to Apple TestFlight" desc "This will also make sure the profile is up to date" lane :beta do |options| ensure_git_status_clean if not options[:skipScan] scan( scheme: "RaffleHunt", slack_channel: "@andreygordeev" ) end # match(type: "appstore") # more information: https://codesigning.guide increment_build_number({ build_number: latest_testflight_build_number + 1 }) gym(scheme: "RaffleHunt", silent: true, output_directory: "./build") # Build your app - more options available pilot(changelog: options[:whatsnew], beta_app_description: options[:whatsnew], beta_app_feedback_email: "rafflehunt@andreygordeev.com") notes = changelog_from_git_commits( pretty: "- %s",# Optional, lets you provide a custom format to apply to each commit when generating the changelog text date_format: "short",# Optional, lets you provide an additional date format to dates within the pretty-formatted string match_lightweight_tag: false, # Optional, lets you ignore lightweight (non-annotated) tags when searching for the last tag merge_commit_filtering: "exclude_merges" # Optional, lets you filter out merge commits ) slack( username: "ios-bot", message: "A new beta build *#" + get_build_number + "* available.\nWhat's new: *" + options[:whatsnew] + "*\nChangelog:\n" + (notes || "No commits made yet"), success: true, default_payloads: [] ) commit_version_bump(force: true) add_git_tag push_to_git_remote # sh "your_script.sh" # You can also use other beta testing services here (run `fastlane actions`) end desc "Deploy a new version to the App Store" lane :release do # match(type: "appstore") # snapshot ensure_git_status_clean increment_build_number({ build_number: latest_testflight_build_number + 1 }) gym(scheme: "RaffleHunt", silent: true, output_directory: "./build") # Build your app - more options available deliver(force: true) commit_version_bump(force: true) add_git_tag push_to_git_remote # frameit end # You can define as many lanes as you want after_all do |lane| # This block is called, only if the executed lane was successful # slack( # message: "Successfully deployed new App Update." # ) end error do |lane, exception| slack( username: "ios-bot", channel: "@andreygordeev", message: "❗" + exception.message, success: false, default_payloads: [] ) end end # More information about multiple platforms in fastlane: https://github.com/fastlane/fastlane/blob/master/fastlane/docs/Platforms.md # All available actions: https://docs.fastlane.tools/actions # fastlane reports which actions are used # No personal data is recorded. 
Learn more at https://github.com/fastlane/enhancer `./fastlane/Appfile` app_identifier "com.purplevolt.rafflehunt" # The bundle identifier of your app apple_id "rafflehunt@andreygordeev.com" # Your Apple email address team_id "362B5635AF" # Developer Portal Team ID # you can even provide different app identifiers, Apple IDs and team names per lane: # More information: https://github.com/fastlane/fastlane/blob/master/fastlane/docs/Appfile.md fastlane gems Gem Version Update-Status fastlane 2.60.1 ✅ Up-To-Date Loaded fastlane plugins: No plugins Loaded Loaded gems Gem Version slack-notifier 1.5.1 CFPropertyList 2.3.5 claide 1.0.2 colored2 3.1.2 nanaimo 0.2.3 xcodeproj 1.5.1 rouge 1.11.1 xcpretty 0.2.6 terminal-notifier 1.7.1 unicode-display_width 1.1.3 terminal-table 1.7.3 plist 3.2.0 public_suffix 2.0.5 addressable 2.5.1 multipart-post 2.0.0 word_wrap 1.0.0 tty-screen 0.5.0 babosa 1.0.2 colored 1.2 highline 1.7.8 commander-fastlane 4.4.5 excon 0.55.0 faraday 0.12.1 unf_ext 0.0.7.4 unf 0.1.4 domain_name 0.5.20170404 http-cookie 1.0.3 faraday-cookie_jar 0.0.6 fastimage 2.1.0 gh_inspector 1.0.3 json 1.8.1 mini_magick 4.5.1 multi_json 1.12.1 multi_xml 0.6.0 rubyzip 1.2.1 security 0.1.3 xcpretty-travis-formatter 0.0.4 dotenv 2.2.0 bundler 1.14.6 faraday_middleware 0.11.0.1 uber 0.0.15 declarative 0.0.9 declarative-option 0.1.0 representable 3.0.4 retriable 2.1.0 mime-types-data 3.2016.0521 mime-types 3.1 little-plugger 1.1.4 logging 2.2.2 jwt 1.5.6 memoist 0.15.0 os 0.9.6 signet 0.7.3 googleauth 0.5.1 httpclient 2.8.3 google-api-client 0.13.5 generated on: 2017-10-04 Are you passing a version parameter to latest_testflight_build_number? build_number = latest_testflight_build_number( app_identifier: CredentialsManager::AppfileConfig.try_fetch_value(:app_identifier), version: get_version_number(xcodeproj: project) ) + 1 That should find the last build number for the given version. I'm not passing a version parameter. I want to get the latest build number for all versions. It's a common case when your app is rejected and you need to submit another 1.0.0 build, but you've already sent 1.0.1 builds I confirm that there something wrong with latest_testflight_build_number, it always returns default value. No matter which 'version' to pass as parameter. Updated fastlane to latest 2.64.1 and fastlane-plugin-versioning to latest 0.3.1 Yeah I'm stuck with this problem again :/ Looks like the latest version of latest_testflight_build_number always asks to input the app version and rreturns 1 for any version you input. Same for me. We're also experiencing something similar, where the output of latest_testflight_build_number usually just returns 1. Okay, after upgrading to fastlane 2.66.2 the behaviour has changed, and it seemingly now reports the correct build number. Verified as well that 2.66.2 fixes it Thanks everybody for confirming 👍 Same issue happening with 2.68.2 :(
gharchive/issue
2017-10-04T12:41:15
2025-04-01T04:34:14.489508
{ "authors": [ "KrauseFx", "NachoSoto", "NiltiakSivad", "agordeev", "arn8tas", "mrgerh", "vrutberg" ], "repo": "fastlane/fastlane", "url": "https://github.com/fastlane/fastlane/issues/10496", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
607664900
fastlane pilot upload error when trying to uploading the IPA file to the App Store New Issue Checklist [x] Updated fastlane to the latest version [x] I read the Contribution Guidelines [x] I read docs.fastlane.tools [x] I searched for existing GitHub issues Issue Description I was using Azure DevOps's Apple App Store Release pipeline to upload IPA file to the App Store, and running into the following error when executing "fastlane pilot upload" command. 2020-04-27T15:44:59.2852120Z /usr/local/lib/ruby/site_ruby/2.6.0/rubygems/core_ext/kernel_require.rb:175:in ensure in require': CRITICAL: RUBYGEMS_ACTIVATION_MONITOR.owned?: before false -> after true (RuntimeError) 2020-04-27T15:44:59.2855050Z from /usr/local/lib/ruby/site_ruby/2.6.0/rubygems/core_ext/kernel_require.rb:175:in require' 2020-04-27T15:44:59.2856870Z from /usr/local/lib/ruby/site_ruby/2.6.0/rubygems.rb:230:in finish_resolve' 2020-04-27T15:44:59.2858720Z from /usr/local/lib/ruby/site_ruby/2.6.0/rubygems.rb:297:in block in activate_bin_path' 2020-04-27T15:44:59.2860400Z from /usr/local/lib/ruby/site_ruby/2.6.0/rubygems.rb:295:in synchronize' 2020-04-27T15:44:59.2862020Z from /usr/local/lib/ruby/site_ruby/2.6.0/rubygems.rb:295:in activate_bin_path' 2020-04-27T15:44:59.2863480Z from /usr/local/bin/fastlane:23:in `' 2020-04-27T15:44:59.3050400Z ##[error]Error: fastlane failed with return code: 1 Command executed fastlane pilot upload -u *** -i /Users/user/Downloads/vsts-agent-osx-x64-2.165.2/_work/r2/a/TestApp iOS/drop/TestApp/TestAppiOS.ipa -a com.ipathsystem.TestApp Complete output when running fastlane, including the stack trace and command used 2020-04-27T15:44:14.0182700Z ##[section]Starting: Publish TestApp to the App Store TestFlight track 2020-04-27T15:44:14.0195280Z ============================================================================== 2020-04-27T15:44:14.0195710Z Task : Apple App Store Release 2020-04-27T15:44:14.0196090Z Description : Release an app to TestFlight or the Apple App Store 2020-04-27T15:44:14.0196440Z Version : 1.158.0 2020-04-27T15:44:14.0196730Z Author : Microsoft Corporation 2020-04-27T15:44:14.0198110Z Help : More Information 2020-04-27T15:44:14.0198670Z ============================================================================== 2020-04-27T15:44:14.6885030Z c01c02bc-af94-408f-ac90-e6c3c5a5d22b exists true 2020-04-27T15:44:14.7074530Z [command]/usr/local/opt/ruby/bin/gem install fastlane 2020-04-27T15:44:14.7565140Z (node:8951) Warning: Use Cipheriv for counter mode of aes-256-ctr 2020-04-27T15:44:14.7566920Z (node:8951) Warning: Use Cipheriv for counter mode of aes-256-ctr 2020-04-27T15:44:14.7570030Z (node:8951) Warning: Use Cipheriv for counter mode of aes-256-ctr 2020-04-27T15:44:14.7572420Z (node:8951) Warning: Use Cipheriv for counter mode of aes-256-ctr 2020-04-27T15:44:14.7574910Z (node:8951) Warning: Use Cipheriv for counter mode of aes-256-ctr 2020-04-27T15:44:14.7577280Z (node:8951) Warning: Use Cipheriv for counter mode of aes-256-ctr 2020-04-27T15:44:14.7579440Z (node:8951) Warning: Use Cipheriv for counter mode of aes-256-ctr 2020-04-27T15:44:14.7582310Z (node:8951) Warning: Use Cipheriv for counter mode of aes-256-ctr 2020-04-27T15:44:14.7585000Z (node:8951) Warning: Use Cipheriv for counter mode of aes-256-ctr 2020-04-27T15:44:14.7590110Z (node:8951) Warning: Use Cipheriv for counter mode of aes-256-ctr 2020-04-27T15:44:14.7593670Z (node:8951) Warning: Use Cipheriv for counter mode of aes-256-ctr 2020-04-27T15:44:14.7596080Z (node:8951) Warning: Use Cipheriv for counter mode of 
aes-256-ctr 2020-04-27T15:44:14.7599540Z (node:8951) Warning: Use Cipheriv for counter mode of aes-256-ctr 2020-04-27T15:44:14.7600910Z (node:8951) Warning: Use Cipheriv for counter mode of aes-256-ctr 2020-04-27T15:44:14.7602020Z (node:8951) Warning: Use Cipheriv for counter mode of aes-256-ctr 2020-04-27T15:44:14.7603190Z (node:8951) Warning: Use Cipheriv for counter mode of aes-256-ctr 2020-04-27T15:44:14.7604410Z (node:8951) Warning: Use Cipheriv for counter mode of aes-256-ctr 2020-04-27T15:44:14.7605710Z (node:8951) Warning: Use Cipheriv for counter mode of aes-256-ctr 2020-04-27T15:44:14.7685300Z (node:8951) Warning: Use Cipheriv for counter mode of aes-256-ctr 2020-04-27T15:44:14.7688110Z (node:8951) Warning: Use Cipheriv for counter mode of aes-256-ctr 2020-04-27T15:44:14.7689700Z (node:8951) Warning: Use Cipheriv for counter mode of aes-256-ctr 2020-04-27T15:44:14.7691020Z (node:8951) Warning: Use Cipheriv for counter mode of aes-256-ctr 2020-04-27T15:44:14.7692370Z (node:8951) Warning: Use Cipheriv for counter mode of aes-256-ctr 2020-04-27T15:44:14.7693690Z (node:8951) Warning: Use Cipheriv for counter mode of aes-256-ctr 2020-04-27T15:44:14.7694920Z (node:8951) Warning: Use Cipheriv for counter mode of aes-256-ctr 2020-04-27T15:44:14.7696070Z (node:8951) Warning: Use Cipheriv for counter mode of aes-256-ctr 2020-04-27T15:44:14.7697210Z (node:8951) Warning: Use Cipheriv for counter mode of aes-256-ctr 2020-04-27T15:44:14.7698370Z (node:8951) Warning: Use Cipheriv for counter mode of aes-256-ctr 2020-04-27T15:44:14.7699420Z (node:8951) Warning: Use Cipheriv for counter mode of aes-256-ctr 2020-04-27T15:44:22.0417070Z NOTE: Gem::Specification#rubyforge_project= is deprecated with no replacement. It will be removed on or after 2019-12-01. 2020-04-27T15:44:22.0419330Z Gem::Specification#rubyforge_project= called from /Users/user/.gem-cache/specifications/rouge-2.0.7.gemspec:17. 2020-04-27T15:44:28.8162880Z NOTE: Gem::Specification#rubyforge_project= is deprecated with no replacement. It will be removed on or after 2019-12-01. 2020-04-27T15:44:28.8166540Z Gem::Specification#rubyforge_project= called from /Users/user/.gem-cache/specifications/excon-0.59.0.gemspec:19. 2020-04-27T15:44:29.2221030Z NOTE: Gem::Specification#rubyforge_project= is deprecated with no replacement. It will be removed on or after 2019-12-01. 2020-04-27T15:44:29.2222410Z Gem::Specification#rubyforge_project= called from /Users/user/.gem-cache/specifications/highline-1.7.8.gemspec:20. 2020-04-27T15:44:29.2242570Z NOTE: Gem::Specification#rubyforge_project= is deprecated with no replacement. It will be removed on or after 2019-12-01. 2020-04-27T15:44:29.2243850Z Gem::Specification#rubyforge_project= called from /Users/user/.gem-cache/specifications/little-plugger-1.1.4.gemspec:18. 2020-04-27T15:44:29.2247960Z NOTE: Gem::Specification#rubyforge_project= is deprecated with no replacement. It will be removed on or after 2019-12-01. 2020-04-27T15:44:29.2249230Z Gem::Specification#rubyforge_project= called from /Users/user/.gem-cache/specifications/logging-2.2.2.gemspec:18. 2020-04-27T15:44:39.4912580Z NOTE: Gem::Specification#rubyforge_project= is deprecated with no replacement. It will be removed on or after 2019-12-01. 2020-04-27T15:44:39.4913890Z Gem::Specification#rubyforge_project= called from /Users/user/.gem-cache/specifications/excon-0.59.0.gemspec:19. 2020-04-27T15:44:39.4915120Z NOTE: Gem::Specification#rubyforge_project= is deprecated with no replacement. It will be removed on or after 2019-12-01. 
2020-04-27T15:44:39.4916230Z Gem::Specification#rubyforge_project= called from /Users/user/.gem-cache/specifications/highline-1.7.8.gemspec:20. 2020-04-27T15:44:40.2797780Z NOTE: Gem::Specification#rubyforge_project= is deprecated with no replacement. It will be removed on or after 2019-12-01. 2020-04-27T15:44:40.2799210Z Gem::Specification#rubyforge_project= called from /Users/user/.gem-cache/specifications/rouge-2.0.7.gemspec:17. 2020-04-27T15:44:41.6613120Z NOTE: Gem::Specification#rubyforge_project= is deprecated with no replacement. It will be removed on or after 2019-12-01. 2020-04-27T15:44:41.6615090Z Gem::Specification#rubyforge_project= called from /Users/user/.gem-cache/specifications/little-plugger-1.1.4.gemspec:18. 2020-04-27T15:44:41.6616710Z NOTE: Gem::Specification#rubyforge_project= is deprecated with no replacement. It will be removed on or after 2019-12-01. 2020-04-27T15:44:41.6617860Z Gem::Specification#rubyforge_project= called from /Users/user/.gem-cache/specifications/little-plugger-1.1.4.gemspec:18. 2020-04-27T15:44:41.6620990Z NOTE: Gem::Specification#rubyforge_project= is deprecated with no replacement. It will be removed on or after 2019-12-01. 2020-04-27T15:44:41.6622450Z Gem::Specification#rubyforge_project= called from /Users/user/.gem-cache/specifications/little-plugger-1.1.4.gemspec:18. 2020-04-27T15:44:41.6624590Z NOTE: Gem::Specification#rubyforge_project= is deprecated with no replacement. It will be removed on or after 2019-12-01. 2020-04-27T15:44:41.6625810Z Gem::Specification#rubyforge_project= called from /Users/user/.gem-cache/specifications/little-plugger-1.1.4.gemspec:18. 2020-04-27T15:44:41.6630490Z NOTE: Gem::Specification#rubyforge_project= is deprecated with no replacement. It will be removed on or after 2019-12-01. 2020-04-27T15:44:41.6631930Z Gem::Specification#rubyforge_project= called from /Users/user/.gem-cache/specifications/little-plugger-1.1.4.gemspec:18. 2020-04-27T15:44:41.6639840Z NOTE: Gem::Specification#rubyforge_project= is deprecated with no replacement. It will be removed on or after 2019-12-01. 2020-04-27T15:44:41.6642520Z Gem::Specification#rubyforge_project= called from /Users/user/.gem-cache/specifications/little-plugger-1.1.4.gemspec:18. 2020-04-27T15:44:41.6644760Z NOTE: Gem::Specification#rubyforge_project= is deprecated with no replacement. It will be removed on or after 2019-12-01. 2020-04-27T15:44:41.6647020Z Gem::Specification#rubyforge_project= called from /Users/user/.gem-cache/specifications/logging-2.2.2.gemspec:18. 2020-04-27T15:44:41.6653220Z NOTE: Gem::Specification#rubyforge_project= is deprecated with no replacement. It will be removed on or after 2019-12-01. 2020-04-27T15:44:41.6657390Z Gem::Specification#rubyforge_project= called from /Users/user/.gem-cache/specifications/little-plugger-1.1.4.gemspec:18. 2020-04-27T15:44:41.6690800Z NOTE: Gem::Specification#rubyforge_project= is deprecated with no replacement. It will be removed on or after 2019-12-01. 2020-04-27T15:44:41.6692280Z Gem::Specification#rubyforge_project= called from /Users/user/.gem-cache/specifications/logging-2.2.2.gemspec:18. 2020-04-27T15:44:41.6694400Z NOTE: Gem::Specification#rubyforge_project= is deprecated with no replacement. It will be removed on or after 2019-12-01. 2020-04-27T15:44:41.6696530Z Gem::Specification#rubyforge_project= called from /Users/user/.gem-cache/specifications/logging-2.2.2.gemspec:18. 2020-04-27T15:44:41.6702880Z NOTE: Gem::Specification#rubyforge_project= is deprecated with no replacement. 
It will be removed on or after 2019-12-01. 2020-04-27T15:44:41.6704180Z Gem::Specification#rubyforge_project= called from /Users/user/.gem-cache/specifications/logging-2.2.2.gemspec:18. 2020-04-27T15:44:41.6707780Z NOTE: Gem::Specification#rubyforge_project= is deprecated with no replacement. It will be removed on or after 2019-12-01. 2020-04-27T15:44:41.6709030Z Gem::Specification#rubyforge_project= called from /Users/user/.gem-cache/specifications/logging-2.2.2.gemspec:18. 2020-04-27T15:44:41.6713600Z NOTE: Gem::Specification#rubyforge_project= is deprecated with no replacement. It will be removed on or after 2019-12-01. 2020-04-27T15:44:41.6714940Z Gem::Specification#rubyforge_project= called from /Users/user/.gem-cache/specifications/logging-2.2.2.gemspec:18. 2020-04-27T15:44:42.1998960Z NOTE: Gem::Specification#rubyforge_project= is deprecated with no replacement. It will be removed on or after 2019-12-01. 2020-04-27T15:44:42.2000320Z Gem::Specification#rubyforge_project= called from /Users/user/.gem-cache/specifications/multipart-post-2.0.0.gemspec:17. 2020-04-27T15:44:44.9321840Z NOTE: Gem::Specification#rubyforge_project= is deprecated with no replacement. It will be removed on or after 2019-12-01. 2020-04-27T15:44:44.9323150Z Gem::Specification#rubyforge_project= called from /Users/user/.gem-cache/specifications/rouge-2.0.7.gemspec:17. 2020-04-27T15:44:53.9013200Z Successfully installed fastlane-2.146.1 2020-04-27T15:44:53.9014650Z Parsing documentation for fastlane-2.146.1 2020-04-27T15:44:53.9015370Z Done installing documentation for fastlane after 9 seconds 2020-04-27T15:44:53.9015940Z 1 gem installed 2020-04-27T15:44:54.0372200Z [command]/usr/local/opt/ruby/bin/gem update fastlane -i /Users/user/.gem-cache 2020-04-27T15:44:54.3568410Z NOTE: Gem::Specification#rubyforge_project= is deprecated with no replacement. It will be removed on or after 2019-12-01. 2020-04-27T15:44:54.3570000Z Gem::Specification#rubyforge_project= called from /Users/user/.gem-cache/specifications/rouge-2.0.7.gemspec:17. 2020-04-27T15:44:54.4235150Z NOTE: Gem::Specification#rubyforge_project= is deprecated with no replacement. It will be removed on or after 2019-12-01. 2020-04-27T15:44:54.4236510Z Gem::Specification#rubyforge_project= called from /Users/user/.gem-cache/specifications/excon-0.59.0.gemspec:19. 2020-04-27T15:44:54.4730110Z NOTE: Gem::Specification#rubyforge_project= is deprecated with no replacement. It will be removed on or after 2019-12-01. 2020-04-27T15:44:54.4732300Z Gem::Specification#rubyforge_project= called from /Users/user/.gem-cache/specifications/highline-1.7.8.gemspec:20. 2020-04-27T15:44:54.4753830Z NOTE: Gem::Specification#rubyforge_project= is deprecated with no replacement. It will be removed on or after 2019-12-01. 2020-04-27T15:44:54.4755370Z Gem::Specification#rubyforge_project= called from /Users/user/.gem-cache/specifications/little-plugger-1.1.4.gemspec:18. 2020-04-27T15:44:54.4759350Z NOTE: Gem::Specification#rubyforge_project= is deprecated with no replacement. It will be removed on or after 2019-12-01. 2020-04-27T15:44:54.4761260Z Gem::Specification#rubyforge_project= called from /Users/user/.gem-cache/specifications/logging-2.2.2.gemspec:18. 2020-04-27T15:44:54.9982490Z NOTE: Gem::Specification#rubyforge_project= is deprecated with no replacement. It will be removed on or after 2019-12-01. 2020-04-27T15:44:54.9983810Z Gem::Specification#rubyforge_project= called from /Users/user/.gem-cache/specifications/rouge-2.0.7.gemspec:17. 
2020-04-27T15:44:55.0731160Z NOTE: Gem::Specification#rubyforge_project= is deprecated with no replacement. It will be removed on or after 2019-12-01. 2020-04-27T15:44:55.0733440Z Gem::Specification#rubyforge_project= called from /Users/user/.gem-cache/specifications/excon-0.59.0.gemspec:19. 2020-04-27T15:44:55.1290040Z NOTE: Gem::Specification#rubyforge_project= is deprecated with no replacement. It will be removed on or after 2019-12-01. 2020-04-27T15:44:55.1291390Z Gem::Specification#rubyforge_project= called from /Users/user/.gem-cache/specifications/highline-1.7.8.gemspec:20. 2020-04-27T15:44:55.1310670Z NOTE: Gem::Specification#rubyforge_project= is deprecated with no replacement. It will be removed on or after 2019-12-01. 2020-04-27T15:44:55.1311900Z Gem::Specification#rubyforge_project= called from /Users/user/.gem-cache/specifications/little-plugger-1.1.4.gemspec:18. 2020-04-27T15:44:55.1314800Z NOTE: Gem::Specification#rubyforge_project= is deprecated with no replacement. It will be removed on or after 2019-12-01. 2020-04-27T15:44:55.1315960Z Gem::Specification#rubyforge_project= called from /Users/user/.gem-cache/specifications/logging-2.2.2.gemspec:18. 2020-04-27T15:44:58.5394070Z Updating installed gems 2020-04-27T15:44:58.5394500Z Nothing to update 2020-04-27T15:44:58.5728930Z [command]fastlane pilot upload -u *** -i /Users/user/Downloads/vsts-agent-osx-x64-2.165.2/_work/r2/a/TestApp iOS/drop/TestApp/TestAppiOS.ipa -a com.ipathsystem.TestApp 2020-04-27T15:44:58.7945160Z NOTE: Gem::Specification#rubyforge_project= is deprecated with no replacement. It will be removed on or after 2019-12-01. 2020-04-27T15:44:58.7946770Z Gem::Specification#rubyforge_project= called from /Users/user/.gem-cache/specifications/rouge-2.0.7.gemspec:17. 2020-04-27T15:44:58.8204740Z NOTE: Gem::Specification#rubyforge_project= is deprecated with no replacement. It will be removed on or after 2019-12-01. 2020-04-27T15:44:58.8207900Z Gem::Specification#rubyforge_project= called from /Users/user/.gem-cache/specifications/highline-1.7.8.gemspec:20. 
2020-04-27T15:44:59.2852120Z /usr/local/lib/ruby/site_ruby/2.6.0/rubygems/core_ext/kernel_require.rb:175:in ensure in require': CRITICAL: RUBYGEMS_ACTIVATION_MONITOR.owned?: before false -> after true (RuntimeError) 2020-04-27T15:44:59.2855050Z from /usr/local/lib/ruby/site_ruby/2.6.0/rubygems/core_ext/kernel_require.rb:175:in require' 2020-04-27T15:44:59.2856870Z from /usr/local/lib/ruby/site_ruby/2.6.0/rubygems.rb:230:in finish_resolve' 2020-04-27T15:44:59.2858720Z from /usr/local/lib/ruby/site_ruby/2.6.0/rubygems.rb:297:in block in activate_bin_path' 2020-04-27T15:44:59.2860400Z from /usr/local/lib/ruby/site_ruby/2.6.0/rubygems.rb:295:in synchronize' 2020-04-27T15:44:59.2862020Z from /usr/local/lib/ruby/site_ruby/2.6.0/rubygems.rb:295:in activate_bin_path' 2020-04-27T15:44:59.2863480Z from /usr/local/bin/fastlane:23:in `' 2020-04-27T15:44:59.3050400Z ##[error]Error: fastlane failed with return code: 1 2020-04-27T15:44:59.3112910Z ##[section]Finishing: Publish TestApp to the App Store TestFlight track Environment MacMini2014:vsts-agent-osx-x64-2.133.3 user$ fastlane env [✔] 🚀 [08:32:33]: fastlane detected a Gemfile in the current directory [08:32:33]: However, it seems like you didn't use bundle exec [08:32:33]: To launch fastlane faster, please use [08:32:33]: [08:32:33]: $ bundle exec fastlane env [08:32:33]: [08:32:33]: Get started using a Gemfile for fastlane https://docs.fastlane.tools/getting-started/ios/setup/#use-a-gemfile [08:35:13]: Generating fastlane environment output, this might take a few seconds... ✅ fastlane environment ✅ Stack | Key | Value | | --------------------------- | ------------------------------------------- | | OS | 10.15.4 | | Ruby | 2.6.5 | | Bundler? | false | | Git | git version 2.24.2 (Apple Git-127) | | Installation Source | /usr/local/bin/fastlane | | Host | Mac OS X 10.15.4 (19E266) | | Ruby Lib Dir | /usr/local/Cellar/ruby/2.6.5/lib | | OpenSSL Version | OpenSSL 1.1.1d 10 Sep 2019 | | Is contained | false | | Is homebrew | false | | Is installed via Fabric.app | false | | Xcode Path | /Applications/Xcode.app/Contents/Developer/ | | Xcode Version | 11.4.1 | System Locale | Variable | Value | | | -------- | ----------- | - | | LANG | en_US.UTF-8 | ✅ | | LC_ALL | | | | LANGUAGE | | | fastlane files: No Fastfile found No Appfile found fastlane gems | Gem | Version | Update-Status | | -------- | ------- | ------------- | | fastlane | 2.146.1 | ✅ Up-To-Date | Loaded fastlane plugins: No plugins Loaded Loaded gems | Gem | Version | | ------------------------- | ------------ | | did_you_mean | 1.3.0 | | slack-notifier | 2.3.2 | | atomos | 0.1.3 | | CFPropertyList | 3.0.2 | | claide | 1.0.3 | | colored2 | 3.1.2 | | nanaimo | 0.2.6 | | xcodeproj | 1.16.0 | | rouge | 2.0.7 | | xcpretty | 0.3.0 | | terminal-notifier | 2.0.0 | | unicode-display_width | 1.7.0 | | terminal-table | 1.8.0 | | plist | 3.5.0 | | public_suffix | 2.0.5 | | addressable | 2.7.0 | | multipart-post | 2.0.0 | | word_wrap | 1.0.0 | | tty-screen | 0.7.1 | | tty-cursor | 0.7.1 | | tty-spinner | 0.9.3 | | babosa | 1.0.3 | | colored | 1.2 | | highline | 1.7.10 | | commander-fastlane | 4.4.6 | | excon | 0.73.0 | | faraday | 0.17.3 | | unf_ext | 0.0.7.7 | | unf | 0.1.4 | | domain_name | 0.5.20190701 | | http-cookie | 1.0.3 | | faraday-cookie_jar | 0.0.6 | | faraday_middleware | 0.13.1 | | fastimage | 2.1.7 | | gh_inspector | 1.1.3 | | json | 2.3.0 | | mini_magick | 4.10.1 | | multi_xml | 0.6.0 | | rubyzip | 1.3.0 | | security | 0.1.3 | | xcpretty-travis-formatter | 1.0.0 | | dotenv | 2.7.5 | | 
naturally | 2.2.0 | | simctl | 1.6.8 | | jwt | 2.1.0 | | uber | 0.1.0 | | declarative | 0.0.10 | | declarative-option | 0.1.0 | | representable | 3.0.4 | | retriable | 3.1.2 | | mini_mime | 1.0.2 | | multi_json | 1.14.1 | | signet | 0.14.0 | | memoist | 0.16.2 | | os | 1.1.0 | | googleauth | 0.12.0 | | httpclient | 2.8.3 | | google-api-client | 0.36.4 | | google-cloud-env | 1.3.1 | | google-cloud-errors | 1.0.0 | | google-cloud-core | 1.5.0 | | digest-crc | 0.5.1 | | google-cloud-storage | 1.26.0 | | emoji_regex | 1.0.1 | | jmespath | 1.4.0 | | aws-partitions | 1.303.0 | | aws-eventstream | 1.1.0 | | aws-sigv4 | 1.1.2 | | aws-sdk-core | 3.94.0 | | aws-sdk-kms | 1.30.0 | | aws-sdk-s3 | 1.63.0 | | bundler | 2.1.4 | generated on: 2020-04-27 [08:35:22]: Take notice that this output may contain sensitive information, or simply information that you don't want to make public. [08:35:22]: 🙄 Wow, that's a lot of markdown text... should fastlane put it into your clipboard, so you can easily paste it on GitHub? (y/n) y [08:37:52]: Successfully copied markdown into your clipboard 🎨 [08:37:52]: Open https://github.com/fastlane/fastlane/issues/new to submit a new issue ✅ Hi @xplatform Were you ever able to find a solution to this issue?
gharchive/issue
2020-04-27T16:00:40
2025-04-01T04:34:14.558238
{ "authors": [ "miscampbell", "xplatform" ], "repo": "fastlane/fastlane", "url": "https://github.com/fastlane/fastlane/issues/16387", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1035048429
Edit promotional_text for live app in "Ready for Sale" state New Issue Checklist [x] Updated fastlane to the latest version [x] I read the Contribution Guidelines [x] I read docs.fastlane.tools [x] I searched for existing GitHub issues Issue Description I tried to use deliver to update the promotional text of a live app without success. I used both edit_live and use_live_version, but neither allow me to change the metadata for the current live app, only for future releases. Command executed upload_to_app_store(edit_live: true, use_live_version: true, submit_for_review: false, force: true, app_identifier: ENV['APP_IDENTIFIER'], app_version: ENV['APP_VERSION'], metadata_path: ENV['METADATA_PATH'], run_precheck_before_submit: false, automatic_release: true, skip_screenshots: true, skip_binary_upload: true, reject_if_possible: false, ignore_language_directory_validation: true) Complete output when running fastlane, including the stack trace and command used [13:56:32]: fastlane finished with errors Looking for related GitHub issues on fastlane/fastlane... Found no similar issues. To create a new issue, please visit: https://github.com/fastlane/fastlane/issues/new Run fastlane env to append the fastlane environment to your issue bundler: failed to load command: fastlane (/usr/local/bin/fastlane) ArgumentError: [!] Enqueue Array instead of NilClass /Library/Ruby/Gems/2.6.0/gems/fastlane-2.197.0/fastlane_core/lib/fastlane_core/queue_worker.rb:26:in batch_enqueue' /Library/Ruby/Gems/2.6.0/gems/fastlane-2.197.0/deliver/lib/deliver/upload_metadata.rb:210:in upload' /Library/Ruby/Gems/2.6.0/gems/fastlane-2.197.0/deliver/lib/deliver/runner.rb:146:in upload_metadata' /Library/Ruby/Gems/2.6.0/gems/fastlane-2.197.0/deliver/lib/deliver/runner.rb:55:in run' /Library/Ruby/Gems/2.6.0/gems/fastlane-2.197.0/fastlane/lib/fastlane/actions/upload_to_app_store.rb:22:in run' /Library/Ruby/Gems/2.6.0/gems/fastlane-2.197.0/fastlane/lib/fastlane/runner.rb:263:in block (2 levels) in execute_action' /Library/Ruby/Gems/2.6.0/gems/fastlane-2.197.0/fastlane/lib/fastlane/actions/actions_helper.rb:69:in execute_action' /Library/Ruby/Gems/2.6.0/gems/fastlane-2.197.0/fastlane/lib/fastlane/runner.rb:255:in block in execute_action' /Library/Ruby/Gems/2.6.0/gems/fastlane-2.197.0/fastlane/lib/fastlane/runner.rb:229:in chdir' /Library/Ruby/Gems/2.6.0/gems/fastlane-2.197.0/fastlane/lib/fastlane/runner.rb:229:in execute_action' /Library/Ruby/Gems/2.6.0/gems/fastlane-2.197.0/fastlane/lib/fastlane/runner.rb:157:in trigger_action_by_name' /Library/Ruby/Gems/2.6.0/gems/fastlane-2.197.0/fastlane/lib/fastlane/fast_file.rb:159:in method_missing' Fastfile:7:in block in parsing_binding' /Library/Ruby/Gems/2.6.0/gems/fastlane-2.197.0/fastlane/lib/fastlane/lane.rb:33:in call' /Library/Ruby/Gems/2.6.0/gems/fastlane-2.197.0/fastlane/lib/fastlane/runner.rb:49:in block in execute' /Library/Ruby/Gems/2.6.0/gems/fastlane-2.197.0/fastlane/lib/fastlane/runner.rb:45:in chdir' /Library/Ruby/Gems/2.6.0/gems/fastlane-2.197.0/fastlane/lib/fastlane/runner.rb:45:in execute' /Library/Ruby/Gems/2.6.0/gems/fastlane-2.197.0/fastlane/lib/fastlane/lane_manager.rb:47:in cruise_lane' /Library/Ruby/Gems/2.6.0/gems/fastlane-2.197.0/fastlane/lib/fastlane/command_line_handler.rb:36:in handle' /Library/Ruby/Gems/2.6.0/gems/fastlane-2.197.0/fastlane/lib/fastlane/commands_generator.rb:109:in block (2 levels) in run' /Library/Ruby/Gems/2.6.0/gems/commander-4.6.0/lib/commander/command.rb:187:in call' 
/Library/Ruby/Gems/2.6.0/gems/commander-4.6.0/lib/commander/command.rb:157:in run' /Library/Ruby/Gems/2.6.0/gems/commander-4.6.0/lib/commander/runner.rb:444:in run_active_command' /Library/Ruby/Gems/2.6.0/gems/fastlane-2.197.0/fastlane_core/lib/fastlane_core/ui/fastlane_runner.rb:117:in run!' /Library/Ruby/Gems/2.6.0/gems/commander-4.6.0/lib/commander/delegates.rb:18:in run!' /Library/Ruby/Gems/2.6.0/gems/fastlane-2.197.0/fastlane/lib/fastlane/commands_generator.rb:353:in run' /Library/Ruby/Gems/2.6.0/gems/fastlane-2.197.0/fastlane/lib/fastlane/commands_generator.rb:42:in start' /Library/Ruby/Gems/2.6.0/gems/fastlane-2.197.0/fastlane/lib/fastlane/cli_tools_distributor.rb:122:in take_off' /Library/Ruby/Gems/2.6.0/gems/fastlane-2.197.0/bin/fastlane:23:in <top (required)>' /usr/local/bin/fastlane:23:in load' /usr/local/bin/fastlane:23:in `<top (required)>' Environment ✅ fastlane environment ✅ Stack Key Value OS 11.6 Ruby 2.6.3 Bundler? false Git git version 2.30.1 (Apple Git-130) Installation Source /usr/local/bin/fastlane Host macOS 11.6 (20G165) Ruby Lib Dir /System/Library/Frameworks/Ruby.framework/Versions/2.6/usr/lib OpenSSL Version LibreSSL 2.8.3 Is contained false Is homebrew false Is installed via Fabric.app false Xcode Path /Applications/Xcode.app/Contents/Developer/ Xcode Version 13.0 Swift Version 5.5 System Locale Variable Value LANG en_US.UTF-8 ✅ LC_ALL en_US.UTF-8 ✅ LANGUAGE fastlane gems Gem Version Update-Status fastlane 2.197.0 ✅ Up-To-Date Loaded fastlane plugins: No plugins Loaded Loaded gems Gem Version did_you_mean 1.3.0 rouge 2.0.7 xcpretty 0.3.0 terminal-notifier 2.0.0 terminal-table 1.8.0 addressable 2.8.0 multipart-post 2.0.0 word_wrap 1.0.0 optparse 0.1.1 artifactory 3.0.15 colored 1.2 highline 2.0.3 commander 4.6.0 gh_inspector 1.1.3 security 0.1.3 rexml 3.2.5 nanaimo 0.3.0 colored2 3.1.2 claide 1.0.3 CFPropertyList 3.0.4 atomos 0.1.3 xcodeproj 1.21.0 unicode-display_width 1.8.0 plist 3.6.0 public_suffix 4.0.6 tty-screen 0.8.1 tty-cursor 0.7.1 tty-spinner 0.9.3 babosa 1.0.4 excon 0.87.0 unf_ext 0.0.8 unf 0.1.4 domain_name 0.5.20190701 http-cookie 1.0.4 ruby2_keywords 0.0.5 faraday-rack 1.0.0 faraday-patron 1.0.0 faraday-net_http_persistent 1.2.0 faraday-net_http 1.0.1 faraday-httpclient 1.0.1 faraday-excon 1.1.0 faraday-em_synchrony 1.0.0 faraday-em_http 1.0.0 faraday 1.8.0 faraday-cookie_jar 0.0.7 faraday_middleware 1.2.0 fastimage 2.2.5 json 2.6.1 mini_magick 4.11.0 naturally 2.2.1 rubyzip 2.3.2 xcpretty-travis-formatter 1.0.1 dotenv 2.7.6 bundler 2.1.4 simctl 1.6.8 jwt 2.3.0 webrick 1.7.0 httpclient 2.8.3 multi_json 1.15.0 signet 0.16.0 os 1.1.1 memoist 0.16.2 googleauth 1.1.0 mini_mime 1.1.2 retriable 3.1.2 trailblazer-option 0.1.1 declarative 0.0.20 uber 0.1.0 representable 3.1.1 google-apis-core 0.4.1 google-apis-playcustomapp_v1 0.5.0 google-apis-androidpublisher_v3 0.12.0 rake 13.0.6 digest-crc 0.6.4 google-apis-storage_v1 0.8.0 google-apis-iamcredentials_v1 0.7.0 google-cloud-errors 1.2.0 google-cloud-env 1.5.0 google-cloud-core 1.6.0 google-cloud-storage 1.34.1 emoji_regex 3.2.3 aws-eventstream 1.2.0 aws-sigv4 1.4.0 aws-partitions 1.518.0 jmespath 1.4.0 aws-sdk-core 3.121.3 aws-sdk-kms 1.50.0 aws-sdk-s3 1.104.0 forwardable 1.2.0 logger 1.3.0 date 2.0.0 stringio 0.0.2 ipaddr 1.2.2 openssl 2.1.2 zlib 1.0.0 mutex_m 0.1.0 connection_pool 2.2.2 net-http-persistent 3.1.0 net-http-pipeline 1.0.1 ostruct 0.1.0 strscan 1.0.0 io-console 0.4.7 fileutils 1.1.0 etc 1.0.1 libxml-ruby 3.2.1 psych 3.1.0 generated on: 2021-10-25 Hi! Does anyone has an update on this issue? 
Thank you Hi! We have the same issue. We use deliver to update the promotional text of our live App Store version and we got the same error. I'm encountering the same error when using deliver with the edit_live flag enabled. This issue is still reproducible with the latest fastlane version 2.204.2. Fastlane deliver( edit_live: true, force: true, run_precheck_before_submit: false, skip_screenshots: true, promotional_text: { "en-US" => "My Promo Text", "de-DE" => "Mein Promo-Text", } ) Stacktrace /Users/felix/.rvm/gems/ruby-3.0.0/gems/fastlane-2.204.2/fastlane_core/lib/fastlane_core/queue_worker.rb:26:in `batch_enqueue': \e[31m[!] Enqueue Array instead of NilClass\e[0m (ArgumentError) from /Users/felix/.rvm/gems/ruby-3.0.0/gems/fastlane-2.204.2/deliver/lib/deliver/upload_metadata.rb:210:in `upload' from /Users/felix/.rvm/gems/ruby-3.0.0/gems/fastlane-2.204.2/deliver/lib/deliver/runner.rb:146:in `upload_metadata' from /Users/felix/.rvm/gems/ruby-3.0.0/gems/fastlane-2.204.2/deliver/lib/deliver/runner.rb:55:in `run' from /Users/felix/.rvm/gems/ruby-3.0.0/gems/fastlane-2.204.2/fastlane/lib/fastlane/actions/upload_to_app_store.rb:22:in `run' from /Users/felix/.rvm/gems/ruby-3.0.0/gems/fastlane-2.204.2/fastlane/lib/fastlane/runner.rb:263:in `block (2 levels) in execute_action' from /Users/felix/.rvm/gems/ruby-3.0.0/gems/fastlane-2.204.2/fastlane/lib/fastlane/actions/actions_helper.rb:69:in `execute_action' from /Users/felix/.rvm/gems/ruby-3.0.0/gems/fastlane-2.204.2/fastlane/lib/fastlane/runner.rb:255:in `block in execute_action' from /Users/felix/.rvm/gems/ruby-3.0.0/gems/fastlane-2.204.2/fastlane/lib/fastlane/runner.rb:229:in `chdir' from /Users/felix/.rvm/gems/ruby-3.0.0/gems/fastlane-2.204.2/fastlane/lib/fastlane/runner.rb:229:in `execute_action' from /Users/felix/.rvm/gems/ruby-3.0.0/gems/fastlane-2.204.2/fastlane/lib/fastlane/runner.rb:157:in `trigger_action_by_name' from /Users/felix/.rvm/gems/ruby-3.0.0/gems/fastlane-2.204.2/fastlane/lib/fastlane/fast_file.rb:159:in `method_missing' from configuration/Fastfile_Discounts:52:in `block (2 levels) in parsing_binding' from /Users/felix/.rvm/gems/ruby-3.0.0/gems/fastlane-2.204.2/fastlane/lib/fastlane/lane.rb:33:in `call' from /Users/felix/.rvm/gems/ruby-3.0.0/gems/fastlane-2.204.2/fastlane/lib/fastlane/runner.rb:49:in `block in execute' from /Users/felix/.rvm/gems/ruby-3.0.0/gems/fastlane-2.204.2/fastlane/lib/fastlane/runner.rb:45:in `chdir' from /Users/felix/.rvm/gems/ruby-3.0.0/gems/fastlane-2.204.2/fastlane/lib/fastlane/runner.rb:45:in `execute' from /Users/felix/.rvm/gems/ruby-3.0.0/gems/fastlane-2.204.2/fastlane/lib/fastlane/lane_manager.rb:47:in `cruise_lane' from /Users/felix/.rvm/gems/ruby-3.0.0/gems/fastlane-2.204.2/fastlane/lib/fastlane/command_line_handler.rb:36:in `handle' from /Users/felix/.rvm/gems/ruby-3.0.0/gems/fastlane-2.204.2/fastlane/lib/fastlane/commands_generator.rb:109:in `block (2 levels) in run' from /Users/felix/.rvm/gems/ruby-3.0.0/gems/commander-4.6.0/lib/commander/command.rb:187:in `call' from /Users/felix/.rvm/gems/ruby-3.0.0/gems/commander-4.6.0/lib/commander/command.rb:157:in `run' from /Users/felix/.rvm/gems/ruby-3.0.0/gems/commander-4.6.0/lib/commander/runner.rb:444:in `run_active_command' from /Users/felix/.rvm/gems/ruby-3.0.0/gems/fastlane-2.204.2/fastlane_core/lib/fastlane_core/ui/fastlane_runner.rb:124:in `run!' from /Users/felix/.rvm/gems/ruby-3.0.0/gems/commander-4.6.0/lib/commander/delegates.rb:18:in `run!' 
from /Users/felix/.rvm/gems/ruby-3.0.0/gems/fastlane-2.204.2/fastlane/lib/fastlane/commands_generator.rb:353:in `run' from /Users/felix/.rvm/gems/ruby-3.0.0/gems/fastlane-2.204.2/fastlane/lib/fastlane/commands_generator.rb:42:in `start' from /Users/felix/.rvm/gems/ruby-3.0.0/gems/fastlane-2.204.2/fastlane/lib/fastlane/cli_tools_distributor.rb:122:in `take_off' from /Users/felix/.rvm/gems/ruby-3.0.0/gems/fastlane-2.204.2/bin/fastlane:23:in `<top (required)>' from /Users/felix/.rvm/gems/ruby-3.0.0/bin/fastlane:23:in `load' from /Users/felix/.rvm/gems/ruby-3.0.0/bin/fastlane:23:in `<top (required)>' from /Users/felix/.rvm/gems/ruby-3.0.0/gems/bundler-2.2.31/lib/bundler/cli/exec.rb:58:in `load' from /Users/felix/.rvm/gems/ruby-3.0.0/gems/bundler-2.2.31/lib/bundler/cli/exec.rb:58:in `kernel_load' from /Users/felix/.rvm/gems/ruby-3.0.0/gems/bundler-2.2.31/lib/bundler/cli/exec.rb:23:in `run' from /Users/felix/.rvm/gems/ruby-3.0.0/gems/bundler-2.2.31/lib/bundler/cli.rb:478:in `exec' from /Users/felix/.rvm/gems/ruby-3.0.0/gems/bundler-2.2.31/lib/bundler/vendor/thor/lib/thor/command.rb:27:in `run' from /Users/felix/.rvm/gems/ruby-3.0.0/gems/bundler-2.2.31/lib/bundler/vendor/thor/lib/thor/invocation.rb:127:in `invoke_command' from /Users/felix/.rvm/gems/ruby-3.0.0/gems/bundler-2.2.31/lib/bundler/vendor/thor/lib/thor.rb:392:in `dispatch' from /Users/felix/.rvm/gems/ruby-3.0.0/gems/bundler-2.2.31/lib/bundler/cli.rb:31:in `dispatch' from /Users/felix/.rvm/gems/ruby-3.0.0/gems/bundler-2.2.31/lib/bundler/vendor/thor/lib/thor/base.rb:485:in `start' from /Users/felix/.rvm/gems/ruby-3.0.0/gems/bundler-2.2.31/lib/bundler/cli.rb:25:in `start' from /Users/felix/.rvm/gems/ruby-3.0.0/gems/bundler-2.2.31/exe/bundle:49:in `block in <top (required)>' from /Users/felix/.rvm/gems/ruby-3.0.0/gems/bundler-2.2.31/lib/bundler/friendly_errors.rb:103:in `with_friendly_errors' from /Users/felix/.rvm/gems/ruby-3.0.0/gems/bundler-2.2.31/exe/bundle:37:in `<top (required)>' from /Users/felix/.rvm/gems/ruby-3.0.0/bin/bundle:23:in `load' from /Users/felix/.rvm/gems/ruby-3.0.0/bin/bundle:23:in `<main>' The issue still occurs in the latest fastlane version 2.204.3. See my previous comment for the configuration and stack trace. The error message remains unchanged in the latest fastlane version 2.205.2. The error message remains unchanged in the latest fastlane version 2.208.0
gharchive/issue
2021-10-25T12:01:48
2025-04-01T04:34:14.614727
{ "authors": [ "FelixLisczyk", "bunuelcubosoto", "tiagomartinho" ], "repo": "fastlane/fastlane", "url": "https://github.com/fastlane/fastlane/issues/19522", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
599059616
[CI] Execute tests on macOS (Xcode 11.4.0, Ruby 2.6) Checklist [x] I've run bundle exec rspec from the root directory to see all new and existing tests pass [x] I've followed the fastlane code style and run bundle exec rubocop -a to ensure the code style is valid [x] I've read the Contribution Guidelines [x] I've updated the documentation if necessary. Motivation and Context CircleCI has released an Xcode 11.4 image. It is good to support the latest Xcode version that is available. Description Tests are now running on Xcode 11.4.0 instead of 11.3.0. That makes sense to me! Let's just replace 11.3 with 11.4 💪
gharchive/pull-request
2020-04-13T18:39:07
2025-04-01T04:34:14.619110
{ "authors": [ "joshdholtz", "tedgonzalez" ], "repo": "fastlane/fastlane", "url": "https://github.com/fastlane/fastlane/pull/16300", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
2458108421
Fix target selection in get_version_number Checklist [x] I've run bundle exec rspec from the root directory to see all new and existing tests pass [x] I've followed the fastlane code style and run bundle exec rubocop -a to ensure the code style is valid [x] I see several green ci/circleci builds in the "All checks have passed" section of my PR (connect CircleCI to GitHub if not) [x] I've read the Contribution Guidelines [ ] I've updated the documentation if necessary. [x] I've added or updated relevant unit tests. Motivation and Context get_version_number() tries to automatically select a target if non is specified. This automatic selection was changed in https://github.com/fastlane/fastlane/pull/12138 to only consider non-test targets, but failed to return the right non-test target and instead just returned the first of all targets. Depending on the order of targets in the Xcode project a test target might be returned. Description This PR changes the automatic target selection to return the first non-test target in case it is the only such target. It also adjusts the respective unit tests to use test targets that expose the fixed bug (i.e. position of non-test target in all targets). Testing Steps n/a targetA changed Sorry, but I don't understand what this means.
gharchive/pull-request
2024-08-09T14:52:54
2025-04-01T04:34:14.623946
{ "authors": [ "svenmuennich" ], "repo": "fastlane/fastlane", "url": "https://github.com/fastlane/fastlane/pull/22178", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
417790354
[WIP] Configuration file So, this effort is to implement the support of the config file for fastpack. Command-line arguments work reasonably well, but it is really tedious to use those in a cross-platform environment (read, Windows vs. *X). The idea here is to implement very basic JSON config file which effectively mirrors what we've got so far in the CLI. Here are TODO items in no particular order: [x] Add CLI argument -c somefile.json to read from [x] If not provided - read from the ./fastpack.json if it exists [ ] Support merging of the provided values from both sources: CLI & config file [ ] for scalar values (--no-cache, --development, --output etc.): CLI value wins over the config's one [ ] for list values (--node-modules, --preprocess): if config or CLI arguments are specified - those lists are concatenated (CLI one goes first), otherwise the default list is used [ ] for entryPoints: even this parameter is a list, but the strategy is essentially the same as for the scalar values [x] Remove --postprocess: it refers to an old implementation of the production mode, which we need to rework anyway [x] Remove --target: I feel like it was specified incorrectly from the very beginning. It should probably be Browser | Node | Electron? [ ] Add fpack explain-config command which will print out the parsed & merged config with required explanation and comments [ ] Use pastel.lib for terminal colors Maybe also "fastpack" field in package.json for those who have enough files in the folder? Thought about it some. What gets the priority in this case (we have 4 sources: package.json, fastpack.json, CLI, default values)? Use this for inspiration https://github.com/davidtheclark/cosmiconfig#cosmiconfig I think like this CLI to allow override all config options on the fly package.json as project config --config or fastpack.json as external config defaults Agreed I vote against fastpack field in package.json — let's have a single way for fastpack to discover config — via files. We can revisit it later but I think having config in separate files is just more flexible — you can have multiple of them and choose the one via -c/--config option. You cannot do anything with "fastpack" section in package.json — it's just some default location, another one which adds to confusion.
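To make the merge rules above concrete, here is a language-agnostic sketch (written in Python purely for illustration; fastpack itself is OCaml, and the option names below are hypothetical, not the project's real API):

```python
# Hypothetical sketch of the proposed CLI/config merge semantics:
# - scalars: CLI value wins over the config file's value, else default
# - lists: CLI and config lists are concatenated (CLI first), else default
# - entryPoints: a list, but treated like a scalar

def merge_options(cli: dict, config: dict, defaults: dict) -> dict:
    merged = dict(defaults)
    for key, default in defaults.items():
        cli_val = cli.get(key)
        cfg_val = config.get(key)
        if key == "entryPoints":
            merged[key] = cli_val or cfg_val or default
        elif isinstance(default, list):
            provided = (cli_val or []) + (cfg_val or [])
            merged[key] = provided if provided else default
        else:
            merged[key] = cli_val if cli_val is not None else (
                cfg_val if cfg_val is not None else default
            )
    return merged
```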
gharchive/pull-request
2019-03-06T12:57:41
2025-04-01T04:34:14.656326
{ "authors": [ "TrySound", "andreypopp", "zindel" ], "repo": "fastpack/fastpack", "url": "https://github.com/fastpack/fastpack/pull/156", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1511470293
[bug] yellow color does not take effect in PowerShell Env: OS: Windows 10 go: 1.17.2 shell: PowerShell Demonstration picture: https://gitee.com/fanybook/thirdparty-bugs/blob/master/github.com/fatih/color/yellow_powershell_bug_20221227140205.png I am having the same problem when trying to get it to show yellow in PowerShell. Here is what I am running: Windows 10, PowerShell 5.1.19041.2364, Go 1.20.1. If you need any more info, let me know!
gharchive/issue
2022-12-27T06:09:34
2025-04-01T04:34:14.673338
{ "authors": [ "Hunter-Pittman", "fanybook" ], "repo": "fatih/color", "url": "https://github.com/fatih/color/issues/176", "license": "mit", "license_type": "permissive", "license_source": "bigquery" }
170066545
Efficiency is weak Could we implement some of these ideas: https://www.reddit.com/r/pokemongodev/comments/4wqkeg/efficient_server_sparing_scanning/ The basic idea is to cache spawn points. We know they spawn hourly, so we just need to do a full scan for an hour to completely map the spawn points in an area. Then we can scan even faster by scanning only the necessary spawn points during any given minute. Well, the scanning goes quite fast, but using spawn points could make you miss Pokémon. Also, the number of spawn points in a decently populated area is far more than the number of locations a scanner like this uses. Also, as a rural user, I find that new spots pop up about once a week. I guess I'm thinking about the "bigger picture". The truth is Niantic is going to keep fighting back as long as we are spamming their server with a ridiculously large number of requests. If we created a map of spawn points we would have that information forever, without the need for constantly bombarding their API. If we can ease up on requests, they'll stop fighting us. Probably once a day you'd want to re-scan your area for new spawn points. That should take 1 hour. The other 23 hours of the day you would only need to make a small fraction of the requests we're currently making. There are a couple of other projects doing that already, but none as good as this one. It's a feature that every dev should be aiming for. A script like this, which only relies on constant requests and makes no effort to improve in efficiency, will get a C&D sooner than an app which limits server requests by caching. Read that Reddit post I linked; that guy has the right idea and already released a script that does those things, but it's not bundled with any map or notifications. @khag7 Every project which uses the API is bound to be C&D'd, whether we make tons of requests or not. I do get that you think this might be better, but it's not, because it will miss Pokémon, and the number of spawn points could create more requests than what we are doing, depending on the location you are scanning. Urban areas have dozens and dozens of spawn points, which would make it less efficient. #199
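To illustrate the caching idea (a rough Python sketch; the spawn-point record shape and timing fields are assumptions for illustration, not pogom's actual data model):

```python
import time

# Hypothetical spawn-point cache: {(lat, lng): seconds past the hour when it spawns}
spawn_cache = {}

def record_spawns(scan_results):
    """Populate the cache during the initial one-hour full scan."""
    for s in scan_results:
        spawn_cache[(s["lat"], s["lng"])] = s["time"] % 3600

def points_due_now(window=60):
    """Return only the spawn points expected to trigger within the next `window` seconds."""
    now = time.time() % 3600
    return [loc for loc, t in spawn_cache.items() if (t - now) % 3600 < window]
```

With a cache like this, the scanner would only need to visit the handful of locations returned by points_due_now() each minute, plus an occasional full re-scan to pick up new spots.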
gharchive/issue
2016-08-09T02:11:44
2025-04-01T04:34:14.711119
{ "authors": [ "Lionir", "ShutUpPetey", "khag7", "nborrmann" ], "repo": "favll/pogom", "url": "https://github.com/favll/pogom/issues/125", "license": "mit", "license_type": "permissive", "license_source": "bigquery" }
1994300792
Add HTTP request best practices https://www.loginradius.com/blog/engineering/tune-the-go-http-client-for-high-performance/ (Timeout) https://mailazy.com/blog/http-request-golang-with-best-practices/ (Query Args) Implemented HTTP client reuse and timeouts. Added Multi IoT Agent support and device management.
gharchive/issue
2023-11-15T08:37:17
2025-04-01T04:34:14.747024
{ "authors": [ "fbuedding" ], "repo": "fbuedding/iota-admin", "url": "https://github.com/fbuedding/iota-admin/issues/4", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
147219396
'nb_epoch' affect the accuracy in the 1st epoch train_set_x, train_set_y, valid_set_x, valid_set_y, test_set_x, test_set_y = load_data() model = Sequential() # input: 6 64x64 images -> (6, 64, 64) tensors. # this applies 3 convolution filters of size 5x5, 3x3, 3x3 model.add(Convolution2D(37, 5, 5, input_shape = (6, 64, 64), activation='relu')) model.add(MaxPooling2D(pool_size=(2, 2))) # 64 - 5 + 1 = 60 / 2 = 30 model.add(Convolution2D(256, 3, 3, activation='relu')) model.add(MaxPooling2D(pool_size=(2, 2))) model.add(Dropout(0.25)) # 30 - 3 + 1 = 28 / 2 = 14 model.add(Convolution2D(800, 3, 3, border_mode='valid', activation='relu')) model.add(MaxPooling2D(pool_size=(2, 2))) model.add(Dropout(0.25)) # 14 - 3 + 1 = 12 / 2 = 6 model.add(Flatten()) model.add(Dense(1200, activation = 'relu')) model.add(Dropout(0.5)) model.add(Dense(82, activation = 'sigmoid')) sgd = SGD(lr = 0.1, decay = 1e-6, momentum=0.9, nesterov = True) model.compile(optimizer = sgd, loss = 'binary_crossentropy') model.fit(train_set_x, train_set_y, batch_size = 400, nb_epoch = 30, validation_data = (valid_set_x, valid_set_y), show_accuracy=True) When nb_epoch is 30, I got loss: 1.8462 - acc:0.3767 on Epoch 1/30 When nb_epoch is 150, I got loss: 1.5073 - acc:0.1432 on Epoch 1/150. Is this normal? Do you fix the random seed? import numpy as np np.random.seed(1337) @tboquet Thanks, it works and how the random seed affect the result? Your data is being shuffled during training. Fixing the random seed, you have the same sequence of mini-batches every time you run the code. @tboquet Thank you so much.
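For anyone landing here, a minimal sketch of the seeding advice above; the key point is to seed before Keras builds any layers (backend-level seeding and GPU nondeterminism can still cause small differences):

```python
import numpy as np
np.random.seed(1337)   # seed NumPy before any Keras import/build: fit() shuffles batches with this RNG

import random
random.seed(1337)      # Python's stdlib RNG, seeded for good measure

# Optional, backend-specific (assumption: TensorFlow backend of that era):
# import tensorflow as tf; tf.set_random_seed(1337)

from keras.models import Sequential   # only now build the model as in the snippet above
```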
gharchive/issue
2016-04-10T12:10:36
2025-04-01T04:34:14.753974
{ "authors": [ "chaiyujin", "tboquet" ], "repo": "fchollet/keras", "url": "https://github.com/fchollet/keras/issues/2248", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
237156615
I set the layer.trainable = False but the param still changed after train,why? Please make sure that the boxes below are checked before you submit your issue. If your issue is an implementation question, please ask your question on StackOverflow or join the Keras Slack channel and ask there instead of filing a GitHub issue. Thank you! [x] Check that you are up-to-date with the master branch of Keras. You can update with: pip install git+git://github.com/fchollet/keras.git --upgrade --no-deps [x] If running on TensorFlow, check that you are up-to-date with the latest version. The installation instructions can be found here. [ ] If running on Theano, check that you are up-to-date with the master branch of Theano. You can update with: pip install git+git://github.com/Theano/Theano.git --upgrade --no-deps [ ] Provide a link to a GitHub Gist of a Python script that can reproduce your issue (or just copy the script here if it is short). when i was building model : model = ResNet50(include_top=False, input_tensor=preproc) mid_start = model.get_layer('res5b_branch2a') all_layers = model.layers for i in range(model.layers.index(mid_start)): all_layers[i].trainable = False mid_out = model.layers[model.layers.index(mid_start) - 1] rn_top = Model(model.input, mid_out.output) then i add my own layers: model = Sequential() model.add(rn_top) model1 = AveragePooling2D((7, 7))(model.output) model1 = identity_block(model1, 3, [512, 512, 2048], stage=5, block='b') model1 = identity_block(model1, 3, [512, 512, 2048], stage=5, block='c') model1 = Flatten()(model1) model1 = Dense(1024)(model1) model1 = BatchNormalization()(model1) model1 = Dropout(0.25)(model1) model1 = Dense(1024)(model1) model1 = BatchNormalization()(model1) model1 = Dropout(0.2)(model1) x_class = Dense(2, activation='softmax', name='class')(model1) final_model = Model(rn_top.input, x_class) then i init two new Resnet50 models ,and put final_model.layers[0:155](res5b_branch2a is the 155 layer in Resnet50)'s param to one of the models. then predict the output of the res5b_branch2a layer get featuremap1 and featuremap2. finally , i use numpy.allclose(featuremap1, featuremap2), but i get False, why? And when i run final_model.summary() i get : Total params: 26,745,730 Trainable params: 12,085,250 Non-trainable params: 14,660,480 Because you are using batch norm, which updates its internal state regardless of trainability status (i.e. it updates its batch statistics, which are non-trainable). This is expected behavior. Note to those who come across this BatchNormalization now properly locks when trainable=False.
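A small self-contained sketch of the explanation above, using a toy model rather than the ResNet50 setup; it shows that the frozen kernel stays put while the BatchNormalization moving statistics drift on Keras versions from this era (newer versions lock BN when trainable=False):

```python
import numpy as np
from keras.models import Sequential
from keras.layers import Dense, BatchNormalization

model = Sequential()
model.add(Dense(8, input_shape=(4,), trainable=False))   # frozen dense layer
model.add(BatchNormalization(trainable=False))           # "frozen" BN; old Keras still updates its stats
model.add(Dense(1, activation='sigmoid'))
model.compile(optimizer='sgd', loss='binary_crossentropy')

x = np.random.rand(64, 4)
y = np.random.randint(0, 2, size=(64, 1))

dense_before = model.layers[0].get_weights()[0].copy()
bn_stats_before = [w.copy() for w in model.layers[1].get_weights()[2:]]  # moving mean/variance

model.fit(x, y, epochs=1, verbose=0)

print(np.allclose(dense_before, model.layers[0].get_weights()[0]))   # True: frozen kernel does not move
print(all(np.allclose(b, a) for b, a in
          zip(bn_stats_before, model.layers[1].get_weights()[2:])))   # False on old Keras: BN stats updated anyway
```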
gharchive/issue
2017-06-20T10:02:52
2025-04-01T04:34:14.760382
{ "authors": [ "ahundt", "fchollet", "hanzy123" ], "repo": "fchollet/keras", "url": "https://github.com/fchollet/keras/issues/7051", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
240189403
Stacked Autoencoder @fchollet 's blog : Building Autoencoders in Keras. In the Let's build the simplest possible autoencoder section, the author provided a demo: from keras.layers import Input, Dense from keras.models import Model encoding_dim = 32 input_img = Input(shape=(784,)) encoded = Dense(encoding_dim, activation='relu')(input_img) decoded = Dense(784, activation='sigmoid')(encoded) autoencoder = Model(input_img, decoded) encoder = Model(input_img, encoded) encoded_input = Input(shape=(encoding_dim,)) decoder_layer = autoencoder.layers[-1] decoder = Model(encoded_input, decoder_layer(encoded_input)) autoencoder.compile(optimizer='adadelta', loss='binary_crossentropy') from keras.datasets import mnist import numpy as np (x_train, _), (x_test, _) = mnist.load_data() autoencoder.fit(x_train, x_train, epochs=50, batch_size=256, shuffle=True, validation_data=(x_test, x_test)) encoded_imgs = encoder.predict(x_test) decoded_imgs = decoder.predict(encoded_imgs) questions: 1, Why do we not use decode_imgs = autoencoder.predict(x_test) to obtain the reconstructed x_test? 2, The encoder and decoder model are not trained, why can we use them to map the data directly? Thanks! The encoder was built for the purpose of explaining the concept of using an encoding scheme as the first part of an autoencoder. The encoder was created with the instruction, "Let's also create a separate encoder model:". This tells you that although it's not required, we are still creating it. I'm not sure what you mean by "map the data". Which line are you referring to as "mapping the data"? @voletiv thanks for your reply. In my opinion, why we can use decode_imgs = autoencoder.predict(x_test) is because we fitted it before the prediction. while in this demo, the encoder and decoder are not fitted before prediction. @Bjoux2 Ok I understand your doubt. When we defined autoencoder as autoencoder = Model(input_img, decoded), we simply name that sequence of layers that maps input_img to decoded as a "autoencoder". If you are familiar with C/C++, this is like a pointer. So, when you run autoencoder.fit(x_train, x_train,..., you are training the weights corresponding to the layers whom you have named "autoencoder". Similarly, when you run encoder = Model(input_img, encoded), you are only naming the sequence of layers that maps input_img to encoded. Going by the pointer analogy, the name "encoder" simply points to the same set of layers as the first half of the name "autoencoder". So when you run autoencoder.fit(x_train, x_train,..., the "encoder" layers are being trained. Got it? @voletiv Got it, thanks, it is really helpful.
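A tiny sketch of the "pointer" analogy above, with arbitrary shapes instead of MNIST; both names refer to the same Dense layer object, so fitting the autoencoder moves the encoder's weights:

```python
import numpy as np
from keras.layers import Input, Dense
from keras.models import Model

inp = Input(shape=(8,))
hidden = Dense(3, activation='relu')
decoded = Dense(8, activation='sigmoid')

autoencoder = Model(inp, decoded(hidden(inp)))
encoder = Model(inp, hidden(inp))        # reuses the same Dense(3) layer object, no new weights

autoencoder.compile(optimizer='adadelta', loss='binary_crossentropy')

x = np.random.rand(32, 8)
w_before = encoder.layers[1].get_weights()[0].copy()
autoencoder.fit(x, x, epochs=1, verbose=0)
w_after = encoder.layers[1].get_weights()[0]

print(autoencoder.layers[1] is encoder.layers[1])   # True: one layer, two "names"
print(np.allclose(w_before, w_after))               # False: training the autoencoder trained the encoder
```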
gharchive/issue
2017-07-03T14:28:42
2025-04-01T04:34:14.766182
{ "authors": [ "Bjoux2", "voletiv" ], "repo": "fchollet/keras", "url": "https://github.com/fchollet/keras/issues/7220", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
260012187
Progress Bar Logging not displaying properly I have confirmed that I am running the master branch of keras and tensorflow. I am running on tensorflow-gpu on Windows 10 with Python 3.6.1. Most of the time running model.fit with verbose=1 properly displays the progress bar logging. However, randomly the progress bar logging goes awry and prints out every update (see image below) -- is there any known cause of this behavior? Typically if I close down my jupyter notebook and restart, the progress bar logging will work the next time (and then eventually cut out). Ultimately, when this issue arises I can just switch to verbose=2, but then I am never quite sure how much longer an epoch will take so it is less than ideal. Does anybody have any idea why this might be happening? I run K.clear_session() to try to reset everything, but not sure why this progress bar would run into issues. I'm also facing a similar issue with Keras in Windows 10 and Chrome. In Edge the logging is working fine. Interesting - I had not considered that this was specific to Chrome. @jithurjacob does this always happen to you? Mine appears to sometimes work well if I restart a new kernel, but then eventually it runs into issues. Did you find any way around other than using edge? @jcomfort4 I tried restarting the kernel but still, the issue persists in chrome. In Edge, it is working fine. Mine is still having issues in both Edge & Chrome. Looking at your screenshot it looks like we are facing separate issues. You can fix the problem simply by making your terminal window bigger @ramadawn This does work if I'm running from command line, but my issue is within jupyter notebook and I haven't found a great way to make the jupyter notebook output window larger. I also have this exact issue, when running keras on jupyter notebook on firefox 58 on Windows 10 x64, anaconda python 3.6. Have not been able to find a fix or workaround, except using tqdm-keras for methods with callbacks. Interestingly, I do not have this issue running the same versions on Ubuntu. I have a similar issue, currently i'm training my model using the following model.fit history = finetune_model.fit_generator(train_generator, epochs=NUM_EPOCHS, workers=1, steps_per_epoch=num_train_images // batch_size, validation_data=(x_val, y_val_)) And also i'm using the docker image from dockerhub tensorflow/tensorflow:1.15.0-gpu-py3-jupyter Here is the current showing: Epoch 38/40 61/62 [============================>.] - ETA: 0s - loss: 0.4109 - acc: 0.9536Epoch 1/40 420/62 [===========================================================================================================================================================================================================] - 2s 4ms/sample - loss: 0.6136 - acc: 0.7190 However in Colaboratory, the output is this: Epoch 38/40 62/62 [==============================] - 13s 212ms/step - loss: 0.4069 - acc: 0.8997 - val_loss: 0.7886 - val_acc: 0.752 Any idea how to fix it?
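Two workarounds that follow from the thread, sketched below with model, x_train and y_train standing in for your own objects; the second one assumes the third-party keras-tqdm package (not part of Keras itself):

```python
# Option 1: one summary line per epoch, avoids the broken in-place progress bar
model.fit(x_train, y_train, epochs=10, verbose=2)

# Option 2: notebook-friendly progress bars via the third-party keras-tqdm package
# (pip install keras-tqdm); silence Keras's own bar and let the callback draw instead
from keras_tqdm import TQDMNotebookCallback
model.fit(x_train, y_train, epochs=10, verbose=0,
          callbacks=[TQDMNotebookCallback()])
```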
gharchive/issue
2017-09-23T14:35:45
2025-04-01T04:34:14.773663
{ "authors": [ "ambigus9", "guifereis", "jcomfort4", "jithurjacob", "ramadawn" ], "repo": "fchollet/keras", "url": "https://github.com/fchollet/keras/issues/7970", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
160219582
Fix tf-idf Fix #2974 Sounds reasonable
gharchive/pull-request
2016-06-14T16:01:11
2025-04-01T04:34:14.774676
{ "authors": [ "fchollet", "henry0312" ], "repo": "fchollet/keras", "url": "https://github.com/fchollet/keras/pull/2980", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
1308121106
Handle upsert for update operations Right now we only support upsert for replacement documents. Research and implement it for other operations, like $set or $inc. Fixed by #25
gharchive/issue
2022-07-18T15:20:46
2025-04-01T04:34:14.813460
{ "authors": [ "fcoury" ], "repo": "fcoury/oxide", "url": "https://github.com/fcoury/oxide/issues/24", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
1404438065
adds line chart Added the basic line chart with the category colour feature. Thanks for the work @utopian-monkey. Could you do a git merge main to include a fix I did on the CI checks? Also, a good coding practice would be to use pre-commits locally (pre-commit run -a) to make sure we use the same coding standards. I'll do a full code review asap. Axis labels look a bit confusing. Maybe you could add "type": "quantitative" to both the x and y axes? Sure, will do these changes asap. Sure, will do these changes asap. Thanks a lot! I just added a line in my .github/workflows/main.yml; I think it should activate the GitHub Actions in here too if you add it. Please check, I've updated. Any updates? Any updates? Could you update the code with the suggested changes above? I have already, please check this commit.
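For reference, a hedged sketch of what the suggested encoding change could look like in a Streamlit app; the field names and data are made up and may not match this repo's actual spec:

```python
import pandas as pd
import streamlit as st

df = pd.DataFrame({"x": [0, 1, 2, 3], "y": [1.0, 3.5, 2.2, 4.8], "category": list("aabb")})

st.vega_lite_chart(df, {
    "mark": "line",
    "encoding": {
        "x": {"field": "x", "type": "quantitative"},   # explicit type keeps axis labels sane
        "y": {"field": "y", "type": "quantitative"},
        "color": {"field": "category", "type": "nominal"},
    },
})
```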
gharchive/pull-request
2022-10-11T11:00:14
2025-04-01T04:34:14.822543
{ "authors": [ "fdebrain", "utopian-monkey" ], "repo": "fdebrain/streamlit-vega-lite-charts", "url": "https://github.com/fdebrain/streamlit-vega-lite-charts/pull/9", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
2712266319
Spike 1 - Analysis Analysis / Merry ballenbak bubbelgum scroll-driven animation 🌠 Step-by-step plan Do preliminary research and determine the elements of your style. Use screenshots or Pinterest to put together a collection. Include a collage in the relevant issue. Sketch a wireflow for your concept; optionally use the morphological chart. Think about which interactions, animations and sound support you need. Include your sketches in the relevant issue. Make a breakdown sketch in which you investigate how you can implement your concept with HTML, CSS and JavaScript. Discuss your sketch with teammates or teachers. Include your breakdown sketch in the relevant issue as well. Implement your concept in HTML/CSS/JS and SvelteKit. Describe in the relevant issue how you get the most important things working. Test thoroughly and describe differences in performance in the relevant issue. Create a pull request that includes the most important points from the issues above. (NB: do not merge the PR yet!) Publish your project via Vercel or Netlify alongside the existing site (you then select your separate branch). Have your concept tested by the tribe in a user test during the Design & Code review! Include remarks in the pull request. Apply the insights you gained and add them as commits to the pull request. Preliminary research Do preliminary research and determine the elements of your style. Use screenshots or Pinterest to put together a collection. Include a collage in the relevant issue. Next step ⏭ / Spike 1 - Ontwerpen (Design)
gharchive/issue
2024-12-02T15:06:13
2025-04-01T04:34:14.827961
{ "authors": [ "Jason2426" ], "repo": "fdnd-agency/voorhoede", "url": "https://github.com/fdnd-agency/voorhoede/issues/122", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
451226520
Change _id to id Steps to reproduce const options = { name: 'pipeline', Model, paginate, id: 'id' }; // Initialize our service with any options it requires app.use('/pipeline', createService(options)); Expected behavior The service should use the _id property as "id" when querying and creating new elements. Actual behavior It doesn't work: the service keeps creating documents with the "_id" property and returning the id as "_id" when querying. Am I doing something wrong? System configuration Module versions "@feathersjs/feathers": "version": "3.3.1" "feathers-mongoose": "version": "7.3.2", "mongoose": "version": "5.5.12" NodeJS version: v10.12.0 An _id property will always be generated by MongoDB. When changing the property, the adapter will use that id field for get and other queries, however. Keep in mind that custom id field values will not be generated automatically. Oh, I get it; I thought that the "id" option was for specifying the "_id" alias. Thanks for the answer @daffl. Do you have any idea of how I should do it then? I just need an alias for the _id property because I'm migrating my project from feathers-knex to feathers-mongoose and my frontend already uses the property "id". I've tried putting a Mongoose alias in the schema, but it didn't work either. @daffl I tried to use these two libraries but without result. Is this issue related to this other issue? Libraries to change _id to id on the response (not in the DB): https://github.com/meanie/mongoose-to-json https://github.com/abranhe/normalize-mongoose
gharchive/issue
2019-06-02T19:36:45
2025-04-01T04:34:14.877581
{ "authors": [ "daffl", "giancarllorojas", "matiaslopezd" ], "repo": "feathersjs-ecosystem/feathers-mongoose", "url": "https://github.com/feathersjs-ecosystem/feathers-mongoose/issues/325", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
2004765768
chore(devimint): increase bitcoin rpc timeout on tests Due to this timeout some tests were failing on my local machine. There is no reason for it to be so low. LGTM, but I kind of expect the actual problem here is we're hitting default limit of rpc thread on bitcoind. We should increase it to something like 64, and maybe also add some retries. LGTM, but I kind of expect the actual problem here is we're hitting default limit of rpc thread on bitcoind (AFAIR: 4). We should increase it to something like 64, and maybe also add some retries. I'm not sure this is the issue, but increased rpcworkqueue and rpcthreads anyway
gharchive/pull-request
2023-11-21T17:05:17
2025-04-01T04:34:14.914960
{ "authors": [ "douglaz", "dpc" ], "repo": "fedimint/fedimint", "url": "https://github.com/fedimint/fedimint/pull/3674", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
2024859212
fix(gateway): disconnect before changing the network Closes https://github.com/fedimint/fedimint/issues/3806 It would be good to have a test for that scenario, but let's just solve the problem for now. Successfully created backport PR for releases/v0.2: #3835
gharchive/pull-request
2023-12-04T22:16:00
2025-04-01T04:34:14.916841
{ "authors": [ "douglaz", "fedimint-backports" ], "repo": "fedimint/fedimint", "url": "https://github.com/fedimint/fedimint/pull/3833", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1406652912
Log issue Regarding issue #751 @okjodom is adding more log messages for debugging purposes a general thing which should be done? I like more targeted logs. They're helpful! Provided we have a good pattern for selecting log level based on context, my vote is on adding more logs... Codecov Report Base: 59.16% // Head: 59.11% // Decreases project coverage by -0.04% :warning: Coverage data is based on head (299ef9d) compared to base (9baa391). Patch coverage: 25.00% of modified lines in pull request are covered. Additional details and impacted files @@ Coverage Diff @@ ## master #779 +/- ## ========================================== - Coverage 59.16% 59.11% -0.05% ========================================== Files 101 101 Lines 16136 16146 +10 ========================================== - Hits 9547 9545 -2 - Misses 6589 6601 +12 Impacted Files Coverage Δ client/cli/src/main.rs 0.29% <0.00%> (-0.01%) :arrow_down: client/client-lib/src/lib.rs 85.48% <14.28%> (-0.58%) :arrow_down: client/client-lib/src/api.rs 83.33% <42.85%> (-0.81%) :arrow_down: client/client-lib/src/query.rs 91.57% <0.00%> (-3.16%) :arrow_down: core/api/src/server.rs 39.75% <0.00%> (-0.71%) :arrow_down: fedimint-server/src/lib.rs 86.08% <0.00%> (-0.37%) :arrow_down: fedimint-server/src/consensus/mod.rs 92.12% <0.00%> (-0.24%) :arrow_down: core/api/src/encode.rs 13.36% <0.00%> (+0.13%) :arrow_up: modules/fedimint-mint/src/lib.rs 91.40% <0.00%> (+0.18%) :arrow_up: fedimint-server/src/net/peers.rs 91.78% <0.00%> (+2.05%) :arrow_up: Help us with your feedback. Take ten seconds to tell us how you rate us. Have a feature suggestion? Share it here. :umbrella: View full report at Codecov. :loudspeaker: Do you have feedback about the report comment? Let us know in this issue.
gharchive/pull-request
2022-10-12T18:48:54
2025-04-01T04:34:14.931532
{ "authors": [ "Maaxxs", "codecov-commenter", "okjodom" ], "repo": "fedimint/fedimint", "url": "https://github.com/fedimint/fedimint/pull/779", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
270768158
add some requirements versions
- specify minimum versions of the oc CLI and Homebrew
- specify Go paths using the GOPATH envvar
- use the 'go get...' command instead of 'git clone...' to clone into $GOPATH
- Add brackets () around the ansible installer command so that you are not left in the installer directory if the playbook fails

The Homebrew version I specified is recent (from the last few days) so it might be a bit excessive. The install on my mac is still not working, so there may still be some version issues. I've replaced the git clone commands and removed go get. I think it still needs a branch in the instructions for using go and not using go, since the current (and previous) instructions require that the user is familiar with the go directory structure, but that would be better as a separate PR. @mmurphy I've pushed a few changes, if you could take a look b33969a Thanks @mmurphy :+1:
gharchive/pull-request
2017-11-02T18:56:06
2025-04-01T04:34:15.006536
{ "authors": [ "david-martin", "mmurphy" ], "repo": "feedhenry/mcp-standalone", "url": "https://github.com/feedhenry/mcp-standalone/pull/196", "license": "apache-2.0", "license_type": "permissive", "license_source": "bigquery" }
1163392142
Readme should specify Python version (Either 2 or 3, I'm assuming this doesn't work on one of them) thanks for heads up. fixed in 9cdbeafdae5332ff301fd9cdd7d659f78f1be967
gharchive/issue
2022-03-09T01:39:33
2025-04-01T04:34:15.013007
{ "authors": [ "ditchfieldcaleb", "sslivkoff" ], "repo": "fei-protocol/checkthechain", "url": "https://github.com/fei-protocol/checkthechain/issues/2", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
310799557
Resize image on the fly problem Hello! I like your lightbox very much, but I have some problems with it. The lightbox doesn't get any image when the image's link looks like http://path/img.php?id='1234125'. I tried to use a filter option ({filter: '/.+\\.(gif|jpe?g|png|webp|php)/i'} and {filter: '/.+\\.(gif|jpe?g|png|webp|php\?id)/i'}), but that didn't work. Nevertheless the link is working and I get an image in the browser. Could you make it clear for me, please. Hello! Thank you very much for your advice! I'm glad that you have found a mistake in the regex in my code. The IDE made a double slash when I pasted the regex from README.md.
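As an illustrative aside, a minimal sketch of a filter that accepts query-string URLs such as img.php?id=... is shown below. It assumes the standard baguetteBox.run(selector, options) entry point and the filter option named in this issue; the gallery selector and extension list are purely illustrative, and the single backslash before the dot is the fix for the doubled one mentioned above.

```ts
import baguetteBox from 'baguettebox.js'; // published npm package name

// Accept links whose path ends in an image extension or in .php,
// optionally followed by a query string such as ?id=1234125.
const imageFilter = /.+\.(gif|jpe?g|png|webp|php)(\?.*)?$/i;

baguetteBox.run('.gallery', {
  filter: imageFilter, // option discussed in this issue; '.gallery' is a hypothetical selector
});
```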
gharchive/issue
2018-04-03T11:35:01
2025-04-01T04:34:15.024330
{ "authors": [ "BorisZyryanov" ], "repo": "feimosi/baguetteBox.js", "url": "https://github.com/feimosi/baguetteBox.js/issues/177", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
164301263
[Request] More Railcraft compat. An upgrade for the computer cart to let it set/get the routing value, also events for track like locomotive track and whistle track. (So I could, say, fire computer.beep() on an active whistle track, also move to a specific destination using railcraft routing) Sorry. Currently I don't have much time. But the idea is good. I will try to add it as soon as possible.
gharchive/issue
2016-07-07T12:50:26
2025-04-01T04:34:15.048841
{ "authors": [ "cloakable", "feldim2425" ], "repo": "feldim2425/OC-Minecarts", "url": "https://github.com/feldim2425/OC-Minecarts/issues/14", "license": "mit", "license_type": "permissive", "license_source": "bigquery" }
1895552326
Testing the project / Ideas to move forward with the installer I'm thinking in different options to make the installer easier and faster. I need ideas. Let's share some feedback. you mean installer script or something else? Yeah .. there's something wrong during the installation ... I'm trying to figure out how to make it faster and easier and effective in AUR ... yesterday I made some changes ... could you please test it ? If you have different ideas 💡 please let me know ... HI i tested these errors am getiing With raw pkgbuild you wrote with pyinstaller Here for test i preinstalled pyinstaller from aur before installing term-pdf and this error i got I just commented these codes before start installed in pkgbuild like this # if ! command -v pyinstaller &>/dev/null; then # pip install pyinstaller # fi You don't get it. I know there are errors. I need ideas as I said to make a better installer ... that's what I need. Ideas. and 1 think to ask how yo0u compiled the project in your system ? can u tell the process maybe that can help to get more packing idea the same way I tried to make it through AUR now ... but in AUR, is not working ... perhaps I need to go with a different approach .... I don't know, I need ideas ... lots of it ---- this project could be really interesting.. I'm always reading pdf files and to have the ability to read on the terminal would make my life really easy ... perhaps for many other people too ... so I guess it's promessing ... I just need to clear the path for a smooth installation ... . can u try this type things with venv enviroment https://git.pardesicat.xyz/opensource/blackarch-pkgbuilds/src/branch/master/PKGBUILD-python-standalone https://git.pardesicat.xyz/opensource/blackarch-pkgbuilds/src/branch/master/python-standalone.install have you done it in the past in a practical project of yours ? ... not mine project but here once i proposed this tool to add Blackarch so for demo i did with this tool as u can see https://git.pardesicat.xyz/pardesicat/blackarch-ddosattck-pkgbuild this one i tried with this pkgbuild which works fine https://git.pardesicat.xyz/pardesicat/blackarch-objection sometimes i do this kind of tesing stuffs go gain experience Very interesting !. Thanks. I'm thinking maybe to use a different programming language though .. idk .. I'm analysing any kind of alternatives .. ahhh all the best .... Are you good in programming .. software engineering..? no ,,, am no programmer .. infact am not a CS student am trying to learn cybersecurity by myself I see, but I deeply respect your self-driven learning and your insatiable thirst for knowledge!- :) Hei thanks... for the compliment. :) Hey there! Could you please test it? I found a way to make it work in any folder, which was my main and first idea. Actually, I created this project for my personal use, as it makes my life much easier. I prefer running terminals over other things—cleaner and uses less memory! I know there are things I need to check afterward to ensure it's working fine and not just an experimental project. However, I believe it should work in any folder now, just to scan and read PDFs in the terminal. Could you please test it? Clean the cache with: yay -Scc, and then reinstall it with: yay -S term-notes. I hope it's working on your machine. I tested it on mine, and it's working well. Thanks! Hi install was successfull but when i run its showing this Hi install was successfull but when i run its showing this Very strange on my machine I have no errors. 
Reading the prompt it could be an memory related issue .. I'll review the code now, debug it, and update the code in two important files that are executed. I'm a few minutes I'll let you know. Thanks ! PD.: it's not necessary a screenshot next time .. just pasting the output or the error here it's much better ... so I can copy easily and dive into errors. Thanks ! Alright ! @PardesiCat I changed a few things in the code, some other things related with the logic... (this is experimental.. right now the most important thing it's to make it work and find a practical logical path... for me, and the project.) And, It's working now, in two machines with Arch ... and even in MacOS with the bash installer .... Please could you please check it and test it on your system ? Remember, Clean the cache with: yay -Scc, and then reinstall it with: yay -S term-pdf ! :) right now the latest version is 0.0.3.9 ! :) Thanks a lot! Cheers! If it's working, to celebrate, you can share a delightful screenshot with the program running and showcasing your desktop. I love to see how Linux desktops are set up though ! 😊 HI i was already testing the .0.0.3.8 there some thing 2nd line install dm750 (not working and new i just notice u updated it before i comment and new hash sha256sums=('ff15c7a79a4b13eb067e28408e18c7333b609b7d76f2bb3039072df9125688ab') there 1thing i recommonded while i tested add gcc now in depends and chmod +x after install here i putted the working PKGBUILD and now everything working fine im putting the pkbuild i changed
PKGBUILD
# Mantenedor: Felipe Alfonso Gonzalez <f.alfonso@res-ear.ch>
pkgname=term-pdf
pkgver=0.0.3.8
pkgrel=1
pkgdesc="TermPDF Viewer is an open-source PDF file viewer designed to run in the terminal."
arch=('x86_64')
url="https://github.com/felipealfonsog/TermPDFViewer"
license=('MIT')
depends=('python-pip' 'python-pymupdf' 'gcc' )
source=("https://github.com/felipealfonsog/TermPDFViewer/archive/refs/tags/v.${pkgver}.tar.gz")
sha256sums=('ff15c7a79a4b13eb067e28408e18c7333b609b7d76f2bb3039072df9125688ab')

prepare() {
  tar xf "v.${pkgver}.tar.gz" -C "$srcdir" --strip-components=1
  # cp "$srcdir"/term-pdf-wrp.c "$srcdir"/TermPDFViewer-v."$pkgver"/src/
}

build() {
  cd "$srcdir"/TermPDFViewer-v."${pkgver}"
  gcc -o term-pdf-wrp "$srcdir"/TermPDFViewer-v."${pkgver}"/src/term-pdf-wrp.c
}

package() {
  install -Dm755 "$srcdir"/TermPDFViewer-v."${pkgver}"/src/term-pdf-wrp "${pkgdir}/usr/bin/term-pdf"
  chmod +x /usr/bin/term-pdf
  install -Dm755 "$srcdir"/TermPDFViewer-v."${pkgver}"/src/termpdf.py "${pkgdir}/$HOME/.config/termpdf.py"
}
I am glad and happy to be part of your project. I have some suggestions to give. I will put them in more detail later. and here is the compiled arch package I im putting here in case you need to inspect. https://mirror.pardesicat.xyz/pardesicat-repository/pardesicat-repository/x86_64/term-pdf-0.0.3.9-1-x86_64.pkg.tar.zst working totally Awesome! Thanks!
gharchive/issue
2023-09-14T02:37:27
2025-04-01T04:34:15.081583
{ "authors": [ "PardesiCat", "felipealfonsog" ], "repo": "felipealfonsog/TermPDFViewer", "url": "https://github.com/felipealfonsog/TermPDFViewer/issues/56", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
3054429
Is formidable compatible with node 0.6? Hello, I had a working code using formidable and express 2.5.1 with node 0.4.x. But now after updating to node 0.6.9, I sometimes get an error and other times I just dont get any parsed data at all, much in the same style as when bodyParser started to decode multipart. Of course I am not using bodyParser and I think the code looks correct, pretty much as the working examples out there. I am using formiddable 1.0.8. Any ideas? the error looks like this: [ERROR] console - Error: parser error, 0 of 16384 bytes parsed at IncomingForm.write (/Users/manuel/dev/clouddisplay/node_modules/formidable/lib/incoming_form.js:141:17) at IncomingMessage. (/Users/manuel/dev/clouddisplay/node_modules/formidable/lib/incoming_form.js:91:12) at IncomingMessage.emit (events.js:67:17) at HTTPParser.onBody (http.js:115:23) at Socket.ondata (http.js:1403:22) at TCP.onread (net.js:354:27) ok, just as a complement note. I tried now with express.bodyParser and I get exactly the same error... so it is assumable this is an incompatibility with node 0.6... app.post('/upload/:uploadId', requiresLogin, express.bodyParser({ uploadDir: '/www/mysite.com/uploads' }), function(req,res){ console.log(req.files); }); Any workarounds? What does requiresLogin do? Does it perform some async calls, such as to a database? If so, try temporarily removing requiresLogin from the app.post() call and see how that affects things. Totally correct thank you very much. requiresLogin performs an asych database call, which somebody added without my knowledge :). So it works now, but whats the explanation for this behavior? On Feb 2, 2012, at 11:02 AM, Brian White wrote: What does requiresLogin do? Does it perform some async calls, such as to a database? If so, try temporarily removing requiresLogin from the app.post() call and see how that affects things. Reply to this email directly or view it on GitHub: https://github.com/felixge/node-formidable/issues/130#issuecomment-3775352 What's happening is the http request's 'data' events are being emitted while the async call to the database is happening. By the time the database action completes, some or all of the form data is then already "gone" and formidable has little to nothing to parse. Thus you have to have formidable start parsing right away. I see. This sounds a bit scary, what happens if I have several hundred users concurrently? Maybe while some users are uploading content others are doing operations that perform async operations, couldn't this behavior potentially happen in those situations as well? On Feb 2, 2012, at 1:22 PM, Brian White wrote: What's happening is the http request's 'data' events are being emitted while the async call to the database is happening. By the time the database action completes, some or all of the form data is then already "gone" and formidable has little to nothing to parse. Thus you have to have formidable start parsing right away. Reply to this email directly or view it on GitHub: https://github.com/felixge/node-formidable/issues/130#issuecomment-3777007 I'm not sure what you mean here. The missing 'data' events problem here is specific to each incoming http request. The issue has to do with the fact that there is no handler for the request's 'data' events when you go to do your async db calls. If you don't want the upload to continue if they are not logged in, you can try doing incomingForm._parser.end(); to stop parsing (only if the entire form hasn't already been parsed). 
This will however emit an error if it is not finished parsing, so you'll probably need some flag to know when to ignore the error and when not to. You could probably also use this same technique if want to limit file sizes (I don't see a maxFileSize akin to maxFieldsSize for IncomingForm). Yes, I was confused because I did not understand if this was a per request problem or, being node single threaded, other requests would also affect it. Thanks for your efforts trying to enlighten me. I still don't grasp the reason for the failure, I would need to go through the source code to understand the details. Nevertheless It seems as a quite inconvenient limitation, if I understand it correctly the implication is that I can not use any asynchronous middleware with formidable. The obvious question that follows is if this problem is an architectural problem with express or formidable and if any solution is in the works? On Feb 2, 2012, at 5:30 PM, Brian White wrote: I'm not sure what you mean here. The missing 'data' events problem here is specific to each incoming http request. The issue has to do with the fact that there is no handler for the request's 'data' events when you go to do your async db calls. If you don't want the upload to continue if they are not logged in, you can try doing incomingForm._parser.end(); to stop parsing (only if the entire form hasn't already been parsed). This will however emit an error if it is not finished parsing, so you'll probably need some flag to know when to ignore the error and when not to. You could probably also use this same technique if want to limit file sizes (I don't see a maxFileSize akin to maxFieldsSize for IncomingForm). Reply to this email directly or view it on GitHub: https://github.com/felixge/node-formidable/issues/130#issuecomment-3780975 This is not a problem specific to any module really. It's how node core works, since everything is asynchronous. TCP (and thus HTTP) streams in node emit 'data' events every time a chunk of data is received from the client (this can often happen within the same tick where the incoming connection event occurs). If nobody is listening for data on that stream, then the data is discarded. This technique allows for low memory usage since chunks are emitted as they come in and they are not stored or buffered up anywhere. Express also does not buffer requests' 'data' events for the same reason node core doesn't (memory usage). Express could support something like that, but I doubt you'll see that added anytime soon. Something else you might try is "pausing" the incoming request via req.pause();. This will cause no more 'data' events to be emitted from that request and will start telling the other side to stop sending data. However, one or more 'data' events may already be "in the pipeline" and will be emitted even after you pause the incoming request. After you have completed your authentication, you should then be able to do req.resume(); to resume the stream and continue emitted 'data' events. So, you CAN use async middleware with formidable, but you need to have formidable load before any of these async middleware so that it can capture all of these 'data' events.
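To make the pause/resume workaround above concrete, here is a hedged sketch (not the thread's actual code), assuming an Express route and the classic formidable.IncomingForm API; checkLogin is an invented stand-in for the async, database-backed requiresLogin middleware.

```ts
import express from 'express';
import formidable from 'formidable';

const app = express();

// Invented stand-in for the async, database-backed requiresLogin check.
const checkLogin = async (_req: express.Request): Promise<boolean> => true;

app.post('/upload/:uploadId', (req, res) => {
  // Stop 'data' events while the async auth check runs, so the body
  // is still available to formidable afterwards. As noted above, a
  // chunk that was already in flight may still arrive.
  req.pause();

  checkLogin(req).then((ok) => {
    if (!ok) {
      res.status(403).end();
      return;
    }
    const form = new formidable.IncomingForm();
    // Attach formidable's listeners first, then let data flow again.
    form.parse(req, (err, fields, files) => {
      if (err) {
        res.status(500).end();
        return;
      }
      res.json({ fields, files });
    });
    req.resume();
  });
});
```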
gharchive/issue
2012-02-01T16:30:45
2025-04-01T04:34:15.103798
{ "authors": [ "manast", "mscdex" ], "repo": "felixge/node-formidable", "url": "https://github.com/felixge/node-formidable/issues/130", "license": "mit", "license_type": "permissive", "license_source": "bigquery" }
59163178
Nested array escaping not working correctly I seem to be running into an issue here: console.log(require('mysql').format('INSERT INTO table (col1, col2) VALUES ?', [['hi', 'yes'], ['bye', 'no']])); This currently returns (on latest version 2.5.5): INSERT INTO table (col1, col2) VALUES 'hi', 'yes' Shouldn't it be returning the following...? INSERT INTO table (col1, col2) VALUES ('hi', 'yes'), ('bye', 'no') Was looking through the code and don't see any glaring issues. Okay wait, I think it's getting confused because passing an array to .format() is also how you specify multiple parameters to format. But shouldn't it be smart enough to realize that I'm only escaping one value (hence the single '?' in my SQL statement)? At any rate, the solution is to put your nested array inside another array, as the first parameter: console.log(require('mysql').format('INSERT INTO table (col1, col2) VALUES ?', [[['hi', 'yes'], ['bye', 'no']]])); Which results in the proper formatting: INSERT INTO table (col1, col2) VALUES ('hi', 'yes'), ('bye', 'no') Yes, you got in in your update: you were simply passing the wrong value in to .format. The value takes an array, and since you need to pass in an array or arrays, you need to have three levels of arrays in the end: the first level is the array of values, where each element in the array corresponds to a ? (think [val1]). Then you need val1 to be an array of arrays (think val1 = [['a', 'b'], ['c', 'd']]). Now, if you put those together, you end up with [[['a', 'b'], ['c', 'd']]]. The fact that you can do a single value was a big API mistake and I intend to remove it whenever the next major arrives. You should always pass in an array of values. Yeah, that's a good point. I'll make sure to always pass in an array to avoid confusion :) The fact that you can do a single value was a big API mistake and I intend to remove it whenever the next major arrives. You should always pass in an array of values. I feel the same ( I was actually not aware for a long time that values param is converted to array if it's not array ). I don't think anyone will notice, but yes, that's major version semver wise
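For readers following along, here is a small end-to-end sketch of the three-level nesting described above; it is illustrative only, with an invented table name (the original example used "table", which is a reserved word).

```ts
import mysql from 'mysql';

const connection = mysql.createConnection({ host: 'localhost', user: 'root', database: 'test' });

// One placeholder, one value: the value itself is an array of rows,
// and each row is an array of column values, hence the triple nesting.
const rows = [['hi', 'yes'], ['bye', 'no']];

const sql = mysql.format('INSERT INTO my_table (col1, col2) VALUES ?', [rows]);
// -> INSERT INTO my_table (col1, col2) VALUES ('hi', 'yes'), ('bye', 'no')

connection.query('INSERT INTO my_table (col1, col2) VALUES ?', [rows], (err) => {
  if (err) throw err;
  connection.end();
});
```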
gharchive/issue
2015-02-26T23:33:13
2025-04-01T04:34:15.110395
{ "authors": [ "alexandreroche", "dougwilson", "sidorares" ], "repo": "felixge/node-mysql", "url": "https://github.com/felixge/node-mysql/issues/1018", "license": "mit", "license_type": "permissive", "license_source": "bigquery" }
935398500
SUNEVENT in Normal-Mode ... Missing Custom option Awesome update with all the SUNEVENTS...thx a lot .. works great in EVENT-Mode In normal mode the CUSTOM option is missing, therefore you always need 2 SUNEVENTS in your schedule (e.g. SUNSET till 10pm is not possible) Addtionally: Once set for SUNEVENTS ...it is not possible to change back to time-based schedule (I guess due to missing CUSTOM option) great job and very quick response ... thx a lot I already linked the reason why it is "missing" in the other issue that you've opened. Please see https://github.com/fellinga/node-red-contrib-ui-time-scheduler/issues/33 again. thx .. you still can have 2 SUNEVENTS .. e.g. SUNSET till 10pm is not possible but i can work-around with SUNSET till SUNSET+2h ;) thx a lot you still can have 2 SUNEVENTS .. e.g. SUNSET till 10pm is not possible Sunset till 10pm are not two sun events. Sunset till 10pm would be a start sun event and a fixed stop time and I am not allowing it because of what I answered in the link that I have posted. It says that I do not allow MIXED time frames in default mode because validation is almost impossible - but you can easily archive that with event mode.
gharchive/issue
2021-07-02T04:20:57
2025-04-01T04:34:15.117668
{ "authors": [ "fellinga", "xX-Nexus-Xx" ], "repo": "fellinga/node-red-contrib-ui-time-scheduler", "url": "https://github.com/fellinga/node-red-contrib-ui-time-scheduler/issues/46", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
226621003
Stop using so many global functions We're not in primary school here, we need to move away from polluting the global namespace. This has mostly been handled in the 15 commits to the node-imap branch, where we rewrite the entire application to use a more OOP view. We just need to handle threading & then we can merge to master. Implemented in #25.
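Purely as an illustration of the kind of refactor being described (the names below are invented, not taken from this codebase), moving free functions off the global scope and behind a module export looks roughly like this:

```ts
// Before: a free function hung off the global object.
// (globalThis as any).fetchMail = () => { /* ... */ };

// After: behaviour grouped behind an exported class, so nothing
// leaks into the global namespace.
export class MailClient {
  constructor(private readonly host: string) {}

  async fetchMail(): Promise<string[]> {
    // placeholder for the real IMAP fetch logic
    return [];
  }
}
```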
gharchive/issue
2017-05-05T16:11:43
2025-04-01T04:34:15.119328
{ "authors": [ "popey456963" ], "repo": "femto-email/client", "url": "https://github.com/femto-email/client/issues/17", "license": "mit", "license_type": "permissive", "license_source": "bigquery" }
46669548
Failed to response Actually it isn't an issue to cropper itself but to the example which is bundled with the release. First I tested it on my home server and it worked fine: http://marschall.neon.org:88/cipoh/crop-avatar/crop-avatar.html but then I moved it to a server on the web, withou making any changes, and it returns "Failed to response": http://cipoh.com.br/crop-avatar/crop-avatar.html the only explanation I can think of is some missing php module. here are the two phpinfo: http://marschall.neon.org:88/cipoh/crop-avatar/info.php http://cipoh.com.br/crop-avatar/info.php could you give me a hint? Hi, a question, how do I upload multiple images? @nivram-gt, this is not a plugin for uploading images. There are tutorials to help you learn how to do that. @peterchibunna It's what I want say. Thanks!
gharchive/issue
2014-10-23T20:01:11
2025-04-01T04:34:15.125325
{ "authors": [ "femarschall", "fengyuanchen", "nivram-gt", "peterchibunna" ], "repo": "fengyuanchen/cropper", "url": "https://github.com/fengyuanchen/cropper/issues/83", "license": "mit", "license_type": "permissive", "license_source": "bigquery" }
2304823922
Update image ghcr.io/zoriya/kyoo_front to 42f7541 This PR contains the following updates: Package Update Change ghcr.io/zoriya/kyoo_front digest 7d1eb7d -> 42f7541 Configuration 📅 Schedule: Branch creation - At any time (no schedule defined), Automerge - At any time (no schedule defined). 🚦 Automerge: Disabled by config. Please merge this manually once you are satisfied. ♻ Rebasing: Whenever PR becomes conflicted, or you tick the rebase/retry checkbox. 🔕 Ignore: Close this PR and you won't be reminded about this update again. [ ] If you want to rebase/retry this PR, check this box This PR has been generated by Renovate Bot. Edited/Blocked Notification Renovate will not automatically rebase this PR, because it does not recognize the last commit author and assumes somebody else may have edited the PR. You can manually request rebase by checking the rebase/retry box above. ⚠️ Warning: custom changes will be lost.
gharchive/pull-request
2024-05-19T23:15:58
2025-04-01T04:34:15.131262
{ "authors": [ "fenio" ], "repo": "fenio/homelab", "url": "https://github.com/fenio/homelab/pull/189", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
2344185480
Update chart external-dns to 1.14.5 This PR contains the following updates: Package Update Change external-dns patch 1.14.4 -> 1.14.5 Configuration 📅 Schedule: Branch creation - At any time (no schedule defined), Automerge - At any time (no schedule defined). 🚦 Automerge: Disabled by config. Please merge this manually once you are satisfied. ♻ Rebasing: Whenever PR becomes conflicted, or you tick the rebase/retry checkbox. 🔕 Ignore: Close this PR and you won't be reminded about this update again. [ ] If you want to rebase/retry this PR, check this box This PR has been generated by Renovate Bot. Edited/Blocked Notification Renovate will not automatically rebase this PR, because it does not recognize the last commit author and assumes somebody else may have edited the PR. You can manually request rebase by checking the rebase/retry box above. ⚠️ Warning: custom changes will be lost.
gharchive/pull-request
2024-06-10T15:17:34
2025-04-01T04:34:15.136630
{ "authors": [ "fenio" ], "repo": "fenio/homelab", "url": "https://github.com/fenio/homelab/pull/250", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
2674300241
[Feature] "End" Line Selector Currently, line selectors need a start and end line number to work. /** * @includeExample path/to/your/example.ts:1-5,10 */ function myFunction() { // ... } However, it would be much easier to have an end-line selector that ends at the end of the file. For example, the example below will only selects lines starting from 2 to the end of the file. This makes it easier on the dev to maintain these Typedocs, as the file they are referring might expand in the future, but they still want to exclude a beginning few lines, for whatever reason. /** * @includeExample path/to/your/example.ts:2- */ function myFunction() { // ... } Good idea, I didn't find an official notation for selecting lines in a file, but it does look like the good way. You can implement it, or I'll do it in an upcoming version. I've never contributed to open source before, but I would love to give it a shot! You mentioning about an official notation got me thinking about a more expressive "line selection" syntax. Please see #9 for my thoughts on this!
gharchive/issue
2024-11-20T04:24:11
2025-04-01T04:34:15.149012
{ "authors": [ "ferdodo", "spenpal" ], "repo": "ferdodo/typedoc-plugin-include-example", "url": "https://github.com/ferdodo/typedoc-plugin-include-example/issues/8", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1955058316
Added serverless AI tutorial for beginners Content must go through a pre-merge checklist.
TO DO: Create sample code in the ai-examples folder.
Pre-Merge Content Checklist
This documentation has been checked to ensure that:
[ ] The `title`, `template`, and `date` are all set
[ ] Does this PR have a new menu item (anywhere in templates/*.hbs files) that points to a document .md that is set to publish in the future? If so please only publish the .md and .hbs changes in real-time (otherwise there will be a menu item pointing to a .md file that does not exist)
[ ] File does not use CRLF, but uses plain LF (hint: use cat -ve <filename> | grep '^M' | wc -l and expect 0 as a result)
[ ] Has passed bart check
[ ] Has been manually tested by running in Spin/Bartholomew (hint: use PREVIEW_MODE=1 and run npm run styles to update styling)
[ ] Headings are using Title Case
[ ] Code blocks have the programming language set to properly highlight syntax and the proper copy directive
[ ] Have tested with npm run test and resolved all errors
[ ] Relates to an existing (potentially outdated) blog article? If so please add URL in blog to point to this content.
Hi @sohanmaheshwar Just a quick note, the CI does not allow multiple line spacing. You can test this on your local machine using the following procedure.
cd developer
cd spin-up-hub
npm install
cd ../
npm install
npm install
spin build
The above gets the environment ready for testing and viewing locally. The following command will run the CI test locally:
npm run test
The following command will allow you to view the site on localhost:3000:
spin up -e "PREVIEW_MODE=1"
Once the npm run test passes you can push another commit. This Build (CI) is failing. TIL about running the CI test locally. Thanks @tpmccallum Made the changes. The tabs are not showing up but the formatting is the same as the other posts. Does it show up only after it is deployed? Or is something missing? Added the code samples in Python, Rust and TS here: https://github.com/fermyon/ai-examples/pull/28 I'd like someone to review the Rust one, and ensure the code is clean. It had a warning when I compiled. I hit the AI service limits so couldn't test further. Just FYI @sohanmaheshwar I deployed the Rust example to Serverless AI and it worked.
2023-10-20T21:55:35
2025-04-01T04:34:15.157514
{ "authors": [ "sohanmaheshwar", "tpmccallum" ], "repo": "fermyon/developer", "url": "https://github.com/fermyon/developer/pull/942", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
146808838
Ability to select/deselect files Didn't find an issue (maybe I missed it?) It'd be a nice feature to have. Agreed.
gharchive/issue
2016-04-08T04:16:29
2025-04-01T04:34:15.160093
{ "authors": [ "DiegoRBaquero", "feross" ], "repo": "feross/webtorrent-desktop", "url": "https://github.com/feross/webtorrent-desktop/issues/360", "license": "mit", "license_type": "permissive", "license_source": "bigquery" }
146233917
Add File > Quit for Linux users with broken system trays Works around #303 This at least makes it possible for users of... unusual Linux distributions to quit the app without having to use kill
gharchive/pull-request
2016-04-06T08:34:03
2025-04-01T04:34:15.161407
{ "authors": [ "dcposch" ], "repo": "feross/webtorrent-desktop", "url": "https://github.com/feross/webtorrent-desktop/pull/321", "license": "mit", "license_type": "permissive", "license_source": "bigquery" }
594577957
Update for new harvest craft version Pams harvest craft updated so this mod looks for an older version. will there be any update? minecraft 1.12.2 Probably some time. Not yet known when. This works fine with the new version
gharchive/issue
2020-04-05T17:54:20
2025-04-01T04:34:15.165128
{ "authors": [ "Invidious", "SkyHawkB", "ferreusveritas" ], "repo": "ferreusveritas/DynamicTrees-PHC", "url": "https://github.com/ferreusveritas/DynamicTrees-PHC/issues/10", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
194881039
Duplicate rows in Dianping (大众点评) data collection Hello: I followed the Dianping data-collection video, but the final collected data table contains a very large number of duplicate rows. What could be causing this? Thanks. Please send your project file to my email, or paste it directly in this issue; this should be a problem with one of your configuration steps. Hawk is not responsible for checking whether a configuration is reasonable.
gharchive/issue
2016-12-12T03:02:17
2025-04-01T04:34:15.168206
{ "authors": [ "ferventdesert", "shirleymars" ], "repo": "ferventdesert/Hawk", "url": "https://github.com/ferventdesert/Hawk/issues/23", "license": "apache-2.0", "license_type": "permissive", "license_source": "bigquery" }
727643219
Show and index endpoints can be used as JSON api Added some tests as well 😄 @zcotter Released as 0.3.0
gharchive/pull-request
2020-10-22T18:57:58
2025-04-01T04:34:15.173462
{ "authors": [ "reneklacan", "zcotter" ], "repo": "fetlife/rollout-ui", "url": "https://github.com/fetlife/rollout-ui/pull/3", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
682639521
object parser with additional fields Would you accept a PR for a object' function which would not validate that all fields are consumed like in the current object function? Or maybe am I missing a good way to deal with this use case? Sure. I can't think of a good name for that function though. Maybe add an argument to the object function, which would have an algebraic sum type with two constructors (check/don't check)? How to name the type and constructors would also need some thinking. Why not something like object/objectLenient? This way: there's only one name to find we don't change the meaning of the existing object function we use lenient as an indicator that we are less strict than the object function First, I don't think objectLenient is descriptive enough. With an argument, we can do something like data ExtraFields = ExtraFieldsAllowed | ExtraFieldsNotAllowed which conveys more meaning than just "lenient". Second, it scales better. If tomorrow we need another option to modify the behaviour of object, we'd have 4 different functions which do more or less the same with slight variations. Well that works for me :-). Do you want to do it or you prefer that I make a PR? Yes, please make a PR. Here it is: https://github.com/feuerbach/yaml-combinators/pull/8 Thanks, I actually thought about a similar solution at some point but forgot to mention it :-)
gharchive/issue
2020-08-20T11:28:52
2025-04-01T04:34:15.178817
{ "authors": [ "etorreborre", "feuerbach", "symbiont-eric-torreborre" ], "repo": "feuerbach/yaml-combinators", "url": "https://github.com/feuerbach/yaml-combinators/issues/7", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
33548361
MD5 check before file upload How do I do an MD5 check before the file is uploaded? Is there a relevant API for this?
beforeSend: function(block){
    // check whether this chunk has already been uploaded, used for resumable uploads
    var task = $.Deferred();
    (new WebUploader.Uploader()).md5File(block.blob).progress(function(percentage){
        console.log(percentage);
    }).then(function(val){
        alert(val);
        userInfo.md5 = val;
        md5 = val;
        block.chunkMd5 = val;
        console.log("chunkcheck md5:"+val);
        $.ajax({
            type: "POST",
            dataType:"text/json",
            // url: backEndUrl ,
            url: "http://localhost:8080/clouddisk/admin/files/chunkCheck",
            data: {
                status: "chunkCheck",
                // name: uniqueFileName,
                chunkIndex: block.chunk,
                size: block.end - block.start,
                md5: val
            }
            , cache: false
            , timeout: 1000 //todo on timeout we can only assume the chunk has not been uploaded
            , dataType: "json"
        }).then(function(data, textStatus, jqXHR){
            console.log(data);
            console.log("chunkCheck:"+data.md5);
            if(data.ifExist){
                // if it already exists, reject so WebUploader knows this chunk does not need to be uploaded
                task.reject();
            }else{
                task.resolve();
            }
        }, function(jqXHR, textStatus, errorThrown){
            // any kind of verification failure triggers a re-upload
            task.resolve();
        });
        return $.when(task);
    });
}
, afterSendFile: function(file){
    var chunksTotal = 0;
    if((chunksTotal = Math.ceil(file.size/chunkSize)) > 1){
        // merge request
        var task = new $.Deferred();
        $.ajax({
            type: "POST",
            dataType:"text/json",
            // url: backEndUrl ,
            url: "http://localhost:8080/clouddisk/admin/files/chunksMerge"
            , data: {
                status: "chunksMerge" ,
                // name: uniqueFileName,
                chunks: chunksTotal,
                ext: file.ext,
                md5: md5Mark
            }
            , cache: false
            , dataType: "json"
        }).then(function(data, textStatus, jqXHR){
            //todo check whether the response is OK
            task.resolve();
            file.path = data.path;
            UploadComlate(file);
        }, function(jqXHR, textStatus, errorThrown){
            task.reject();
        });
        return $.when(task);
    }
    else{
        UploadComlate(file);
    }
}
});
var uploader = WebUploader.create({
    swf: "Uploader.swf"
    , server: backEndUrl
    , pick: "#picker"
    , resize: false
    , dnd: "#theList"
    , paste: document.body
    , disableGlobalDnd: true
    , thumb: {
        width: 100
        , height: 100
        , quality: 70
        , allowMagnify: true
        , crop: true
        //, type: "image/jpeg"
    }
How do I pass the chunk's chunkCheck MD5 value to the file-upload method at backEndUrl????
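As a hedged aside on the final question above (not an authoritative answer): one approach commonly used is WebUploader's uploadBeforeSend hook, which lets extra fields be appended to each chunk's outgoing form data. block.chunk and block.chunkMd5 below come from the snippet above, while the event name is an assumption about the WebUploader API that should be checked against its docs.

```ts
// Assumes `uploader` is the WebUploader instance created above and that
// beforeSend stored the hash on block.chunkMd5 as in the snippet.
uploader.on('uploadBeforeSend', (block: any, data: any) => {
  // Fields added to `data` are sent as extra form fields with the chunk,
  // so the server handler behind backEndUrl can read the MD5 per chunk.
  data.md5 = block.chunkMd5;
  data.chunkIndex = block.chunk;
});
```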
gharchive/issue
2014-05-15T01:32:31
2025-04-01T04:34:15.194467
{ "authors": [ "bisu328", "zhaoliuzi" ], "repo": "fex-team/webuploader", "url": "https://github.com/fex-team/webuploader/issues/180", "license": "bsd-3-clause", "license_type": "permissive", "license_source": "bigquery" }
1315775799
Assembly Project compile should target the NES work path directly Currently, a batch file is used internally to compile the NES image. However, it targets a temp file in the asm/ folder, then copies that to the main directory. Since the batch file takes a full pathname to a file as a parameter, pass the work ROM file path to the batch file instead of 'ff1.nes'. I think the reason for this is that the FF1.nes file can hang around, allowing it to be inadvertently included in zip files. If that's the case, then instead of changing how the batch file works, we could just ensure that we delete FF1.nes no matter the result of the compilation.
gharchive/issue
2022-07-24T00:38:02
2025-04-01T04:34:15.198153
{ "authors": [ "essellejaye", "ffhacksterex" ], "repo": "ffhacksterex/FFHacksterEx", "url": "https://github.com/ffhacksterex/FFHacksterEx/issues/33", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1383944297
is just for streaming ? ffmpeg -i mp3 -i mp4 c copy outmp4 rendering it has this / it can do this ? This is a decoding library, so there is currently no encode/transcode functionality. but all this stuf exist in uwp media transcode what it has Well, technically you could use FFmpegInteropX for decoding, and pass the MediaStreamSource to the UWP transcoding APIs. Some people have done that. But you are limited in the encoding/transcoding target formats to what UWP has to offer. You cannot use FFmpeg stuff for that. it can not decode / deserilize the cluster list from binary which ffmpeg can , I am not sure I understand your scenario. If you want to do transcoding, you could use MediaTranscoder: https://learn.microsoft.com/en-us/uwp/api/windows.media.transcoding.mediatranscoder.preparemediastreamsourcetranscodeasync?view=winrt-22621 var ffmpegSource = await FFmpegMediaSource.Create... var mediaSource = ffmpegSource.GetMediaStreamSource(); MediaTranscoder.PrepareMediaStreamSourceTranscodeAsync(mediaSource, outputStream, encodingProfile); What do you mean with cluster list? here i just see it just only can extract thumbnails , and decode only just few track entry , there is lot of decode info exist clusterlist mean the audio or video file have truck position hex address list each address contains of data[] byte.lenth var ffmpegSource = await FFmpegMediaSource.Create... var mediaSource = ffmpegSource.GetMediaStreamSource(); MediaTranscoder.PrepareMediaStreamSourceTranscodeAsync(mediaSource, outputStream, encodingProfile); i tested it , its good , but i want something new in uwp , here everything uwp have in https://learn.microsoft.com/en-us/windows/uwp/audio-video-camera/transcode-media-files if possible please try to give the Matroska.Muxer Perhaps we should consider transcoding. This seems to be in popular demand. and its been long years no one success in this part , many people tried It sure is popular. But it means loads of work. It is not only about designing a new good and flexible API surface (decoding, encoding, trancoding, probably also custom raw data feeding) and implementing all the FFmpeg interop stuff. For encoding, you also need to add many new libs. At least x264, x265, av1 and probably a bunch more, to cover the popular formats. There are not many encoders included in core FFmpeg. Then people will want GPU accelerated encoding, so you have to add NVEnc, QuickSync, AMD whatever they call it, because there is no commin API for video encoding. Adding new libs is always a pain, because the MSVC toolchain is often not properly supported by the lib's build system. It is especially the lib part that makes me not want to do this. Writing code can be fun, especially if you create something new from scratch. But the fun factor of fighting with build systems is more like zero for me. Additionally, encoding is incredibly complex. If you take a look at popular encoders such as Handbrake, there are loads of config options for every single codec, and every codec has totally different configuration. You cannot simplify this down to a "quality" slider. And if you did, some people would not be happy about the outcome, because every file is different and would need different settings for best quality / file size offtrade. So you'd probably need both, some kind of simplified config scheme for people who "just want to encode", but also an advanced config scheme for all the codec specific settings. Personally, I don't have any use for video encoding in my projects. 
So it is loads of (at least partly unpleasant) work for something I don't need at all. If someone would properly pay me for this, I'd probably do it, but I guess that's not the case ^^ I would support this if you want to do all the heavy lifting @brabebhin, but I do not see myself putting a lot of work into this. It's just too much work. In Germany we call this a "barrel without a bottom" ^^ I could be wrong though, sometimes things are easier than what they seem. But I am not very keen on trying right now. Don't we have av1, x264 and x265 already though? We have an av1 decoder, but no encoder. And yes, I have added x264 and x265 encoders a year back or so at an experimental basis. But these are desktop builds currently, and I know I had to do patching at multiple places to get these working. I don't have the patches anymore, and I also don't really remember what the problems were. So there is still some work for them, although they are kind of pre-integrated already. But more libs will sure be required. So for the API part, I'd just support the filter syntax, with input and output stream objects. As for libraries, yeah, that wound be the biggest issue. Linking libraries is always a PITA to deal with. I'd assume most of those we need don't even support uwp builds. This does not even allow you to set the output format. It is only sample code to show how transcoding can be done in general. I don't think that this will help anyone. If you want to provide encoder/transcoder functionality, you must allow configuration of the output format and settings (per stream). And it should be flexible enough to support different use cases. Like having a source stream and a target URI, or source URI and target stream. People have been asking about that already (encode data for sending through rtsp), so if we provide something, it should be flexible enough to support these scenarios. Most probably you will also want to support multiple sources, like raw video stream plus raw audio stream, and encode to one target stream, and you might want to be able to select which of the input streams are encoded and which are dropped. I agree that a simplified API is also desired, where you just have input stream and output stream, and set video and audio options (being applied to all corresponding streams) and target format options. But if you provide this, people will quickly want more. So I think, if we provide anything in that direction, it should be flexible to cover all the use cases, otherwise we'll have to redo everything quickly. Setting codec options in FFmpeg happens through AVDictionary, so we could use a generic PropertySet (like we have for FFmpegOptions) per-stream, and additonally a filter string if filtering is required. And then one more PropertySet for target container format options. That way, the configuration would be generic and people just have to use the known and documented options from ffmpeg. By the way, I just noticed that x264 and x265 are GPL. So it is basically not possible to use them in a product, unless the product itself is open source. This makes the two most popular encoders unavailable for most apps. AV1 encoder libs are BSD. can u take look at this project https://github.com/StefH/Matroska it can cast the hex code in right place but this project is not full code So I am guessing Microsoft implements its own encoders then. Yeah, the licensing is a show stopper... Unless we can work around it by using the DirectX encoders (haven't researched much in this area). 
But yeah, a lot of work. We can use GPU encoding, but only where the hardware supports it. DX12 encoding is pure HW encoding, so it also does not work where not available. And I would not really go that route, it is pretty new and it currently only supports two formats. I'd rather use HW encoding through FFmpeg. It's just that this requires additional libs.
gharchive/issue
2022-09-23T15:04:50
2025-04-01T04:34:15.214877
{ "authors": [ "brabebhin", "developerss", "lukasf" ], "repo": "ffmpeginteropx/FFmpegInteropX", "url": "https://github.com/ffmpeginteropx/FFmpegInteropX/issues/305", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
1555883186
Feat automating scraping to database
Description
Code to send the generated data directly to the database.
Review
[x] The Pull Request is tied to only one subject.
[x] The title is objective.
[x] The description is grammatically correct.
[x] The branch targets main.
[x] The reviewer was selected correctly.
It is up to standard.
gharchive/pull-request
2023-01-25T00:39:47
2025-04-01T04:34:15.230467
{ "authors": [ "FelipeNunesdM", "pedrobarbosaocb" ], "repo": "fga-eps-mds/2022-2-QuantiFGA", "url": "https://github.com/fga-eps-mds/2022-2-QuantiFGA/pull/148", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
324632956
Implement View Tests
Implement view tests
Description
Implement tests for the views
Acceptance Criteria
[ ] PyAPI test
[ ] Search test on the home page
Duplicate issue
gharchive/issue
2018-05-19T14:17:37
2025-04-01T04:34:15.232107
{ "authors": [ "daluzguilherme" ], "repo": "fga-gpp-mds/2018.1-Cardinals", "url": "https://github.com/fga-gpp-mds/2018.1-Cardinals/issues/131", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1949751305
try again here hi @yejiyang, you should try to use this repo again now. @fgregg Thanks for helping out. I tested your branch no_limit_csv_publish and it works great now 🚀🚀🚀. I am curious what's your plan for the future of this branch. I realized you routed it back to 1.0a1. Do you plan to sync it with the latest datasette main branch? I think your approach here - full data rows in csv & limit rows in table is a great idea. To push back to the upstream would be the best scenario in my opintion. i may, but have no plans. simon is choosy about what he brings in. @fgregg Hey, I saw you updated this branch no_limit_csv_publish. Does it work on your side? yep! On Fri, Dec 1, 2023 at 7:25 AM Jiyang Ye @.***> wrote: @fgregg https://github.com/fgregg Hey, I saw you updated this branch no_limit_csv_publish. Does it work on your side? — Reply to this email directly, view it on GitHub https://github.com/fgregg/datasette/issues/1#issuecomment-1836036165, or unsubscribe https://github.com/notifications/unsubscribe-auth/AAEDC3KLOHILXP4UBAZFUMDYHHEDHAVCNFSM6AAAAAA6FQVMW2VHI2DSMVQWIX3LMV43OSLTON2WKQ3PNVWWK3TUHMYTQMZWGAZTMMJWGU . You are receiving this because you were mentioned.Message ID: @.***> @fgregg Hei, I tried you branch no_limit_csv_publish again, with the templates from here . I got this error. Do you know if I should also update the templates somewhere? 2023-12-11 12:29:09 INFO: 172.23.0.1:57784 - "GET /-/static/sql-formatter-2.3.3.min.js HTTP/1.1" 200 OK 2023-12-11 12:29:14 Traceback (most recent call last): 2023-12-11 12:29:14 File "/usr/local/lib/python3.9/site-packages/datasette/app.py", line 1632, in route_path 2023-12-11 12:29:14 response = await view(request, send) 2023-12-11 12:29:14 File "/usr/local/lib/python3.9/site-packages/datasette/app.py", line 1814, in async_view_fn 2023-12-11 12:29:14 response = await async_call_with_supported_arguments( 2023-12-11 12:29:14 File "/usr/local/lib/python3.9/site-packages/datasette/utils/__init__.py", line 1016, in async_call_with_supported_arguments 2023-12-11 12:29:14 return await fn(*call_with) 2023-12-11 12:29:14 File "/usr/local/lib/python3.9/site-packages/datasette/views/table.py", line 673, in table_view 2023-12-11 12:29:14 response = await table_view_traced(datasette, request) 2023-12-11 12:29:14 File "/usr/local/lib/python3.9/site-packages/datasette/views/table.py", line 822, in table_view_traced 2023-12-11 12:29:14 await datasette.render_template( 2023-12-11 12:29:14 File "/usr/local/lib/python3.9/site-packages/datasette/app.py", line 1307, in render_template 2023-12-11 12:29:14 return await template.render_async(template_context) 2023-12-11 12:29:14 File "/usr/local/lib/python3.9/site-packages/jinja2/environment.py", line 1324, in render_async 2023-12-11 12:29:14 return self.environment.handle_exception() 2023-12-11 12:29:14 File "/usr/local/lib/python3.9/site-packages/jinja2/environment.py", line 936, in handle_exception 2023-12-11 12:29:14 raise rewrite_traceback_stack(source=source) 2023-12-11 12:29:14 File "/usr/local/lib/python3.9/site-packages/jinja2/environment.py", line 1321, in <listcomp> 2023-12-11 12:29:14 [n async for n in self.root_render_func(ctx)] # type: ignore 2023-12-11 12:29:14 File "/mnt/templates/table.html", line 1, in top-level template code 2023-12-11 12:29:14 {% extends "base.html" %} 2023-12-11 12:29:14 File "/usr/local/lib/python3.9/site-packages/datasette/templates/base.html", line 62, in top-level template code 2023-12-11 12:29:14 {% block content %} 2023-12-11 12:29:14 File "/mnt/templates/table.html", line 
24, in block 'content' 2023-12-11 12:29:14 <div class="page-header" style="border-color: #{{ database_color(database) }}"> 2023-12-11 12:29:14 TypeError: 'str' object is not callable 2023-12-11 12:29:14 INFO: 172.23.0.1:57822 - "GET /zeropm-v0-0-3/api_services HTTP/1.1" 500 Internal Server Error it’s working for me. did you manually set the database color anywhere? It’s working for me. did you manually set the database color anywhere? I used a metadata.yml file, which looks like this. Thanks for the quick reply. No color change. I
gharchive/issue
2023-10-18T13:23:42
2025-04-01T04:34:15.253014
{ "authors": [ "fgregg", "yejiyang" ], "repo": "fgregg/datasette", "url": "https://github.com/fgregg/datasette/issues/1", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
980789004
Dockerize and create deploy button for Vercel support Would be a great idea to make this a 2 step solution. You click the deploy button on the readme and it will spin up an instance of satdress for you given certain criteria (aka domain). Something for the future, but putting there so we don't forget. Could be Herkou as well! https://vercel.com/docs/more/deploy-button @andrerfneves this would be indeed greatz We can't do it on Vercel because Vercel only runs Nodejs stuff, right? Heroku should work though, and there are probably other Heroku competitors these days.
gharchive/issue
2021-08-27T01:44:36
2025-04-01T04:34:15.282615
{ "authors": [ "andrerfneves", "fiatjaf", "whiteyhat" ], "repo": "fiatjaf/satdress", "url": "https://github.com/fiatjaf/satdress/issues/17", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1141140291
Simulator usage - train and test split for target encoded features to avoid leakage Hello again dear authors, Firstly I would like to say that I purchased your paper and it helped quite a lot to gain the "context" of the package and motivated me to go deeper. Now going deeper I was thinking to use the Simulator class to find the optimal model and HP, however, since I am working on contextual bandits, a lot of feature encoding is involved. For example, I am utilizing target encoding on my categorical features; for that, I split the data into a train-test split and train my encoder on the train set, then apply this encoder on the test set (targets from the test part are not involved in the encoder training process to avoid leakage). Now, looking at the Simulator class I can only see that the whole dataset can be fed, and then inside the package, the train-test split is going to happen. However, in this case, I would need to target encode my features based on the whole dataset, but that would create leakage, since the targets on the test set are supposed to be unknown to the encoder... Therefore my question would be if you could give me a hint on how I can overcome this problem? I hope my explanation is not too messy, and again I will be grateful for any kind of advice on that matter.
gharchive/issue
2022-02-17T10:18:34
2025-04-01T04:34:15.287675
{ "authors": [ "bkleyn", "mrStasSmirnoff" ], "repo": "fidelity/mabwiser", "url": "https://github.com/fidelity/mabwiser/issues/45", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
2213496479
FileUpload field causing ErrorException: foreach() argument must be of type array|object, string given on update page Package filament/filament Package Version v3.2.61 Laravel Version v11.1.0 Livewire Version v.3.0.0 PHP Version PHP 8.3.4 Problem description I installed fresh laravel v11.1.0 and install filament and spatie settings to save my site settings, I follow all the steps to make it running smoothly. I made GeneralSiteSettings page on my panel and using FileUpload::make('logo') to upload my logo to spatie settings, but after setting saved, my page returning error foreach() argument must be of type array|object, string given, for complete error log please visit flare error here [here](https://flareapp.io/share/353KwXo5) and here the screen capture of the error: Expected behavior FileUpload should be able to handle single file upload on any page and any condition (create or update) Steps to reproduce Install fresh laravel Install fresh filament and configure it Install spatie settings from official repo https://github.com/spatie/laravel-settings and follow the instruction to configure it properly Create a simple settings for name, description and logo create a blank filament page using command php artisan make:filament-page GeneralSiteSettings on GeneralSiteSettings.php, create forms with getFormSchema function, add some form input especially FileUpload::make('logo')->image() to upload our logo Here is my complete function: <?php namespace App\Filament\Pages; use App\Settings\SiteSettings; use Filament\Forms\Components\FileUpload; use Filament\Forms\Components\Section; use Filament\Forms\Components\Split; use Filament\Forms\Components\TextInput; use Filament\Notifications\Notification; use Filament\Pages\Page; class GeneralSiteSettings extends Page { protected static ?string $navigationIcon = 'heroicon-o-document-text'; protected static string $view = 'filament.pages.general-site-settings'; public ?array $data = []; public function mount(SiteSettings $settings) { $this->data = $settings->toArray(); } public function getFormSchema(): array { return [ Split::make([ Section::make()->schema([ FileUpload::make('logo') ->disk('public') ->image(), ]), Section::make()->schema([ TextInput::make('name')->required(), TextInput::make('description'), ]) ])->statePath('data') ]; } public function save(SiteSettings $settings) { $data = (object)$this->form->getState()['data']; $settings->name = $data->name; $settings->description = $data->description; $settings->logo = $data->logo; if($settings->save()) { return Notification::make() ->success() ->title('Site settings updated') ->body('Settings updated succesfully')->send(); } return Notification::make() ->danger() ->title('Failed to save settings') ->body('Your settings data failed to update') ->send(); } } Reproduction repository https://github.com/abanghendri/filamentBugReport Relevant log output [2024-03-28 15:17:58] local.ERROR: foreach() argument must be of type array|object, string given {"userId":1,"exception":"[object] (ErrorException(code: 0): foreach() argument must be of type array|object, string given at /home/hendri/projects/repoFilamentBug/vendor/filament/forms/src/Components/BaseFileUpload.php:710) [stacktrace] #0 /home/hendri/projects/repoFilamentBug/vendor/laravel/framework/src/Illuminate/Foundation/Bootstrap/HandleExceptions.php(256): Illuminate\\Foundation\\Bootstrap\\HandleExceptions->handleError() #1 /home/hendri/projects/repoFilamentBug/vendor/filament/forms/src/Components/BaseFileUpload.php(710): 
Illuminate\\Foundation\\Bootstrap\\HandleExceptions->Illuminate\\Foundation\\Bootstrap\\{closure}() #2 /home/hendri/projects/repoFilamentBug/vendor/filament/forms/src/Concerns/SupportsFileUploadFields.php(39): Filament\\Forms\\Components\\BaseFileUpload->getUploadedFiles() #3 /home/hendri/projects/repoFilamentBug/vendor/filament/forms/src/Concerns/SupportsFileUploadFields.php(47): Filament\\Forms\\ComponentContainer->getUploadedFiles() #4 /home/hendri/projects/repoFilamentBug/vendor/filament/forms/src/Concerns/SupportsFileUploadFields.php(47): Filament\\Forms\\ComponentContainer->getUploadedFiles() #5 /home/hendri/projects/repoFilamentBug/vendor/filament/forms/src/Concerns/InteractsWithForms.php(140): Filament\\Forms\\ComponentContainer->getUploadedFiles() #6 /home/hendri/projects/repoFilamentBug/vendor/laravel/framework/src/Illuminate/Container/BoundMethod.php(36): Filament\\Pages\\BasePage->getFormUploadedFiles() #7 /home/hendri/projects/repoFilamentBug/vendor/laravel/framework/src/Illuminate/Container/Util.php(41): Illuminate\\Container\\BoundMethod::Illuminate\\Container\\{closure}() #8 /home/hendri/projects/repoFilamentBug/vendor/laravel/framework/src/Illuminate/Container/BoundMethod.php(93): Illuminate\\Container\\Util::unwrapIfClosure() #9 /home/hendri/projects/repoFilamentBug/vendor/laravel/framework/src/Illuminate/Container/BoundMethod.php(35): Illuminate\\Container\\BoundMethod::callBoundMethod() #10 /home/hendri/projects/repoFilamentBug/vendor/livewire/livewire/src/Wrapped.php(23): Illuminate\\Container\\BoundMethod::call() #11 /home/hendri/projects/repoFilamentBug/vendor/livewire/livewire/src/Mechanisms/HandleComponents/HandleComponents.php(467): Livewire\\Wrapped->__call() #12 /home/hendri/projects/repoFilamentBug/vendor/livewire/livewire/src/Mechanisms/HandleComponents/HandleComponents.php(99): Livewire\\Mechanisms\\HandleComponents\\HandleComponents->callMethods() #13 /home/hendri/projects/repoFilamentBug/vendor/livewire/livewire/src/LivewireManager.php(96): Livewire\\Mechanisms\\HandleComponents\\HandleComponents->update() #14 /home/hendri/projects/repoFilamentBug/vendor/livewire/livewire/src/Mechanisms/HandleRequests/HandleRequests.php(89): Livewire\\LivewireManager->update() #15 /home/hendri/projects/repoFilamentBug/vendor/laravel/framework/src/Illuminate/Routing/ControllerDispatcher.php(46): Livewire\\Mechanisms\\HandleRequests\\HandleRequests->handleUpdate() #16 /home/hendri/projects/repoFilamentBug/vendor/laravel/framework/src/Illuminate/Routing/Route.php(260): Illuminate\\Routing\\ControllerDispatcher->dispatch() #17 /home/hendri/projects/repoFilamentBug/vendor/laravel/framework/src/Illuminate/Routing/Route.php(206): Illuminate\\Routing\\Route->runController() #18 /home/hendri/projects/repoFilamentBug/vendor/laravel/framework/src/Illuminate/Routing/Router.php(806): Illuminate\\Routing\\Route->run() #19 /home/hendri/projects/repoFilamentBug/vendor/laravel/framework/src/Illuminate/Pipeline/Pipeline.php(144): Illuminate\\Routing\\Router->Illuminate\\Routing\\{closure}() #20 /home/hendri/projects/repoFilamentBug/vendor/laravel/framework/src/Illuminate/Routing/Middleware/SubstituteBindings.php(50): Illuminate\\Pipeline\\Pipeline->Illuminate\\Pipeline\\{closure}() #21 /home/hendri/projects/repoFilamentBug/vendor/laravel/framework/src/Illuminate/Pipeline/Pipeline.php(183): Illuminate\\Routing\\Middleware\\SubstituteBindings->handle() #22 /home/hendri/projects/repoFilamentBug/vendor/laravel/framework/src/Illuminate/Foundation/Http/Middleware/VerifyCsrfToken.php(88): 
Illuminate\\Pipeline\\Pipeline->Illuminate\\Pipeline\\{closure}() #23 /home/hendri/projects/repoFilamentBug/vendor/laravel/framework/src/Illuminate/Pipeline/Pipeline.php(183): Illuminate\\Foundation\\Http\\Middleware\\VerifyCsrfToken->handle() #24 /home/hendri/projects/repoFilamentBug/vendor/laravel/framework/src/Illuminate/View/Middleware/ShareErrorsFromSession.php(49): Illuminate\\Pipeline\\Pipeline->Illuminate\\Pipeline\\{closure}() #25 /home/hendri/projects/repoFilamentBug/vendor/laravel/framework/src/Illuminate/Pipeline/Pipeline.php(183): Illuminate\\View\\Middleware\\ShareErrorsFromSession->handle() #26 /home/hendri/projects/repoFilamentBug/vendor/laravel/framework/src/Illuminate/Session/Middleware/StartSession.php(121): Illuminate\\Pipeline\\Pipeline->Illuminate\\Pipeline\\{closure}() #27 /home/hendri/projects/repoFilamentBug/vendor/laravel/framework/src/Illuminate/Session/Middleware/StartSession.php(64): Illuminate\\Session\\Middleware\\StartSession->handleStatefulRequest() #28 /home/hendri/projects/repoFilamentBug/vendor/laravel/framework/src/Illuminate/Pipeline/Pipeline.php(183): Illuminate\\Session\\Middleware\\StartSession->handle() #29 /home/hendri/projects/repoFilamentBug/vendor/laravel/framework/src/Illuminate/Cookie/Middleware/AddQueuedCookiesToResponse.php(37): Illuminate\\Pipeline\\Pipeline->Illuminate\\Pipeline\\{closure}() #30 /home/hendri/projects/repoFilamentBug/vendor/laravel/framework/src/Illuminate/Pipeline/Pipeline.php(183): Illuminate\\Cookie\\Middleware\\AddQueuedCookiesToResponse->handle() #31 /home/hendri/projects/repoFilamentBug/vendor/laravel/framework/src/Illuminate/Cookie/Middleware/EncryptCookies.php(75): Illuminate\\Pipeline\\Pipeline->Illuminate\\Pipeline\\{closure}() #32 /home/hendri/projects/repoFilamentBug/vendor/laravel/framework/src/Illuminate/Pipeline/Pipeline.php(183): Illuminate\\Cookie\\Middleware\\EncryptCookies->handle() #33 /home/hendri/projects/repoFilamentBug/vendor/laravel/framework/src/Illuminate/Pipeline/Pipeline.php(119): Illuminate\\Pipeline\\Pipeline->Illuminate\\Pipeline\\{closure}() #34 /home/hendri/projects/repoFilamentBug/vendor/laravel/framework/src/Illuminate/Routing/Router.php(805): Illuminate\\Pipeline\\Pipeline->then() #35 /home/hendri/projects/repoFilamentBug/vendor/laravel/framework/src/Illuminate/Routing/Router.php(784): Illuminate\\Routing\\Router->runRouteWithinStack() #36 /home/hendri/projects/repoFilamentBug/vendor/laravel/framework/src/Illuminate/Routing/Router.php(748): Illuminate\\Routing\\Router->runRoute() #37 /home/hendri/projects/repoFilamentBug/vendor/laravel/framework/src/Illuminate/Routing/Router.php(737): Illuminate\\Routing\\Router->dispatchToRoute() #38 /home/hendri/projects/repoFilamentBug/vendor/laravel/framework/src/Illuminate/Foundation/Http/Kernel.php(200): Illuminate\\Routing\\Router->dispatch() #39 /home/hendri/projects/repoFilamentBug/vendor/laravel/framework/src/Illuminate/Pipeline/Pipeline.php(144): Illuminate\\Foundation\\Http\\Kernel->Illuminate\\Foundation\\Http\\{closure}() #40 /home/hendri/projects/repoFilamentBug/vendor/livewire/livewire/src/Features/SupportDisablingBackButtonCache/DisableBackButtonCacheMiddleware.php(19): Illuminate\\Pipeline\\Pipeline->Illuminate\\Pipeline\\{closure}() #41 /home/hendri/projects/repoFilamentBug/vendor/laravel/framework/src/Illuminate/Pipeline/Pipeline.php(183): Livewire\\Features\\SupportDisablingBackButtonCache\\DisableBackButtonCacheMiddleware->handle() #42 
/home/hendri/projects/repoFilamentBug/vendor/laravel/framework/src/Illuminate/Foundation/Http/Middleware/ConvertEmptyStringsToNull.php(27): Illuminate\\Pipeline\\Pipeline->Illuminate\\Pipeline\\{closure}() #43 /home/hendri/projects/repoFilamentBug/vendor/laravel/framework/src/Illuminate/Pipeline/Pipeline.php(183): Illuminate\\Foundation\\Http\\Middleware\\ConvertEmptyStringsToNull->handle() #44 /home/hendri/projects/repoFilamentBug/vendor/laravel/framework/src/Illuminate/Foundation/Http/Middleware/TrimStrings.php(46): Illuminate\\Pipeline\\Pipeline->Illuminate\\Pipeline\\{closure}() #45 /home/hendri/projects/repoFilamentBug/vendor/laravel/framework/src/Illuminate/Pipeline/Pipeline.php(183): Illuminate\\Foundation\\Http\\Middleware\\TrimStrings->handle() #46 /home/hendri/projects/repoFilamentBug/vendor/laravel/framework/src/Illuminate/Http/Middleware/ValidatePostSize.php(27): Illuminate\\Pipeline\\Pipeline->Illuminate\\Pipeline\\{closure}() #47 /home/hendri/projects/repoFilamentBug/vendor/laravel/framework/src/Illuminate/Pipeline/Pipeline.php(183): Illuminate\\Http\\Middleware\\ValidatePostSize->handle() #48 /home/hendri/projects/repoFilamentBug/vendor/laravel/framework/src/Illuminate/Foundation/Http/Middleware/PreventRequestsDuringMaintenance.php(110): Illuminate\\Pipeline\\Pipeline->Illuminate\\Pipeline\\{closure}() #49 /home/hendri/projects/repoFilamentBug/vendor/laravel/framework/src/Illuminate/Pipeline/Pipeline.php(183): Illuminate\\Foundation\\Http\\Middleware\\PreventRequestsDuringMaintenance->handle() #50 /home/hendri/projects/repoFilamentBug/vendor/laravel/framework/src/Illuminate/Http/Middleware/HandleCors.php(49): Illuminate\\Pipeline\\Pipeline->Illuminate\\Pipeline\\{closure}() #51 /home/hendri/projects/repoFilamentBug/vendor/laravel/framework/src/Illuminate/Pipeline/Pipeline.php(183): Illuminate\\Http\\Middleware\\HandleCors->handle() #52 /home/hendri/projects/repoFilamentBug/vendor/laravel/framework/src/Illuminate/Http/Middleware/TrustProxies.php(57): Illuminate\\Pipeline\\Pipeline->Illuminate\\Pipeline\\{closure}() #53 /home/hendri/projects/repoFilamentBug/vendor/laravel/framework/src/Illuminate/Pipeline/Pipeline.php(183): Illuminate\\Http\\Middleware\\TrustProxies->handle() #54 /home/hendri/projects/repoFilamentBug/vendor/laravel/framework/src/Illuminate/Pipeline/Pipeline.php(119): Illuminate\\Pipeline\\Pipeline->Illuminate\\Pipeline\\{closure}() #55 /home/hendri/projects/repoFilamentBug/vendor/laravel/framework/src/Illuminate/Foundation/Http/Kernel.php(175): Illuminate\\Pipeline\\Pipeline->then() #56 /home/hendri/projects/repoFilamentBug/vendor/laravel/framework/src/Illuminate/Foundation/Http/Kernel.php(144): Illuminate\\Foundation\\Http\\Kernel->sendRequestThroughRouter() #57 /home/hendri/projects/repoFilamentBug/vendor/laravel/framework/src/Illuminate/Foundation/Application.php(1168): Illuminate\\Foundation\\Http\\Kernel->handle() #58 /home/hendri/projects/repoFilamentBug/public/index.php(17): Illuminate\\Foundation\\Application->handleRequest() #59 /home/hendri/projects/repoFilamentBug/vendor/laravel/framework/src/Illuminate/Foundation/resources/server.php(16): require_once('...') #60 {main} "} I have the same issue i wrote this quickfix to hold me over: Problem is that $this->getState() is returning the file name string instead of [ uuid => $fileName ] `/** * @return array<array{name: string, size: int, type: string, url: string} | null> | null */ public function getUploadedFiles(): ?array { $urls = []; if (is_string($this->getState())) { $file = $this->getState(); 
$fileKey = (string) Str::orderedUuid(); $callback = $this->getUploadedFileUsing; $urls[$fileKey] = $this->evaluate($callback, [ 'file' => $file, 'storedFileNames' => $this->getStoredFileNames(), ]) ?: null; return $urls; } foreach ($this->getState() ?? [] as $fileKey => $file) { if ($file instanceof TemporaryUploadedFile) { $urls[$fileKey] = null; continue; } $callback = $this->getUploadedFileUsing; if (! $callback) { return [$fileKey => null]; } $urls[$fileKey] = $this->evaluate($callback, [ 'file' => $file, 'storedFileNames' => $this->getStoredFileNames(), ]) ?: null; } return $urls; }` As per the docs, you need to use $this->form->fill() to fill the form, not setting the raw data $this->data = $settings->toArray(); // 👎 $this->form->fill($settings->toArray()); // ✅ Thanks @danharrin , but it still not work, So I try to understand what needed, in vendor / filament / forms / src / Components / BaseFileUpload.php : 709 it says: foreach ($this->getState() ?? [] as $fileKey => $file) { it means that $this->getState should be an array, then I edited my mount method on SiteSettings page to modify data['logo'] from string to array like this: public function mount(SiteSettings $settings) { $this->data = $settings->toArray(); if($settings->logo) { $this->data['logo'] = [$settings->logo]; } } and it works. As per the docs, you need to use $this->form->fill() to fill the form, not setting the raw data $this->data = $settings->toArray(); // 👎 $this->form->fill($settings->toArray()); // ✅ Any chance you have a link to the specific docs page? https://filamentphp.com/docs/3.x/forms/adding-a-form-to-a-livewire-component#initializing-the-form-with-data I am also faced this issue. public function mount(SiteSettings $settings) { $this->data = $settings->toArray(); if (isset($this->data['logo']) && !is_null($this->data['logo'])) { $this->data['logo'] = ['name' => $this->data['logo']; } } For me I have done something like this and it's worked
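For readers skimming this thread, here is a minimal sketch that combines the two working suggestions above — hydrate the form with $this->form->fill() and normalize the stored logo path into the array shape FileUpload expects. The SiteSettings class and field names come from the report; the exact normalization is an assumption, not an official Filament recipe.

```php
public function mount(SiteSettings $settings): void
{
    $data = $settings->toArray();

    // BaseFileUpload iterates over its state, so a bare string path breaks it.
    if (is_string($data['logo'] ?? null)) {
        $data['logo'] = [$data['logo']];
    }

    // Let Filament hydrate the fields instead of assigning raw state directly.
    $this->form->fill($data);
}
```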
gharchive/issue
2024-03-28T15:18:14
2025-04-01T04:34:15.318616
{ "authors": [ "CWSPS154", "MZMohamed", "abanghendri", "danharrin", "sofian-io" ], "repo": "filamentphp/filament", "url": "https://github.com/filamentphp/filament/issues/12076", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
2302991518
try to import a file larger than 1MB, a validation error will occur. Package filament/filament Package Version v3.2.80 Laravel Version v11.7.0 Livewire Version v3.4.12 PHP Version PHP8.3.7 Problem description If you try to import a file larger than 1MB, a validation error occurs and the import fails. This is likely a validation error on the front end from Livewire or Filepond, but the exact cause is unclear. This occurs when a file is selected or dragged and dropped. The importer has a feature to specify the maximum number of rows with maxRows(), but there is no feature to specify the file size. It would be very useful to have control similar to a maxSize() feature. Please consider this. Expected behavior Should be imported Steps to reproduce On the /admin/patients screen, perform the import using ./storage/export-1-patients_over1M.csv. Reproduction video: https://github.com/filamentphp/filament/assets/28666304/6207545e-4b40-4d35-9764-ea2f566e3b13 Reproduction repository https://github.com/bzy107/filament-select-bug Relevant log output No response check config/livewire.php -> temporary_file_upload -> 'max:15360', check config/livewire.php -> temporary_file_upload -> 'max:...', I tried it, but it didn't work. The same error occurs when the size exceeds 1MB. So, it seems to be caught by the validation before it even reaches livewire.php, but I don't understand the lifecycle well enough to pinpoint the exact issue. when 'rules' => ['max:1024']: when 'rules' => ['max:1023']: File Upload size validation is handled by the minSize and maxSize functions on the form component: https://filamentphp.com/docs/3.x/forms/fields/file-upload#file-size-validation This is because FilePond, the JS package used for file uploads, has its own validation for file size that filament hooks into. However, with the FileUpload component being baked into the Import Action, there's no easy way to modify these values yourself. The max filesize should automatically be set to null if it isn't provided automatically, which would make me assume that any size would work. Can you check your phpinfo(); for your upload_max_filesize ? Where values are set within the blade component https://github.com/filamentphp/filament/blob/81949c0cdef974d3e57afd270f2dabd4f0da4cbe/packages/forms/resources/views/components/file-upload.blade.php#L72-L73 Where the FileUpload component is made in the ImportAction https://github.com/filamentphp/filament/blob/81949c0cdef974d3e57afd270f2dabd4f0da4cbe/packages/actions/src/Concerns/CanImportRecords.php#L80-L83 Thank you for your reply. I have already checked the same points. Since $getMinSize in FilePond is not set, it should be null. upload_max_filesize is also set to 2MB, so that’s not the issue. Therefore, I’m stuck. Progress Update: I modified the CanImportRecords.php file to add maxSize(), but it didn't work. Further investigation is needed. maxSize set 100MB: I found the cause. It was not on the Filament side but on the Nginx side, where I overlooked a 413 Request Entity Too Large error. I resolved it by adding the client_max_body_size setting. server { client_max_body_size 10M; .... I hope this helps anyone who encounters the same issue.
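For anyone hitting the same 413 error, a hedged sketch of the nginx change described above — the 10M value and the rest of the server block are illustrative, not the reporter's actual configuration:

```nginx
server {
    listen 80;
    server_name example.test;        # placeholder host
    client_max_body_size 10M;        # nginx defaults to 1M, which explains the >1MB failures

    location / {
        # ... existing PHP / Livewire proxy or fastcgi configuration ...
    }
}
```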
gharchive/issue
2024-05-17T15:11:31
2025-04-01T04:34:15.333311
{ "authors": [ "aSeriousDeveloper", "bzy107", "petrisorcraciun" ], "repo": "filamentphp/filament", "url": "https://github.com/filamentphp/filament/issues/12842", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1278006057
Dropdown filter misbehaviour during table update Package filament/filament Package Version 2.13.7 Laravel Version 9.17.0 Livewire Version 2.10.5 PHP Version 8.1.7 Bug description Hello all, I'm not sure this is a bug; I'm experiencing an odd behaviour on dropdown filters (SelectFilter or MultiselectFilter) when working with a heavy table (lots of markup). The problem is that if you're typing in the dropdown text area to search for an item while the table is still reloading, once the reload is finished the dropdown seems to reset: the search results go away and you're shown the default items (the ones that appear when you click on the dropdown). In this gif you can see that the user searches for the option "cuatro" by typing "cuatr" and is shown the two matches "cuatro" and "tres", but when the reload process ends the dropdown is reset and shows all the options (see screenshot below) Screenshot after the table finishes reloading: I think the desired behaviour is that the dropdown search doesn't reset and keeps showing only the matching results. Moved from #2844 Steps to reproduce No response Relevant log output No response I cannot replicate this, so please create a repo with a seeder where I can test the behaviour. If it's no longer an issue, please close this thread. Closing due to inactivity and no reproduction repository.
gharchive/issue
2022-06-21T07:24:55
2025-04-01T04:34:15.338671
{ "authors": [ "danharrin", "underdpt", "zepfietje" ], "repo": "filamentphp/filament", "url": "https://github.com/filamentphp/filament/issues/2858", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1893142683
Identifier 'collapsedGroups' has already been declared - Console errors using new $panel->spa() method Package filament/filament Package Version v3.0.47 Laravel Version v10.22.0 Livewire Version v3.0.2 PHP Version PHP 8.1 Problem description Adding spa() method to the panel causes console error. I think it comes from recently added public/js/filament/tables/components/table.js Expected behavior No console error Steps to reproduce class AdminPanelProvider extends PanelProvider { public function panel(Panel $panel): Panel { return $panel // [...] ->spa(); } } then npm run dev Reproduction repository Private repo only, sorry Relevant log output Uncaught SyntaxError: Identifier 'collapsedGroups' has already been declared at swapCurrentPageWithNewHtml (livewire.js?id=51f84ddf:5742:19) at livewire.js?id=51f84ddf:5895:11 at preventAlpineFromPickingUpDomChanges (livewire.js?id=51f84ddf:5934:5) at livewire.js?id=51f84ddf:5893:9 at prefetches.<computed>.whenFinished (livewire.js?id=51f84ddf:5537:9) at storeThePrefetchedHtmlForWhenALinkIsClicked (livewire.js?id=51f84ddf:5523:11) at livewire.js?id=51f84ddf:5879:11 at livewire.js?id=51f84ddf:5516:7 Having the same issue with class AdminPanelProvider extends PanelProvider { public function panel(Panel $panel): Panel { return $panel // [...] ->spa(); } } I have the same error. Removing the spa() method does not fix the bug. This error causes a cascade of other JavaScript errors. Same for me, but only if SPA enabled. any timeline on when this issue will be fixed? Same for me, but only if SPA enabled. Same issue here when I chained spa() method on Panel object, removed the spa() issue gone. Same here, only with spa()
gharchive/issue
2023-09-12T19:40:24
2025-04-01T04:34:15.344294
{ "authors": [ "CharlieEtienne", "DannyV90", "MGeurts", "Nelh", "abrahamgreyson", "kedniko", "qzmenko", "uwascan" ], "repo": "filamentphp/filament", "url": "https://github.com/filamentphp/filament/issues/8439", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1740714055
Fix issue with page scrolling to the top after changing the value of any field, or clicking the Create/Save button, inside the Tab component. This issue appears when I use tabs (Select field, Create, Save, ...): when changing or editing any data in an input field, the page view jumps back to the top of the page. I want to prevent the page view from changing after editing data in an input field. Solution: I added wire:ignore.self to the parent div of the tab component, because wire:ignore.self only ignores the DOM element it is attached to, and all child elements and fields keep working perfectly. [ ] Changes have been thoroughly tested to not break existing functionality. [ ] New functionality has been documented or existing documentation has been updated to reflect changes. [ ] Visual changes are explained in the PR description using a screenshot/recording of before and after. Thanks. The following issue also occurs on the wizard component of the filamentphp project; by adding wire:ignore.self to wizard.blade.php this issue can be resolved for it as well. @danharrin Can you PR it then? Yes, I will add a PR.
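For illustration only, a stripped-down sketch of where the directive lands; the real tabs.blade.php in the package contains far more markup, so treat the element and its attributes as placeholders:

```blade
{{-- Simplified stand-in for the tab component's root element --}}
<div wire:ignore.self x-data="{ activeTab: 1 }" class="tabs-wrapper">
    {{-- Tab headers and panels render in here. Livewire still morphs the child
         fields on updates; only this wrapper element itself is left untouched,
         which is what stops the page from scrolling back to the top. --}}
</div>
```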
gharchive/pull-request
2023-06-05T01:13:37
2025-04-01T04:34:15.348137
{ "authors": [ "danharrin", "ibrahimBougaoua", "rafayrty" ], "repo": "filamentphp/filament", "url": "https://github.com/filamentphp/filament/pull/6692", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1093114636
Client Allocation Request for: 1 Client Allocation Questions Core Information Name: 1 Website / Social Media: abc.com Region: North America DataCap Requested: 1PiB Addresses to be Notarized: f0127595 Notary Requested: MasaakiNawatani @zhuhukun Please subscribe to notifications for this Issue to be aware of updates. Notaries may request additional information on the Issue. This seems like a test ticket. Closing this.
gharchive/issue
2022-01-04T08:20:35
2025-04-01T04:34:15.350971
{ "authors": [ "MasaakiNawatani", "zhuhukun" ], "repo": "filecoin-project/filecoin-plus-client-onboarding", "url": "https://github.com/filecoin-project/filecoin-plus-client-onboarding/issues/1348", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
960456074
Client Allocation Request for: PDX Client Allocation Questions Core Information Name: PDX Website / Social Media: www.onbuff.com Region: Asia excl. Greater China DataCap Requested: 10TiB Addresses to be Notarized: f3w6wjjbelc2hcp2uptm4avwv7lfqc4r7cna77kf5giyattcffkdlbfxh53efsafiuec5jqyfrcf3sxet4hv5a Notary Requested: Broz221 @Prapetrice Hi, thanks for your application! Please introduce you and your company to me.
gharchive/issue
2021-08-04T13:30:06
2025-04-01T04:34:15.353385
{ "authors": [ "Broz221", "Prapetrice" ], "repo": "filecoin-project/filecoin-plus-client-onboarding", "url": "https://github.com/filecoin-project/filecoin-plus-client-onboarding/issues/621", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
790720947
Client Allocation Request for: FileDrive Client Allocation Questions Core Information Name: FileDrive Website / Social Media: filedrive.io DataCap Requested: 5TiB Addresses to be Notarized: f3wgfwtrs5p6jrkwfl2mksqa2ivgbgdjjrhjbefy3n7qzvotc3y6sazmp5gfyj7um6jlgdvlbiepzawnc6wxtq Notary Requested: ericfish Hi, @laurarenpanda Our Datacap can be allocated to the region of Asia-GCN. Please modify and reopen this issue if you are in this region. This user’s identity has been verified through filplus.storage
gharchive/issue
2021-01-21T05:49:34
2025-04-01T04:34:15.356803
{ "authors": [ "data-programs", "laurarenpanda", "rayshitou" ], "repo": "filecoin-project/filecoin-plus-client-onboarding", "url": "https://github.com/filecoin-project/filecoin-plus-client-onboarding/issues/81", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
2024118334
No List View? What feature or improvement do you think would benefit Files? Just downloaded and installed via the review on PC Mag online and would be a great Windows Explorer replacement except for the fact that there isn't a "List View" option to view files. You seem to have all of the other views available except for List View which, I feel, is the most productive and efficient way to view files within a folder. Not sure I'm going to keep the app because of that. Would be willing to purchase and recommend only when List View is enabled. Thanks. John Schiller. Requirements This proposal will accomplish X This proposal will accomplish Y This proposal will accomplish Z Files Version 3.0.15.0 Windows Version 10.0.19045.3693 Comments No response Thanks for the feedback, Merging with #2509. You can check there or click subscribe in the right sidebar for updates. @CSARJohn thank you for your patience, I'm excited to share that a List View layout will be included in the next release.
gharchive/issue
2023-12-04T15:24:32
2025-04-01T04:34:15.475706
{ "authors": [ "CSARJohn", "Josh65-2201", "yaira2" ], "repo": "files-community/Files", "url": "https://github.com/files-community/Files/issues/14163", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
814078042
external device opening failure Describe the bug A clear and concise description of what the bug is. To Reproduce Steps to reproduce the behavior: Go to '....' Click on '....' Scroll down to '....' See error Expected behavior A clear and concise description of what you expected to happen. Screenshots If applicable, add screenshots to help explain your problem. Desktop (please complete the following information): OS Version: [e.g. Windows 10 20H2 19042.804] App version: [e.g. v1.0] Additional context Add any other context about the problem here. Does this problem occur again after restarting the app? Log file Please upload the log file here so that we can understand your issue better. You can access it from Settings->About->Open log location. You can also find this file by going to %localappdata%\Packages\49306atecsolution.FilesUWP_et10x9a9vyk8t\LocalState, you should see a file in this directory called debug.txt or/and debug_fulltrust.txt. Hello @Sagar-v4 could you add more details? What's the app version? What happens when you try to open the external device?
gharchive/issue
2021-02-23T03:59:41
2025-04-01T04:34:15.480774
{ "authors": [ "Sagar-v4", "gave92" ], "repo": "files-community/Files", "url": "https://github.com/files-community/Files/issues/3687", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1074673712
Having two tabs open and dragging one will close the other window Description If I have two tabs open and drag the one I want into a new window, it just closes the other one and opens the new one. Steps To Reproduce No response Expected behavior It should keep both windows open at their set locations. Files Version 2.0.41.0 Windows Version 10.0.19043.1387 Relevant Assets/Logs debug.log debug_fulltrust.log This is still an issue in version 2.1.0.0 This issue should be resolved with #8433
gharchive/issue
2021-12-08T17:57:59
2025-04-01T04:34:15.484025
{ "authors": [ "Josh-65", "ariana2011", "yaichenbaum" ], "repo": "files-community/Files", "url": "https://github.com/files-community/Files/issues/7246", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1157981270
Make default file manager Description The "Make default file manager" option is not showing. Steps To Reproduce No response Expected behavior I want a solution. Files Version 2.1.15.0 Windows Version Edition Windows 11 Home Single Language Version 21H2 Installed on 28-02-2022 OS build 22000.527 Experience Windows Feature Experience Pack 1000.22000.527.0 Relevant Assets/Logs Please see the docs.
gharchive/issue
2022-03-03T05:19:12
2025-04-01T04:34:15.487384
{ "authors": [ "tanmoypaul007", "yaichenbaum" ], "repo": "files-community/Files", "url": "https://github.com/files-community/Files/issues/8560", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1810726625
Feature: Open all items in a tag Resolved / Related Issues [x] Were these changes approved in an issue or discussion with the project maintainers? In order to prevent extra work, feature requests and changes to the codebase must be approved before the pull request will be reviewed. This prevents extra work for the contributors and maintainers. Closes #7493 Validation How did you test these changes? [x] Did you build the app and test your changes? [ ] Did you check for accessibility? You can use Accessibility Insights for this. [ ] Did you remove any strings from the en-us resource file? [ ] Did you search the solution to see if the string is still being used? [ ] Did you implement any design changes to an existing feature? [ ] Was this change approved? [x] Are there any other steps that were used to validate these changes? Open app and navigate to Home Enable the Tags widget Right click on the title bar Click on Open all Screenshots (optional) @ferrariofilippo I pushed a small design tweak so that clicking the tag name does the search for more items. This will allow us to repurpose the "view more" button to "open all". What about using our OpacityIcons? The Glyph looks strange @ferrariofilippo I don't think the opacity icon is a good fit but to be honest, there isn't an intuitive icon for "open all". An alternative would be to use the more icon and display the menu item to "open all" when clicking the icon. Good call, we won't need to change the code if we will need any extra actions Should we remove the button border? Can you set it to transparent? Does this also work for the sidebar? Not yet, I'll work on it
gharchive/pull-request
2023-07-18T21:11:55
2025-04-01T04:34:15.495153
{ "authors": [ "ferrariofilippo", "yaira2" ], "repo": "files-community/Files", "url": "https://github.com/files-community/Files/pull/12972", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
2406698051
Code Quality: Fixed an issue where checking if running as admin called a wrong API Summary The application now checks whether it is running as Administrator correctly.

Why doesn't drag and drop work? This is because of UIPI (User Interface Privilege Isolation) using MIC (Mandatory Integrity Control), which blocks a process that has a lower IL (integrity level) from interacting with a process that has a higher IL. This was introduced in Vista together with UAC. (As an aside, the fact that UWP apps cannot access system resources and user data without the user's consent is not caused by UIPI — that comes from AppContainer itself.)

IL gets higher from top to bottom:

- AppContainer/LowBox (UWP)
- Untrusted (Chrome, IE)
- Low
- Medium (default)
- Medium Plus
- High/Elevated (elevated)
- System (system services)
- Protected Process
- Installer (Windows)

UAC settings behavior:

| Level | Modification by a process | Modification by user | On what session | Remarks |
| --- | --- | --- | --- | --- |
| Highest | Yes | Yes | Secure Desktop (dimmed) | Default in Vista |
| 2nd | Yes | No | Secure Desktop (dimmed) | Default in 7 onwards |
| 3rd | Yes | No | Normal Desktop (not dimmed) | Not recommended |
| Lowest | No | No | N/A | Not recommended |

"By a process" means: installation of software whose manifest calls for elevation, and invocation via the RunAs verb. "By user" means: modification via the Windows Settings app and certain Control Panel applets.

Registry values to be modified:

| Level | ConsentPromptBehaviorAdmin | ConsentPromptBehaviorUser | EnableLUA | PromptOnSecureDesktop |
| --- | --- | --- | --- | --- |
| Highest | 2 (AAC UAC) | 3 (OTS UAC) | 1 | 1 |
| 2nd | 5 | 3 | 1 | 1 |
| 3rd | 5 | 3 | 1 | 0 |
| Lowest | 0 | 3 | 0 | 0 |

AAC (Admin Approval Consent): UAC displays only Yes or No, since the logon user is in the Administrators group. OTS (Over The Shoulder): UAC displays text boxes to enter the credentials of an admin identity.

Disabling UAC: when you disable UAC through the UAC Settings dialog (UIPI will be disabled as well), the UAC prompt won't be shown. However, it seems that DataExchangeHost.exe still doesn't accept dropping onto a window of a process that has a higher IL (a Windows OS bug?) even though UAC is disabled. A contributor to Windows Terminal made a workaround for it. I should also note that we might be able to use DragQueryFile to work around UIPI, though I'm still not sure.

How UAC works (if it helps): on startup, the kernel executes explorer.exe with a token as below.

- Standard User (not in Administrators group): only a restricted token is generated after logon. ShellExecute calls appinfo.exe (AIS), which then calls consent.exe (the UAC prompt) via its hosted inner service. When a valid credential of an administrator is entered, AIS calls CreateProcessAsUser with that administrative identity; otherwise it returns ERROR_ACCESS_DENIED. The token generated by this routine is called a filtered token (not a privileged token).
- Administrator User (in Administrators group): both privileged and restricted tokens are generated. When Yes is clicked, AIS calls CreateProcessAsUser with the logon user's privileged token; otherwise it returns ERROR_ACCESS_DENIED.
- BUILTIN/Administrator (the real administrator): only a privileged token is generated, and AIS doesn't call consent.exe. But this behavior can be altered from group policy.

What I've described so far applies when the user hasn't changed anything through the Windows Registry or Group Policy; some behavior can be changed otherwise.

Can you cache the result? For #13394 Yeah, do you think it would be better in WindowContext? I would keep the code in the service, but we can cache it in the context. @yaira2 I found that there's a function ChangeWindowMessageFilterEx to change the filtering level.
In theory this workarounds UIPI behavior, but may expose some security concerns such as a process that has lower IL can run arbitrary code by injection code to DLL. This is maybe why Windows Terminal didn't do this (or maybe they didn't know this). I would prefer to do the same as Terminal. FYI @ahmed605 IL get higher from top to bottom this order is kinda wrong, AC has higher IL than Untrusted, and it's actually Low IL but with lower TrustLevel than usual Low IL apps, and there's also LPAC (Less Privileged AppContainer) which is almost on the same level as Untrusted but a little bit higher, it's used by Chromium, Firefox, and Microsoft Edge Legacy I was really uncertain in that point. I'm even not sure this is IL. But a docs i referred (not official) wrote as AppContainer < Untrusted. Thank you for the correction. What about that function above to workaround this blocking? What about that function above to workaround this blocking? I'm a bit worried about the security concerns, you can still inject through only the HWNDs btw I see, thank you for letting me know it. I'll just check token type and disable drag and drop accordingly. Looks like SettingsButton is inaccessible. Is this ready for review? I'll cache this in the context class. FULLY TESTED. 100% WORKING. Do we need to adjust the text in the "running as admin" prompt? Would we like to remove the commented out code lines? Ready again. All good. @0x5bfa I rebased the branch from main, can you confirm that everything is in order? Yes, except that test failing
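As a point of reference, here is a minimal C# sketch of one way to perform the "running as admin" check discussed in this PR. This is not necessarily the API the PR switched to — just the standard WindowsPrincipal approach, which returns true only for the elevated (privileged) token because UAC marks the Administrators SID deny-only in the filtered token.

```csharp
using System.Security.Principal;

public static class ElevationHelper
{
    // True only when the process holds the elevated token; a non-elevated
    // admin session carries the filtered token and fails this group check.
    public static bool IsRunningAsAdmin()
    {
        using WindowsIdentity identity = WindowsIdentity.GetCurrent();
        return new WindowsPrincipal(identity).IsInRole(WindowsBuiltInRole.Administrator);
    }
}
```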
gharchive/pull-request
2024-07-13T04:39:20
2025-04-01T04:34:15.518164
{ "authors": [ "0x5bfa", "ahmed605", "yaira2" ], "repo": "files-community/Files", "url": "https://github.com/files-community/Files/pull/15795", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
638591376
more contact with microsoft company Well, this team has already improved the explorer a great deal... However, to make this project even better, I think your team should make more contact with Microsoft. Maybe they can help you perfect your design or at least sponsor your team to some extent. Your app is very good, and the idea behind your app should be integrated into the new Windows 10; it would make it a better system. In short, I don't want this project to become something like trans...TB. I DO NOT WANT TO PAY MONEY FOR A PRO VERSION. You should consider writing a letter to the PowerToys team. I don't think Microsoft would replace File Explorer with Files UWP or any other file manager. The real reason why Microsoft hasn't updated File Explorer for years is that Explorer is not just a file manager. It's deeply tied to the operating system, and replacing it means Microsoft would have to make a new shell for Windows 10, as Explorer currently serves as the shell. They can also hide the existing File Explorer and make Files UWP the default launcher for folders. Windows 10X will be getting a new Shell, and the new Files app will be important for those devices (even if the old file explorer is still there for the Win32 app container and common shell dialogs) @mdtauk We don't really know whether Windows 10 will use the new Shell that's coming in Windows 10X. Also, Files UWP doesn't currently work in Windows 10X. I think there's some misunderstanding here... there's a hidden UWP file explorer inside Windows 10, however it sucks... You should really cooperate with Microsoft and I believe they might accept your idea in the next version of Windows 10. You can tell them: we are not using it to replace File Explorer, we are providing a new tool for customers to manage their files! Why do we need to be tied to Explorer? Although Windows has search in File Explorer, customers want easier search, so they developed PowerToys Run. You can persuade Microsoft to sponsor this program!! I think this is very unlikely to happen. Microsoft cannot rely on some third-party file manager. Microsoft will not accept the risk of us breaking something in Files. Why sponsor a project if you have to build the same thing? I think the best we can hope for is that, with Windows 10X and the (maybe) new modern file explorer, the file APIs improve and you can maybe choose a default one. Hi, while Microsoft will not use our app since it's third party, we shared the concept with them a couple of months ago and they are helping us in different areas. As time goes on we hope Files will be able to replace File Explorer for most users, but at this point the app still needs a lot of work to get there. Happy to see this. (≧∇≦)/
gharchive/issue
2020-06-15T06:47:58
2025-04-01T04:34:15.523729
{ "authors": [ "Jaiganeshkumaran", "dentistfrankchen", "lampenlampen", "mdtauk", "yaichenbaum" ], "repo": "files-community/files-uwp", "url": "https://github.com/files-community/files-uwp/issues/1074", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
2533315312
If possible, could the official team consider making it compatible with ComfyUI? The FLUX model is quite unfriendly to Mac users, but fortunately, there's MFLUX, which is the only successful attempt I've had. If this project could be integrated into ComfyUI, it would be much more convenient than using the terminal. It's possible that diffusionkit is also working on something similar. I believe this could be the springtime for Mac users. I’m starting to love this project (and also DiffusionKit), considering how many things are coming out for mlx. I’m beginning to think about something similar to ComfyUI, which allows heterogeneous and complex workflows, helping you understand what’s happening without diving too much into programming, but entirely designed for mflux. The way mflux and DiffusionKit work is fantastic. ComfyUI might become a bottleneck since it’s primarily developed for CUDA. If I had the skills, I would have already started working on it. :( I’m sorry I can’t be of much help since I’m just a coding novice myself. Ever since I started using ComfyUI, I haven’t opened WebUI again. I believe ComfyUI has great potential due to its high customizability. For example, with the MFlux project, if it could run as a new plugin within ComfyUI, I could use it in ways like this: leveraging the existing plugins in ComfyUI, it might be possible to integrate MFlux into workflows for reverse-engineering prompts, or into workflows for background removal... There are many other similar possibilities worth exploring for users, which is why I’m amazed by ComfyUI’s potential. However, I now see that participants in the MFlux project are researching WebUI-based content. If that can be realized, that’s also good, although I believe its extensibility is far less than ComfyUI. But we are just users, not code contributors, and we always tend to take things for granted, not realizing how fortunate we are to have come across this project. Thanks to MFlUX @raysers @azrahello These are really kind words, thank you very much :) Speaking for myself, I have not personally used ComfyUI yet but it does look very powerful and there seems to be a big community around it (sharing workflows etc). Currently, I have no idea how much of an effort it would be to integrate MFLUX as a backend (if it is even possible etc.) and what kind of features would be needed to be supported by us to be properly integrated, but it is definitely an interesting avenue to explore. If similar kinds of integrations to ComfyUI have been made before, it would be great to know and read up on them. At the moment, there are still things prioritised on the backend/core part of the system (most notably LoRA fine-tuning support and some other features) which will be the focus near term from my part as well as overseeing incoming additions and contributions. However, once the backend feels more fully fleshed out and stable, then a natural next step would be to consider integrations with things like ComfyUI or similar. (Of course, if there is a good PR proposal with a Comfy UI integration, it will be considered). It is however very valuable to get your opinions here about supporting a more "basic" webUI vs ComfyUI to get some sense of what to prioritise longer term since there are many different possible routes to take. I think that @anthonywu has made some good points about the role of mflux (the core) vs different possible frontends. 
Even though mflux might not host a whole family of different frontends itself, it is probably a good idea to also prioritise features that will enable future GUI support, such as stepwise image generation https://github.com/filipstrand/mflux/pull/59 etc. Thanks to 大佬 @filipstrand for clearing up the confusion. I see that you all are still working hard to continuously improve Mflux, and I sincerely admire and extend my deep gratitude. Of course, you should first focus on stabilizing the backend according to your current work plan, but 大佬’s comments on the frontend UI also make me excited. Thanks again! PS: In Chinese, “大佬” refers to a master or expert in a certain field. I could only think of this word to express my respect—apologies for any confusion. PS: In Chinese, “大佬” refers to a master or expert in a certain field. I could only think of this word to express my respect—apologies for any confusion. Wholesome GitHub Hello everyone, I have started working on the porting process, aiming to implement a simplified version of Mflux in ComfyUI. Currently, I have completed the image generation workflow using a 4-bit quantized model in ComfyUI. The next step is to debug the integration of LoRA and ControlNet, which might be beyond my expertise, but I will do my best. If Mflux is ported as a ComfyUI plugin, it will need to follow the plugin installation rules of ComfyUI, which may require releasing a new project named "Mflux ComfyUI" — this is coming soon. Hello everyone, I have started working on the porting process, aiming to implement a simplified version of Mflux in ComfyUI. Currently, I have completed the image generation workflow using a 4-bit quantized model in ComfyUI. The next step is to debug the integration of LoRA and ControlNet, which might be beyond my expertise, but I will do my best. If Mflux is ported as a ComfyUI plugin, it will need to follow the plugin installation rules of ComfyUI, which may require releasing a new project named "Mflux ComfyUI" — this is coming soon. Hi, this has started, and everyone can try it out via https://github.com/raysers/Mflux-ComfyUI However, it currently only has the basic text2img functionality. Although there are LoRA parameters in the nodes, they are not yet usable. LoRA is an urgent issue that I can't solve on my own. I hope someone experienced can offer guidance or even collaborate on the development. Has anyone tried it? I might have overestimated the demand for using mflux on ComfyUI. So far, only 20 people have tried to clone it. But I’m still updating it. I’ve now added ControlNet and updated some workflows. For those who want to try it, you can git clone, and for those who have already installed it, you can git pull. ControlNet is something mflux already had, I’ve just implemented its original features on ComfyUI. As for the early exploration of those workflows, that's why I’m persistently trying to port mflux to ComfyUI—I’ve always believed that mflux can only fully realize its potential through ComfyUI. Greetings to everyone again! @raysers I’ve tried it, and I found it very intelligent as a development. The only thing I would make a PR for is to have the models quantized to 8-bit (this first and foremost, and perhaps, as an option, the ability to choose whether to use mflux at 16 or 32-bit). Other than that, it’s magnificent—let’s hope it grows and expands (though I understand that might not depend on you). 
Inpaint support is missing, which is also missing in mflux, although mflux has DiffusionKit, which doesn’t support LoRa, while mflux does. In my opinion, it’s a bit of a mess :P Thank you @azrahello for letting me know that I'm not working alone. Compared to the ComfyUI version of DiffusionKit, my preview issue is caused by a difference in the way it's called. His code goes deeper into the core logic, using latent decoding directly. On the other hand, I’m calling the image generated by MFLUX's FLUX1 and converting it back into a tensor to achieve the preview. As a beginner, I can only work with these ready-made image generation integration files in MFLUX. I’d say I’m not professional enough yet. Honestly, I really hope that once the official team has some free time after completing their current workload, they could consider working on this porting project. When that happens, my temporary version can step aside. As @filipstrand 大佬 mentioned, "it would be great to know and read up on them." If a reference example is needed, I would still recommend DiffusionKit's ComfyUI version, which can be found at https://github.com/thoddnn/ComfyUI-MLX. This port is highly successful, with clear steps from model loading to latent input and VAE decoding, creating foundational nodes that are easy to follow. It’s like the difference between a notebook and a custom-built PC—where with the latter, we can freely choose the parts we like. This might also align with the core philosophy of COMFYUI. Hello everyone, Lora is coming now. Hey, @azrahello, you were right. In your last reply, you mentioned giving users more options, allowing them to freely choose between the full version (dev and schnell), or 4-bit or 8-bit quantization. So, I restructured the model loading mechanism to align with the official implementation. Once I finished that, the Lora issue was resolved as well. You can refer to the discussion in #47 for more details. I think I might have taken a detour earlier. Due to the low performance of my machine, I was stuck in a mindset of only using 4-bit quantized models to balance quality and performance. But now I've realized that the official design offers more flexibility. Users with high-performance machines might prefer the full version or at least 8-bit quantization. Anyway, Lora is here now. This is a significant feature implemented in ComfyUI using mflux. I believe we're not far from achieving the full version’s functionality. The only thing missing is the model-saving part. Additionally, the official team is still rolling out new features, and I’ll try to keep up within ComfyUI. Cheers! I have completed the implementation of the MFLUX model saving feature in COMFYUI, and everyone can go and try it out. I also look forward to the next update of MFLUX, such as text-to-image and JSON calling features. This is just a minor update; this is a feature that has been available in the official version for some time. However, with this completion, COMFYUI can now basically cover the main functionalities of 0.3.0. Thank you, @filipstrand 大佬. I feel that I've made a lot of improvement during this porting process, even though my work was just a very basic port and not very in-depth. I am very grateful to the Mflux project for allowing me to achieve this improvement, and I look forward to the next update of Mflux. Best wishes! Congratulations on the release of mflux 0.4.1! The img2img feature is fantastic, and I'm still exploring how to use it. 
I've also updated ComfyUI to include the img2img feature from version 0.4.1. However, the requirements.txt hasn't been updated to 0.4.1 yet, so if you want to try the new features in ComfyUI, you'll need to manually upgrade the mflux dependency to 0.4.1 in its Python dependencies. Best regards.
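For anyone curious how the "converting it back into a tensor" step mentioned above typically looks, here is a small sketch — not the plugin's actual code — of the usual PIL-to-ComfyUI conversion (ComfyUI IMAGE tensors are float32, shape [batch, height, width, channel], values in 0–1):

```python
import numpy as np
import torch
from PIL import Image


def pil_to_comfy_image(image: Image.Image) -> torch.Tensor:
    """Convert a PIL image (e.g. one returned by an mflux generation call)
    into the [B, H, W, C] float tensor that ComfyUI nodes pass around."""
    array = np.asarray(image.convert("RGB"), dtype=np.float32) / 255.0
    return torch.from_numpy(array).unsqueeze(0)  # add the batch dimension
```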
gharchive/issue
2024-09-18T10:13:10
2025-04-01T04:34:15.553894
{ "authors": [ "azrahello", "filipstrand", "n0kovo", "raysers" ], "repo": "filipstrand/mflux", "url": "https://github.com/filipstrand/mflux/issues/56", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
271910639
Bug/issue166 This PR fixes #166. Two things have changed: we now pick the runtime assembly according to the target platform, and we prefer the resolved runtime dependency over the inherited dependency (inherited assembly name). In order to actually make sure that we can create a connection to a SQL Server according to issue #166, I have created a database on Azure. The connection is made with a read-only user that is only allowed to connect/login. Note: The RuntimeDependencyResolver needs a little cleanup, but we are going to make some changes there anyway to accommodate script packages. Are we good to merge, @filipw? Would like to get this into the script packages branch before I move on there :) thanks!
gharchive/pull-request
2017-11-07T17:07:06
2025-04-01T04:34:15.557278
{ "authors": [ "filipw", "seesharper" ], "repo": "filipw/dotnet-script", "url": "https://github.com/filipw/dotnet-script/pull/169", "license": "mit", "license_type": "permissive", "license_source": "bigquery" }
818036403
Update sbt-scalafix to 0.9.26 Updates ch.epfl.scala:sbt-scalafix from 0.9.20 to 0.9.26. GitHub Release Notes - Version Diff I'll automatically update this PR to resolve conflicts as long as you don't change it yourself. If you'd like to skip this version, you can just close this PR. If you have any feedback, just mention me in the comments below. Configure Scala Steward for your repository with a .scala-steward.conf file. Have a fantastic day writing Scala! Ignore future updates Add this to your .scala-steward.conf file to ignore future updates of this dependency: updates.ignore = [ { groupId = "ch.epfl.scala", artifactId = "sbt-scalafix" } ] labels: sbt-plugin-update, semver-patch Superseded by #161.
gharchive/pull-request
2021-02-27T21:40:46
2025-04-01T04:34:15.562136
{ "authors": [ "scala-steward" ], "repo": "finagle/finagle-mysql-shapes", "url": "https://github.com/finagle/finagle-mysql-shapes/pull/157", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
1323624878
[BUG] Script descriptions getting scrambled The script definitions of the last set of repo submissions are scrambled at https://www.finalelua.com/scripts. Obviously something to do with the extraction of ScriptGroupName, but many of the most recent dozen or so suffer this scrambling. See attached NOTE: Just realised maybe it's because the [[ ... ]] header descriptions have no line breaks? I see the issue, though I don't have any more time this weekend to work on it. The metadata parser assumes that multiline strings actually span multiple lines. So here when you're defining the multiline string on a single line it doesn't see the ending so it keeps on going until there's another line that starts with ]]. https://github.com/finale-lua/lua-scripts/blob/73f327b1c2e8c4f609403da21dd1a088c4f123ed/src/cross_staff_offset.lua#L10-L27 This is a bug in the parser that should exist, but I also don't know when I'll next have time to fix it (I probably spent too much time on Lua stuff this weekend). Yes @Nick-Mazuk - you've had nose heavy on the grindstone. I'll re-submit what I can with real multiline statements. Git is amazing until that 1% of the time when it's a nightmare. Good luck, y'all. I'll re-submit what I can with real multiline statements. If we can keep this issue open even after these PRs that would be great. That way we can keep track of it here so when I get another sprint to work on Lua stuff I know to fix the underlying issue.
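A quick sketch of the distinction the parser cares about, using the plugindef()/finaleplugin.Notes convention from this repo; the script text is abbreviated and illustrative, not copied from cross_staff_offset.lua:

```lua
function plugindef()
    -- Problematic form: the [[ ... ]] description opens and closes on one line,
    -- so the metadata parser never finds a line starting with ]] and keeps reading.
    -- finaleplugin.Notes = [[ Shifts cross-staff entries by a fixed offset. ]]

    -- Parsed correctly: the description really spans multiple lines.
    finaleplugin.Notes = [[
        Shifts cross-staff entries by a fixed offset.
        Select a region before running the script.
    ]]
    return "Cross-Staff Offset", "Cross-Staff Offset", "Shifts cross-staff entries."
end
```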
gharchive/issue
2022-08-01T00:18:40
2025-04-01T04:34:15.572305
{ "authors": [ "Nick-Mazuk", "cv-on-hub", "rpatters1" ], "repo": "finale-lua/lua-scripts", "url": "https://github.com/finale-lua/lua-scripts/issues/298", "license": "CC0-1.0", "license_type": "permissive", "license_source": "github-api" }
109331655
an exceeded limit working with metadata api Hi Andrew, first of all thanks a lot for your posts and help. My issue is that I keep getting an exception when I try to get a list of objects. The exception is: Web service callout failed: WebService returned a SOAP Fault: EXCEEDED_ID_LIMIT: record limit reached. cannot submit more than 10 records in this operation I have tried getting the objects in a zip file like you have shown in one of your articles, but the issue is I need to access information about objects without having to parse the files in the zip. I would appreciate it if you could help or guide me to a potential solution or workaround. Thank you If you simply want to list objects, you can use native Apex APIs for this and avoid the callout this wrapper is making. Take a look at https://developer.salesforce.com/docs/atlas.en-us.apexcode.meta/apexcode/apex_dynamic_global_describe.htm. Does this help? Let me know if you need more help on this.
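A minimal sketch of the native describe call suggested in the answer — no Metadata API callout and therefore no EXCEEDED_ID_LIMIT:

```apex
// Lists every locally visible object without calling the Metadata API.
Map<String, Schema.SObjectType> describeMap = Schema.getGlobalDescribe();
for (String apiName : describeMap.keySet()) {
    Schema.DescribeSObjectResult info = describeMap.get(apiName).getDescribe();
    System.debug(info.getLabel() + ' (' + apiName + '), custom: ' + info.isCustom());
}
```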
gharchive/issue
2015-10-01T16:04:12
2025-04-01T04:34:15.575365
{ "authors": [ "afawcett", "maryembourhi" ], "repo": "financialforcedev/apex-mdapi", "url": "https://github.com/financialforcedev/apex-mdapi/issues/105", "license": "bsd-3-clause", "license_type": "permissive", "license_source": "bigquery" }
1824856148
FPJSAgent hasn't loaded yet. Make sure to call the init() method first. We are definitely calling the init() method first before calling getVisitorData, but maybe it has more to do with Next.js SSR and hydration? We aren't using the React SDK because we need access to the client outside of the React rendering lifecycle for networking, to add some headers to requests. Hi @kdawgwilk, I can think of two possible issues: Make sure you call init with await: await fpjsClient.init(). It's an async function that loads the latest fingerprinting logic from our CDN, so you must wait for the network request to finish. Make sure you are initializing the client and fingerprinting only on the client, not on the server or at build time; you can find more information and examples in our documentation: https://dev.fingerprint.com/docs/usage-with-server-side-rendering-frameworks If none of these resolve the issue, please post an expanded example indicating where in the Next.js lifecycle you are using the Fingerprint agent, thank you! Closing as answered.
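A hedged TypeScript sketch of the pattern suggested above, awaiting init() before any identification call and guarding against server-side execution. The constructor options and header name are assumptions for illustration — check the package README for the exact shape; only init() and getVisitorData() are confirmed by this thread.

```typescript
import { FpjsClient } from '@fingerprintjs/fingerprintjs-pro-spa';

// Assumed constructor options; adjust to the package's documented shape.
const fpjsClient = new FpjsClient({ loadOptions: { apiKey: 'your-public-api-key' } });

export async function getFingerprintHeaders(): Promise<Record<string, string>> {
  // Never run on the server or at build time — the agent needs a real browser.
  if (typeof window === 'undefined') return {};

  await fpjsClient.init();                    // must finish before any identification call
  const { visitorId } = await fpjsClient.getVisitorData();
  return { 'x-visitor-id': visitorId };       // hypothetical header name
}
```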
gharchive/issue
2023-07-27T17:26:12
2025-04-01T04:34:15.590532
{ "authors": [ "JuroUhlar", "kdawgwilk" ], "repo": "fingerprintjs/fingerprintjs-pro-spa", "url": "https://github.com/fingerprintjs/fingerprintjs-pro-spa/issues/54", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }