| repo (stringclasses 15) | fix_commit (stringlengths 40 40) | buggy_commit (stringlengths 40 40) | message (stringlengths 3 64.3k) | files (listlengths 1 300) | timestamp (timestamp[s], 2013-03-13 20:45:00 – 2026-04-11 07:48:46) |
|---|---|---|---|---|---|
huggingface/transformers | 152f5b68feb4ddda2938a144d8514b44277037c5 | 69f003696b55de75b7f18888c03111909a7cd537 | Fix TimesFM patch normalization instability (#42099)
* Fix TimesFM potential numerical instability in masked mean/std calculation.
* Fix sigma clamping to 1 instead of config.tolerance in TimesFM. | [
{
"path": "src/transformers/models/timesfm/modeling_timesfm.py",
"patch": "@@ -341,11 +341,7 @@ def _forward_transform(\n ) -> tuple[torch.Tensor, tuple[torch.Tensor, torch.Tensor]]:\n \"\"\"Input is of shape [B, N, P].\"\"\"\n mu, sigma = self._timesfm_masked_mean_std(inputs, patched_pa... | 2025-11-25T08:56:12 |
ggml-org/llama.cpp | a0f3897d53e0e956982ca23abb0d381fe71722f8 | e15cd06a94fce1fafe68f44db01ca69963623df4 | vulkan: fix top_k bug when there are ties in the input (#17659)
* vulkan: Reduce temporary memory usage for TOP_K
- Compute row size for the temp buffer based on the output of the first pass.
- Update shader addressing math to use the output row size
- Pass the output row size as "ncols_output", what used to be "ncol... | [
{
"path": "ggml/src/ggml-vulkan/ggml-vulkan.cpp",
"patch": "@@ -4013,7 +4013,7 @@ static void ggml_vk_load_shaders(vk_device& device) {\n uint32_t nary_shmem = 2 * sizeof(int) * BLOCK_SIZE +\n sizeof(int) * device->subgroup_size +\n ... | 2025-12-05T21:03:19 |
ollama/ollama | 3b4bab3dc55c615a14b1ae74ea64815d3891b5b0 | cbd6e3b38e4491539e4f5ffdd5b6d49c6eadd039 | Fix embeddings load model behavior (#2848) | [
{
"path": "api/types.go",
"patch": "@@ -121,7 +121,6 @@ type Runner struct {\n \tVocabOnly bool `json:\"vocab_only,omitempty\"`\n \tUseMMap bool `json:\"use_mmap,omitempty\"`\n \tUseMLock bool `json:\"use_mlock,omitempty\"`\n-\tEmbeddingOnly bool `json:\"embedd... | 2024-03-01T01:40:56 |
denoland/deno | 610143913437ee07c935821c365e4feaff875228 | 64a1d86cb06dcdba20b8b95b75b7fd00b0a40649 | fix(ci): fix ordering of platforms in ecosystem_compat_slack (#32393)
Results for macOS and linux were swapped | [
{
"path": "tools/ecosystem_compat_slack.ts",
"patch": "@@ -135,7 +135,7 @@ function createMessage(ecosystemReports: Record<string, EcosystemReport>) {\n },\n ];\n \n- for (const os of [\"darwin\", \"linux\", \"windows\"]) {\n+ for (const os of [\"linux\", \"darwin\", \"windows\"]) {\n ... | 2026-03-02T11:15:05 |
vuejs/vue | 1f84dd1c2488d12ef144d4b548b0e80647f9403c | f38d44e23bc9f3eda950a9436205f42be006abfc | fix: fix empty array edge case in normalizeChildren
fix #6790 | [
{
"path": "src/core/vdom/helpers/normalize-children.js",
"patch": "@@ -49,14 +49,16 @@ function normalizeArrayChildren (children: any, nestedIndex?: string): Array<VNo\n lastIndex = res.length - 1\n last = res[lastIndex]\n // nested\n- if (Array.isArray(c) && c.length > 0) {\n- c = norm... | 2017-10-13T05:09:25 |
huggingface/transformers | 69f003696b55de75b7f18888c03111909a7cd537 | 9bd85b01438997c88d26f82dd3088003bb8ea815 | Tiny doc fix (#42296)
fix link | [
{
"path": "CONTRIBUTING.md",
"patch": "@@ -125,9 +125,9 @@ If you're contributing a **vision-language model** (or any multimodal model that\n All new models should use the modular architecture pattern. Create a `modular_<model_name>.py` file using the modular model converter:\n \n - Use the CLI, [`transform... | 2025-11-24T21:57:46 |
ollama/ollama | fa2f2b356384a6ecd103952915e75a4b6a8c33a5 | cbf4970e0f20b131a9db5a719b0929fbebe9a304 | fix: print usedMemory size right (#2827) | [
{
"path": "gpu/gpu_info_cuda.c",
"patch": "@@ -156,7 +156,7 @@ void cuda_check_vram(cuda_handle_t h, mem_info_t *resp) {\n }\n \n LOG(h.verbose, \"[%d] CUDA totalMem %ld\\n\", i, memInfo.total);\n- LOG(h.verbose, \"[%d] CUDA usedMem %ld\\n\", i, memInfo.free);\n+ LOG(h.verbose, \"[%d] CUDA use... | 2024-02-29T19:11:04 |
ggml-org/llama.cpp | fd57b24c0f2b28f54c1375481f470b8e589909eb | 6ab0d6496074f51644def999f94686d1d939785e | ggml webgpu: unary op suppport, code refactoring, ops support (#17764)
* Squashed commit of the following:
commit b3c6bf4b0450d8d452b934df27a0fb7cb53cd755
Author: Abhijit Ramesh <abhijitramesh2k@gmail.com>
Date: Mon Dec 1 18:29:00 2025 -0800
ggml webgpu: fix xielu parameter passing (#11)
The XIELU operati... | [
{
"path": "docs/ops.md",
"patch": "@@ -12,111 +12,111 @@ Legend:\n - 🟡 Partially supported by this backend\n - ❌ Not supported by this backend\n \n-| Operation | BLAS | CANN | CPU | CUDA | Metal | OpenCL | SYCL | Vulkan | zDNN |\n-|-----------|------|------|------|------|------|------|------|------|------|... | 2025-12-05T20:25:51 |
denoland/deno | 64a1d86cb06dcdba20b8b95b75b7fd00b0a40649 | cc8c488d03e2fe97fb6f90d2c13fc561a221a363 | fix(coverage): correct line and branch counts in coverage reports (#32312)
## Summary
Fixes three bugs in `deno coverage` that caused incorrect LCOV output
(closes #9865):
- **Line counts were inflated** due to summing all covering V8 ranges
instead of using the innermost (most specific) range. V8 coverage ranges
ar... | [
{
"path": "cli/args/flags.rs",
"patch": "@@ -1459,6 +1459,11 @@ static ENV_VARS: &[EnvVar] = &[\n description: \"Enable Node.js compatibility mode - extensionless imports, built-in\\nNode.js modules, CommonJS detection and more.\",\n example: None,\n },\n+ EnvVar {\n+ name: \"DENO_COVERAGE_DIR... | 2026-03-02T11:09:31 |
vuejs/vue | e38d0067521eee85febedc5f3ed3c24b5454c3a9 | b7105ae8c9093e36ec89a470caa3b78bda3ef467 | feat: improve template expression error message
close #6771 | [
{
"path": "src/compiler/error-detector.js",
"patch": "@@ -89,10 +89,14 @@ function checkExpression (exp: string, text: string, errors: Array<string>) {\n if (keywordMatch) {\n errors.push(\n `avoid using JavaScript keyword as property name: ` +\n- `\"${keywordMatch[0]}\" in expressi... | 2017-10-12T15:15:23 |
huggingface/transformers | 8f129d256dfbd414b47f7f4a495a7834d95ffbf8 | 96d1c5d63d378fc64908d96fcf365cb96bfb9f83 | Small tp fix (#42366)
up | [
{
"path": "src/transformers/core_model_loading.py",
"patch": "@@ -718,13 +718,13 @@ def convert_and_load_state_dict_in_model(\n mapping.distributed_operation = tp_layer(\n device_mesh=device_mesh, rank=device_map[\"\"].index, empty_param=empty_param.clone(... | 2025-11-24T19:45:22 |
ggml-org/llama.cpp | 8160b38a5fa8a25490ca33ffdd200cda51405688 | c41bde6fbda169b504a37e590798be6cd058f60f | rpc : fix alloc size logic (#17116)
* rpc : fix alloc size logic
* rpc : bump version | [
{
"path": "ggml/include/ggml-rpc.h",
"patch": "@@ -1,14 +1,13 @@\n #pragma once\n \n-#include \"ggml.h\"\n #include \"ggml-backend.h\"\n \n #ifdef __cplusplus\n extern \"C\" {\n #endif\n \n #define RPC_PROTO_MAJOR_VERSION 3\n-#define RPC_PROTO_MINOR_VERSION 5\n+#define RPC_PROTO_MINOR_VERSION 6\n ... | 2025-12-05T17:39:04 |
denoland/deno | cc8c488d03e2fe97fb6f90d2c13fc561a221a363 | 59bec9b83ebe0241b813c8b93a280e879cc114a5 | fix(node): implement process.umask properly (#32385)
## Summary
- `process.umask()` was a stub that always returned `0o22` — now it
actually gets/sets the process umask via `op_fs_umask`
- Validates input with `parseFileMode` (supports numeric and octal
string masks)
- Masks off bits above `0o777`
- Throws `ERR_WORKER... | [
{
"path": "ext/fs/std_fs.rs",
"patch": "@@ -43,12 +43,23 @@ impl FileSystem for RealFs {\n std::env::set_current_dir(path).map_err(Into::into)\n }\n \n- #[cfg(not(unix))]\n- fn umask(&self, _mask: Option<u32>) -> FsResult<u32> {\n- // TODO implement umask for Windows\n- // see https://github.c... | 2026-03-02T11:00:05 |
vuejs/vue | ea3a70b2d59b9d4ecae2bd2438f89dc16e1e1394 | bb1d888d44ddc39f06de5232883c8e3766a47be0 | chore: fix sponsor link typo | [
{
"path": "README.md",
"patch": "@@ -141,7 +141,7 @@ Funds donated via Patreon goes directly to support Evan You's full-time work on\n <h4 align=\"center\">Gold</h4>\n \n <a href=\"https://opencollective.com/vuejs/goldsponsor/0/website\" target=\"_blank\"><img src=\"https://opencollective.com/vuejs/goldspon... | 2017-10-11T17:16:02 |
ggml-org/llama.cpp | 6016d0bd414d0512a78aebccded6af50bc6d71aa | 1be97831e44a6335aca9c3f4f3edbb0e35bea98f | HIP : fix RDNA4 build (#17792) | [
{
"path": "ggml/src/ggml-cuda/mma.cuh",
"patch": "@@ -560,7 +560,7 @@ namespace ggml_cuda_mma {\n xi[0] = xs[0];\n xi[1] = xs[1];\n #endif // defined(RDNA4)\n- }else if constexpr (I == 16 && J == 8) {\n+ } else if constexpr (I == 16 && J == 8) {\n ... | 2025-12-05T12:47:52 |
huggingface/transformers | 90aef4d55b196ac13134b8b2d4352b989cca2361 | d4dcef7b6285a4de2cc75bac555ab32d0311a1ad | Make benchmarking lighter: clean-up result files and remove non-needed arguments (#42357)
* Duplicate deletion in config check
* More attn implem configs
* Remodel and remove backend
* Change useless message to debug
* Remove extra generation config
* Simplify inter-token latency
* Update src/transformers/gener... | [
{
"path": "benchmark_v2/framework/benchmark_config.py",
"patch": "@@ -2,9 +2,10 @@\n import itertools\n import json\n import logging\n+from functools import lru_cache\n from typing import Any\n \n-from transformers.utils.import_utils import is_flash_attn_2_available\n+from transformers.utils.import_utils im... | 2025-11-24T16:40:55 |
ollama/ollama | 1cde63dd64ee03cb52319c6415c795147bf65099 | a189810df6c4b0492463d1ddb68993c9abc32c7f | Log unexpected server errors checking for update
This should unmask some failure modes that likely
show up in app logs as unmarshal errors | [
{
"path": "app/lifecycle/updater.go",
"patch": "@@ -86,6 +86,11 @@ func IsNewReleaseAvailable(ctx context.Context) (bool, UpdateResponse) {\n \tif err != nil {\n \t\tslog.Warn(fmt.Sprintf(\"failed to read body response: %s\", err))\n \t}\n+\n+\tif resp.StatusCode != 200 {\n+\t\tslog.Info(fmt.Sprintf(\"check... | 2024-02-27T17:17:04 |
denoland/deno | 59bec9b83ebe0241b813c8b93a280e879cc114a5 | 77653251082b5b3d4aea2f62757a4b25155c3a31 | fix(ext/node): make fsPromises.watch() a proper AsyncIterable with close() (#32378)
## Summary
- The watcher returned by `node:fs/promises` `watch()` was a plain
object with only `[Symbol.asyncIterator]`, missing `next()` and
`return()` methods
- The returned object now properly implements the `AsyncIterator`
protocol... | [
{
"path": "ext/node/polyfills/_fs/_fs_watch.ts",
"patch": "@@ -171,26 +171,44 @@ export function watchPromise(\n });\n \n if (options?.signal) {\n- options?.signal.addEventListener(\"abort\", () => watcher.close());\n+ if (options.signal.aborted) {\n+ watcher.close();\n+ } else {\n+ o... | 2026-03-02T10:48:08 |
vuejs/vue | 53431c63a9033bb9a73c915bca8525f0d7988c26 | 421658884f7ca786747abf9b89e00925fdfdfba8 | types: fix flow typing | [
{
"path": "src/core/vdom/vnode.js",
"patch": "@@ -46,7 +46,7 @@ export default class VNode {\n this.ns = undefined\n this.context = context\n this.functionalContext = undefined\n- this.functioanlOptions = undefined\n+ this.functionalOptions = undefined\n this.functionalScopeId = undefi... | 2017-10-11T15:24:39 |
ggml-org/llama.cpp | 1be97831e44a6335aca9c3f4f3edbb0e35bea98f | a6cfc212ed21b1cf6746827390160ba26c160ee9 | fix: prevent segfault in tokenizer on highly repetitive input (#17786)
Add nosubs|optimize flags to std::regex constructors to prevent
catastrophic backtracking when processing prompts with repeated
identical characters (e.g., 'A' * 10000).
The nosubs flag disables subgroup capture, significantly reducing
memory usag... | [
{
"path": "src/unicode.cpp",
"patch": "@@ -499,7 +499,7 @@ static std::vector<size_t> unicode_regex_split_custom_llama3(const std::string &\n \n // use std::wregex to split the text\n static std::vector<size_t> unicode_regex_split_stl(const std::wstring & wtext, const std::wstring & regex_expr, const std::v... | 2025-12-05T11:52:23 |
huggingface/transformers | d4dcef7b6285a4de2cc75bac555ab32d0311a1ad | f221a3b46b4ab5b5efe24f0120446af396f8fc4b | Fixed-wrong-ZeRO3-json-snippet-found-in-deepspeed-markdown-file (#42346)
* Correct syntax error in trainer.md
A comma is missing between two parameters in the signature of compute_loss function.
* Correct syntax error in trainer.md
A comma is missing between two parameters in the signature of compute_loss function.... | [
{
"path": "docs/source/en/deepspeed.md",
"patch": "@@ -341,13 +341,6 @@ The example ZeRO-3 and ZeRO-Infinity config below sets most of the parameter val\n \"buffer_size\": 1e8,\n \"max_in_cpu\": 1e9\n },\n- \"aio\": {\n- \"block_size\": 262144,\n- ... | 2025-11-24T16:21:36 |
ollama/ollama | a189810df6c4b0492463d1ddb68993c9abc32c7f | e95b8967909c490cf0cf608388dbeae96fbe3bcf | Determine max VRAM on macOS using `recommendedMaxWorkingSetSize` (#2354)
* read iogpu.wired_limit_mb on macOS
Fix for https://github.com/ollama/ollama/issues/1826
* improved determination of available vram on macOS
read the recommended maximal vram on macOS via Metal API
* Removed macOS-specific logging
... | [
{
"path": "gpu/gpu_darwin.go",
"patch": "@@ -1,12 +1,14 @@\n //go:build darwin\n \n package gpu\n-\n+/*\n+#cgo CFLAGS: -x objective-c\n+#cgo LDFLAGS: -framework Foundation -framework CoreGraphics -framework Metal\n+#include \"gpu_info_darwin.h\"\n+*/\n import \"C\"\n import (\n \t\"runtime\"\n-\n-\t\"github... | 2024-02-25T23:16:45 |
denoland/deno | 77653251082b5b3d4aea2f62757a4b25155c3a31 | 63ff522af860629fed247062c6157dd776af649b | fix(ext/node): return first created path from recursive "node:fs" mkdir call (#32300)
## Summary
- Fix `fs.mkdir()`, `fs.mkdirSync()`, and `fs/promises.mkdir()` with `{
recursive: true }` to return the first directory path created, matching
Node.js behavior
- When all directories already exist, correctly returns `und... | [
{
"path": "ext/node/polyfills/_fs/_fs_mkdir.ts",
"patch": "@@ -5,12 +5,83 @@\n \n import type { CallbackWithError } from \"ext:deno_node/_fs/_fs_common.ts\";\n import { promisify } from \"ext:deno_node/internal/util.mjs\";\n-import { denoErrorToNodeError } from \"ext:deno_node/internal/errors.ts\";\n+import... | 2026-03-02T10:39:11 |
vuejs/vue | 421658884f7ca786747abf9b89e00925fdfdfba8 | 050bb33f9b02589357c037623ea8cbf8ff13555b | fix: fix scoped CSS for nested nodes in functional components | [
{
"path": "src/core/vdom/create-functional-component.js",
"patch": "@@ -51,7 +51,8 @@ function FunctionalRenderContext (\n this._c = (a, b, c, d) => {\n const vnode: ?VNode = createElement(contextVm, a, b, c, d, needNormalization)\n if (vnode) {\n- vnode.fnScopeId = options._scopeId\n... | 2017-10-11T15:17:46 |
ggml-org/llama.cpp | a6cfc212ed21b1cf6746827390160ba26c160ee9 | 3a0d10533abcd63d7815c481d1ae93c302dc93aa | ci : fix winget workflow (#17790) | [
{
"path": ".github/workflows/winget.yml",
"patch": "@@ -9,7 +9,7 @@ jobs:\n update:\n name: Update Winget Package\n runs-on: ubuntu-latest\n- if: ${{ github.repository.owner.login == 'ggml-org' }}\n+ if: github.repository_owner == 'ggml-org'\n \n steps:\n - name: Install cargo bins... | 2025-12-05T11:44:17 |
huggingface/transformers | f221a3b46b4ab5b5efe24f0120446af396f8fc4b | dc6a53b9c152e5f02f37955fb8b09170bf6f6caa | fix tekken pattern matching (#42363)
* fix tekken pattern matching
* add a test
* up
* up
* style | [
{
"path": "src/transformers/tokenization_utils_base.py",
"patch": "@@ -2110,7 +2110,7 @@ def from_pretrained(\n if \"tokenizer_file\" in vocab_files and not re.search(vocab_files[\"tokenizer_file\"], \"\".join(remote_files)):\n # mistral tokenizer names are different, but we can still co... | 2025-11-24T15:39:10 |
denoland/deno | 63ff522af860629fed247062c6157dd776af649b | d428817aa77df5dfbcb9e62c6f9240c8b2086ce4 | fix(node): preserve AsyncLocalStorage context in stream.finished callback (#32389)
## Summary
- Snapshots the async context when `eos()` (the implementation behind
`stream.finished`) is called and restores it around the callback
invocation
- In Node.js this context propagation happens automatically through the
native ... | [
{
"path": "ext/node/polyfills/internal/streams/end-of-stream.js",
"patch": "@@ -3,6 +3,7 @@\n \n import process from \"node:process\";\n import { primordials } from \"ext:core/mod.js\";\n+import { core } from \"ext:core/mod.js\";\n import imported1 from \"ext:deno_node/internal/errors.ts\";\n import { kEmpt... | 2026-03-02T10:26:23 |
vuejs/vue | 68bdbf508b915872627676d6bf987bdac9e5fe97 | 2d32b5d1b663fa331ec256b73e937af15eb6e3d5 | fix: perperly handle v-if on <template> scoped slot
fix #6725 | [
{
"path": "src/compiler/codegen/index.js",
"patch": "@@ -343,11 +343,14 @@ function genScopedSlot (\n if (el.for && !el.forProcessed) {\n return genForScopedSlot(key, el, state)\n }\n- return `{key:${key},fn:function(${String(el.slotScope)}){` +\n+ const fn = `function(${String(el.slotScope)}){` +... | 2017-10-10T16:21:42 |
huggingface/transformers | dc6a53b9c152e5f02f37955fb8b09170bf6f6caa | 73a9bc3756cb15b86c10202a62c128121315d347 | Fix code examples to load gpt 1 openai community model (#42347)
* Fix code examples to load gpt 1 openai community model
* Remove dtypes redundant declaration | [
{
"path": "docs/source/en/model_doc/openai-gpt.md",
"patch": "@@ -43,7 +43,7 @@ The example below demonstrates how to generate text with [`Pipeline`], [`AutoMod\n import torch\n from transformers import pipeline\n \n-generator = pipeline(task=\"text-generation\", model=\"openai-community/gpt\", dtype=torch.... | 2025-11-24T15:21:37 |
ggml-org/llama.cpp | e95d0bc8fdb4141d98e9224399dcda8cff4b52ce | 668ed765742065f82c2899e101ee4384d6669f11 | CUDA: fix FA VKQ accumulator overflow (#17746) | [
{
"path": "ggml/src/ggml-cuda/fattn-common.cuh",
"patch": "@@ -10,6 +10,12 @@\n #define HALF_MAX_HALF __float2half(65504.0f/2) // Use neg. of this instead of -INFINITY to initialize KQ max vals to avoid NaN upon subtraction.\n #define SOFTMAX_FTZ_THRESHOLD -20.0f // Softmax exp. of... | 2025-12-05T08:18:10 |
denoland/deno | d428817aa77df5dfbcb9e62c6f9240c8b2086ce4 | b4d4a5bc2192e4dcb3ae16319273d9ffa883c7d1 | fix(jupyter): handle shutdown and interrupt requests per protocol (#32359)
## Summary
- Send `shutdown_reply` on the control channel before exiting, with the
`restart` field echoed back from the request
- Handle `interrupt_request` by calling
`v8::IsolateHandle::terminate_execution()` to actually interrupt running
JS... | [
{
"path": "cli/tools/jupyter/mod.rs",
"patch": "@@ -180,12 +180,20 @@ pub async fn kernel(\n };\n let repl_session_proxy_channels = JupyterReplProxy { tx: tx1, rx: rx2 };\n \n+ let isolate_handle = repl_session_proxy\n+ .repl_session\n+ .worker\n+ .js_runtime\n+ .v8_isolate()\n+ .thread_... | 2026-03-02T10:05:00 |
vuejs/vue | dff85b230abda63839ed6b80d56ccfc6068b9ae0 | 70a28b37bc2fc9fe8494d70a13e4f8848aed4d00 | fix(ssr): handle inline template compilation error
fix #6766 | [
{
"path": "src/compiler/to-function.js",
"patch": "@@ -1,7 +1,7 @@\n /* @flow */\n \n-import { noop } from 'shared/util'\n-import { warn, tip } from 'core/util/debug'\n+import { noop, extend } from 'shared/util'\n+import { warn as baseWarn, tip } from 'core/util/debug'\n \n type CompiledFunctionResult = {\n... | 2017-10-10T14:47:06 |
huggingface/transformers | 73a9bc3756cb15b86c10202a62c128121315d347 | 6b4b7bf8bbb739c95a76ea3c616eb6dbd4e848c7 | Replace Optional and Union typing with | in some source files (#42294)
* Replace Optional and Union typing with | in some source files
Signed-off-by: Yuanyuan Chen <cyyever@outlook.com>
* Replace Optional and Union typing with | in some source files
Signed-off-by: Yuanyuan Chen <cyyever@outlook.com>
* Replace Opti... | [
{
"path": "src/transformers/cli/add_new_model_like.py",
"patch": "@@ -19,7 +19,7 @@\n from collections.abc import Callable\n from datetime import date\n from pathlib import Path\n-from typing import Annotated, Any, Optional, Union\n+from typing import Annotated, Any\n \n import typer\n \n@@ -95,7 +95,7 @@ d... | 2025-11-24T15:20:16 |
ggml-org/llama.cpp | 668ed765742065f82c2899e101ee4384d6669f11 | 03d9a77b85dd00efd807c65435bdb51bbb6a77d0 | HIP: enable WMMA-MMQ INT kernels for RDNA 3 (#17576)
* enabled wmma instructions for most quantizations other than q2k
* fixed the last q2_k test case failure
* address comments: fix out of bound write for RDNA4, add comments after #endif
* clean up rebase: fix ne error in half2
* fix the EditorConfig CI | [
{
"path": "ggml/src/ggml-cuda/common.cuh",
"patch": "@@ -226,7 +226,7 @@ static const char * cu_get_error_str(CUresult err) {\n #define AMD_MFMA_AVAILABLE\n #endif // defined(GGML_USE_HIP) && defined(CDNA) && !defined(GGML_HIP_NO_MMQ_MFMA)\n \n-#if defined(GGML_USE_HIP) && defined(RDNA4)\n+#if defined(GGML_... | 2025-12-05T08:17:37 |
ollama/ollama | 1f087c4d26e1ee938203dffbcff134efa5072307 | 5d7ea6616fc127469f43605464803d8521fcc51d | Update langchain python tutorial (#2737)
Remove unused GPT4all
Use nomic-embed-text as embedded model
Fix a deprecation warning (__call__) | [
{
"path": "docs/tutorials/langchainpy.md",
"patch": "@@ -42,12 +42,12 @@ text_splitter=RecursiveCharacterTextSplitter(chunk_size=500, chunk_overlap=0)\n all_splits = text_splitter.split_documents(data)\n ```\n \n-It's split up, but we have to find the relevant splits and then submit those to the model. We c... | 2024-02-25T05:31:36 |
vuejs/vue | 62405aa9035d5f547c0440263f16f21c1325f100 | 405d8e9f4c3201db2ae0e397d9191d9b94edc219 | revert: fix(v-model): fix input listener with modifier blocking v-model update
This reverts commit 6f312d636c3d6049dc9e60007f88ea871b8e8173 because the change
is no longer needed after switching nextTick to use MessageChannel. | [
{
"path": "src/core/vdom/helpers/update-listeners.js",
"patch": "@@ -5,22 +5,18 @@ import { cached, isUndef } from 'shared/util'\n \n const normalizeEvent = cached((name: string): {\n name: string,\n- plain: boolean,\n once: boolean,\n capture: boolean,\n- passive: boolean,\n- handler?: Function\n+... | 2017-10-09T20:30:07 |
denoland/deno | b4d4a5bc2192e4dcb3ae16319273d9ffa883c7d1 | 9c81a2306081a63be4cdbb94b24a2437d0956945 | fix:(ext/node): escape simple quotes in node:child_process (#32336) | [
{
"path": "ext/node/polyfills/internal/child_process.ts",
"patch": "@@ -1284,13 +1284,15 @@ function transformDenoShellCommand(\n return a;\n })\n : result.deno_args.map((a) => {\n- // POSIX: args with shell variable refs use double quotes to\n- // preserve variable expansi... | 2026-03-02T09:34:45 |
huggingface/transformers | 0cc848f8c6a37104acf21946e19a9e2065a290e7 | fd20cdc2e8669b36abfceb8ab59cfc7c957b0469 | Protect `torch.distributed` imports (#42361)
fix | [
{
"path": "src/transformers/core_model_loading.py",
"patch": "@@ -29,16 +29,17 @@\n import torch\n \n from .integrations.accelerate import offload_weight\n-from .integrations.tensor_parallel import ALL_PARALLEL_STYLES, DTensor, Replicate, TensorParallelLayer\n+from .integrations.tensor_parallel import ALL_P... | 2025-11-24T14:50:55 |
ggml-org/llama.cpp | 96fe9badfc5235ff0a049aca647bff8c448055aa | bde188d60f58012ada0725c6dd5ba7c69fe4dd87 | Add support for CUMSUM and TRI for CUDA. (#17584)
* Add support for CUMSUM and TRI for CUDA.
* Minor optimizations.
* Correct warp_prefix_inclusive_sum in float2 variant to return float2
* Optimize TRI
* Whitespace
* Fix strides.
* Implement double loop
* Whitespace
* Fix HIP compilation bugs
* Optimizations ... | [
{
"path": "ggml/src/ggml-cuda/common.cuh",
"patch": "@@ -463,6 +463,53 @@ static __device__ __forceinline__ float warp_reduce_max(float x) {\n return x;\n }\n \n+template<typename T, int width = WARP_SIZE>\n+static __device__ __forceinline__ T warp_prefix_inclusive_sum(T x) {\n+ const int lane_id = t... | 2025-12-04T21:19:51 |
ollama/ollama | 8782dd562819606c6b84f0e075e987f6744e83d2 | 11bfff8ee11ffa6e49ec8fbecf3a20fa060b582f | fix `build_windows.ps1` script to run `go build` with the correct flags | [
{
"path": "scripts/build_windows.ps1",
"patch": "@@ -53,7 +53,7 @@ function buildOllama() {\n write-host \"Building ollama CLI\"\n & go generate ./...\n if ($LASTEXITCODE -ne 0) { exit($LASTEXITCODE)}\n- & go build \"-ldflags=\"\"-X=github.com/jmorganca/ollama/version.Version=$script:VERSION\... | 2024-02-22T22:41:43 |
vuejs/vue | 405d8e9f4c3201db2ae0e397d9191d9b94edc219 | 37533fd71e4fe002c909d6b167873cad5097f6b9 | fix: prevent memory leak due to circular reference in vnodes
fix #6759 | [
{
"path": "src/core/instance/lifecycle.js",
"patch": "@@ -133,6 +133,10 @@ export function lifecycleMixin (Vue: Class<Component>) {\n if (vm.$el) {\n vm.$el.__vue__ = null\n }\n+ // release circular reference (#6759)\n+ if (vm.$vnode) {\n+ vm.$vnode.parent = null\n+ }\n }\n }\n... | 2017-10-09T15:48:19 |
denoland/deno | 9c81a2306081a63be4cdbb94b24a2437d0956945 | 16b2029c7bc723d81115e48a7f362cb680c7659b | fix(ext/console): support iterators in console.table (#32379)
## Summary
- Fix `console.table` to properly display data when given iterator
objects (e.g., `map.entries()`, `map.values()`, `set.values()`)
- Previously iterators rendered as empty tables because `ObjectKeys`
returns nothing useful for iterator objects
- ... | [
{
"path": "ext/web/01_console.js",
"patch": "@@ -3741,8 +3741,12 @@ class Console {\n let resultData;\n const isSetObject = isSet(data);\n const isMapObject = isMap(data);\n+ const isIteratorObject = !isSetObject && !isMapObject &&\n+ !ArrayIsArray(data) && typeof data[SymbolIterator] ==... | 2026-03-02T08:34:03 |
ggml-org/llama.cpp | bde188d60f58012ada0725c6dd5ba7c69fe4dd87 | 9d0229967a0538840368547ee7ddc637fc28142d | metal: TRI, FILL, EXPM1, SOFTPLUS (#16623)
* feat(wip): Port initial TRI impl from pervious work
The kernel does not work and is not optimized, but the
code compiles and runs, so this will be the starting point
now that the core op has been merged.
Branch: ggml-cumsum-tri
Signed-off-by: Gabe Goodhart <ghart@us.ibm.... | [
{
"path": "ggml/src/ggml-metal/ggml-metal-device.cpp",
"patch": "@@ -175,6 +175,7 @@ ggml_metal_pipeline_with_params ggml_metal_library_get_pipeline_unary(ggml_metal\n const char * op_str = \"undefined\";\n switch (op->op) {\n case GGML_OP_SCALE: op_str = \"scale\"; break;\n+ ... | 2025-12-04T17:12:19 |
ollama/ollama | 287ba115004b6bf034f7e7f8d4cd5ef2aab0e5e3 | 63861f58cc6bad7d512825badd5154ccd7b32826 | better error message when calling `/api/generate` or `/api/chat` with embedding models | [
{
"path": "server/routes.go",
"patch": "@@ -192,7 +192,7 @@ func GenerateHandler(c *gin.Context) {\n \t}\n \n \tif model.IsEmbedding() {\n-\t\tc.AbortWithStatusJSON(http.StatusBadRequest, gin.H{\"error\": \"model does not support generate\"})\n+\t\tc.AbortWithStatusJSON(http.StatusBadRequest, gin.H{\"error\... | 2024-02-21T02:53:45 |
huggingface/transformers | fd20cdc2e8669b36abfceb8ab59cfc7c957b0469 | 1f0227396b5276c3d1c4d31bb11c65b43e52c8cb | Fix typos (#42354) | [
{
"path": ".github/workflows/get-pr-info.yml",
"patch": "@@ -40,7 +40,7 @@ on:\n description: \"The sha of the merge commit for the pull request (created by GitHub) in the base repository\"\n value: ${{ jobs.get-pr-info.outputs.PR_MERGE_COMMIT_SHA }}\n PR_MERGE_COMMIT_BASE_SHA:\n- ... | 2025-11-24T14:34:10 |
denoland/deno | 16b2029c7bc723d81115e48a7f362cb680c7659b | 9cf58520ee80dd88777d3a97e59800163697d1d2 | fix(ext/node): handle emoji width correctly in readline (#32383)
## Summary
- Characters with the Unicode `Emoji_Presentation` property (like ⚡
U+26A1) are rendered as width 2 in terminals, but `getStringWidth` was
returning 1 since they aren't classified as East Asian Wide. This caused
cursor positioning issues in `n... | [
{
"path": "ext/node/polyfills/internal/util/inspect.mjs",
"patch": "@@ -286,6 +286,8 @@ const ansiPattern = \"[\\\\u001B\\\\u009B][[\\\\]()#;?]*\" +\n \"|(?:(?:\\\\d{1,4}(?:;\\\\d{0,4})*)?[\\\\dA-PR-TZcf-ntqry=><~]))\";\n const ansi = new SafeRegExp(ansiPattern, \"g\");\n \n+const reEmojiPresentation = ne... | 2026-03-02T08:33:38 |
vuejs/vue | 37533fd71e4fe002c909d6b167873cad5097f6b9 | 96b97448118de0939bf5f77c9b74cf1613a5a107 | refactor: improve errorCaptured propagation behavior | [
{
"path": "src/core/util/error.js",
"patch": "@@ -8,12 +8,15 @@ export function handleError (err: Error, vm: any, info: string) {\n if (vm) {\n let cur = vm\n while ((cur = cur.$parent)) {\n- if (cur.$options.errorCaptured) {\n- try {\n- const propagate = cur.$options.errorCap... | 2017-10-09T13:51:54 |
ollama/ollama | ce0c95d0972476f1e5a0064edbceb33d3ceed6ba | a9bc1e1c37d2e155eaca6de2b64008a36354b5a0 | [fix] /bye and /exit are now treated as prefixes (#2381)
* [fix] /bye and /exit are now treated as prefixes
instead of being treated as entire lines which doesn't align with the way the rest of the commands are treated
* Update cmd/interactive.go
Fixing whitespace
---------
Co-authored-by: Jeffrey Morgan ... | [
{
"path": "cmd/interactive.go",
"patch": "@@ -470,7 +470,7 @@ func generateInteractive(cmd *cobra.Command, opts runOptions) error {\n \t\t\t} else {\n \t\t\t\tusage()\n \t\t\t}\n-\t\tcase line == \"/exit\", line == \"/bye\":\n+\t\tcase strings.HasPrefix(line, \"/exit\"), strings.HasPrefix(line, \"/bye\"):\n... | 2024-02-20T02:56:49 |
huggingface/transformers | 1f0227396b5276c3d1c4d31bb11c65b43e52c8cb | 2e0457e6074d82e1d36b041aa271bb55025e36a3 | Fix reference to imagenet 1k dataset (#42348) | [
{
"path": "docs/source/en/model_doc/levit.md",
"patch": "@@ -56,7 +56,7 @@ This model was contributed by [anugunj](https://huggingface.co/anugunj). The ori\n one takes the average prediction between both heads as final prediction. (2) is also called \"fine-tuning with distillation\",\n because one relie... | 2025-11-24T14:27:29 |
denoland/deno | 9cf58520ee80dd88777d3a97e59800163697d1d2 | 0d2ef7c3b608350bda07279e845d8cabdedfac9e | fix(node): implement `resolveObjectURL` for `node:buffer` (#32382)
## Summary
- Implement missing `resolveObjectURL` export from `node:buffer`
(#30950)
- Fix `blobFromObjectUrl` to use `new Blob()` instead of
`webidl.createBranded(Blob)` so that returned blobs can call
`arrayBuffer()`/`text()`/`bytes()` without hittin... | [
{
"path": "ext/node/polyfills/buffer.ts",
"patch": "@@ -13,6 +13,7 @@ export {\n isUtf8,\n kMaxLength,\n kStringMaxLength,\n+ resolveObjectURL,\n SlowBuffer,\n transcode,\n } from \"ext:deno_node/internal/buffer.mjs\";",
"additions": 1,
"deletions": 0,
"language": "Unknown"
},
{
... | 2026-03-02T08:12:03 |
ggml-org/llama.cpp | c4c10bfb86569ccb070d0dbe1a621a8f186baa16 | 817d743cc17cf644dab8408eb0f1e6eac89562c1 | server: move msg diffs tracking to HTTP thread (#17740)
* server: move msg diffs tracking to HTTP thread
* wip
* tool call tests ok
* minor : style
* cont : fix
* move states to server_response_reader
* add safe-guard
* fix
* fix 2
---------
Co-authored-by: Georgi Gerganov <ggerganov@gmail.com> | [
{
"path": "tools/server/server-context.cpp",
"patch": "@@ -101,8 +101,6 @@ struct server_slot {\n std::string generated_text;\n llama_tokens generated_tokens;\n \n- common_chat_msg chat_msg;\n-\n std::vector<completion_token_output> generated_token_probs;\n \n bool has_next_token = true;... | 2025-12-04T14:46:08 |
vuejs/vue | 2876ed870c5368a1767fbeddf06e94b55ebd6234 | 35e55ecd42d0d5dcb476aca79c91186b8f3dc226 | fix: resolve async component default for native dynamic import
fix #6751 | [
{
"path": "src/core/vdom/helpers/resolve-async-component.js",
"patch": "@@ -6,13 +6,17 @@ import {\n isDef,\n isUndef,\n isTrue,\n- isObject\n+ isObject,\n+ hasSymbol\n } from 'core/util/index'\n \n import { createEmptyVNode } from 'core/vdom/vnode'\n \n-function ensureCtor (comp, base) {\n- if (c... | 2017-10-07T06:43:10 |
huggingface/transformers | 2e0457e6074d82e1d36b041aa271bb55025e36a3 | acae07ab94aa3c247e3c0185de91508893ad7c67 | Fix documentation reference to pytorch max memory allocated (#42350) | [
{
"path": "docs/source/ar/llm_tutorial_optimization.md",
"patch": "@@ -98,7 +98,7 @@ def bytes_to_giga_bytes(bytes):\n return bytes / 1024 / 1024 / 1024\n ```\n \n-دعونا نستدعي [`torch.cuda.max_memory_allocated`](https://pytorch.org/docs/stable/generated/torch.cuda.max_memory_allocated.html) لقياس ذروة تخ... | 2025-11-24T14:27:04 |
ggml-org/llama.cpp | bd4ef134763d81e251fd097019578f2df571dfef | 87a2084c45188d54a554c305a397e778759545ed | common : skip model validation when --help is requested (#17755)
This commit skips the model validation check when the user specifies the
--help option.
The motivation for this is that currently an error is thrown before
--help can be processed. Now skips validation if params.usage is set,
allowing help to dis... | [
{
"path": "common/arg.cpp",
"patch": "@@ -427,7 +427,7 @@ static bool common_params_parse_ex(int argc, char ** argv, common_params_context\n \n // model is required (except for server)\n // TODO @ngxson : maybe show a list of available models in CLI in this case\n- if (params.model.path.empty() &... | 2025-12-04T12:36:50 |
denoland/deno | 9cab294be758e81037b128a05a13c4fd4fd35b30 | 6adddf9f89f1279bff7a7e9e588adbac6302f876 | fix(cli): load multiple env files in the correct order (#32354)
Closes #32350
Previously we reversed the env file order because we're using `dotenvy`,
which, as stated by its
[docs](https://docs.rs/dotenvy/latest/dotenvy/fn.from_filename.html):
> Where multiple declarations for the same environment variable exist in
y... | [
{
"path": "cli/util/watch_env_tracker.rs",
"patch": "@@ -241,7 +241,7 @@ pub fn load_env_variables_from_env_files(\n return;\n };\n \n- for env_file_name in env_file_names.iter().rev() {\n+ for env_file_name in env_file_names.iter() {\n match deno_dotenv::from_path(env_file_name) {\n Ok(_)... | 2026-03-01T15:50:03 |
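The env-file fix above boils down to iteration order plus first-declaration-wins merging: because dotenvy never overwrites a variable that is already set, the files must be loaded in their original order, not reversed. A minimal sketch of that semantics (hypothetical helper, not Deno's actual code):

```python
# First-declaration-wins merge across multiple env files, dotenvy-style.
# Iterating in the original order (no .rev()) means earlier files take priority.

def load_env_files(files: list[dict[str, str]]) -> dict[str, str]:
    """Merge parsed env files in the order given; the first value seen wins."""
    merged: dict[str, str] = {}
    for file_vars in files:                    # original order, not reversed
        for key, value in file_vars.items():
            merged.setdefault(key, value)      # keep the first declaration
    return merged
```

With reversed iteration the later file would have been merged first and would shadow the earlier one, which is exactly the bug described.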
vuejs/vue | 6dac3dbe441302cebb945b675f78f8e7247e2a97 | 514b90b64770cba9f905d2dff59dfa0e064e580c | feat: rename catchError -> errorCaptured | [
{
"path": "src/core/util/error.js",
"patch": "@@ -8,12 +8,12 @@ export function handleError (err: Error, vm: any, info: string) {\n if (vm) {\n let cur = vm\n while ((cur = cur.$parent)) {\n- if (cur.$options.catchError) {\n+ if (cur.$options.errorCaptured) {\n try {\n- ... | 2017-10-06T20:35:27 |
huggingface/transformers | acae07ab94aa3c247e3c0185de91508893ad7c67 | 2f7747c39092fd931c85fe5f9f2da06456455aa4 | Fix reference to yelp dataset (#42349) | [
{
"path": "docs/source/ar/training.md",
"patch": "@@ -12,7 +12,7 @@\n \n قبل أن تتمكن من ضبط نموذج مُدرب مسبقًا، قم بتنزيل مجموعة بيانات وإعدادها للتدريب. أظهر البرنامج التعليمي السابق كيفية معالجة البيانات للتدريب، والآن لديك الفرصة لاختبار تلك المهارات!\n \n-ابدأ بتحميل مجموعة بيانات [Yelp Reviews](https:... | 2025-11-24T14:26:29 |
ggml-org/llama.cpp | 3659aa28e963ef3f782cd27258e97ddef678c776 | 2a73f81f8a810783db5794256e5ba79f298adee7 | convert: use existing local chat_template if mistral-format model has one. (#17749)
* conversion: use existing local chat_template.jinja file if mistral-format model has one.
* fix --mistral-format mistakenly assuming some <=v7 chat template names are file paths and reading them.
* Update convert_hf_to_gguf.py - cha... | [
{
"path": "convert_hf_to_gguf.py",
"patch": "@@ -2341,19 +2341,31 @@ def _set_vocab_mistral(self):\n self.gguf_writer.add_add_bos_token(True)\n self.gguf_writer.add_add_eos_token(False)\n \n- template_dir = Path(__file__).parent / \"models/templates/\"\n+ local_template_file_pa... | 2025-12-04T11:12:45 |
denoland/deno | 2a04a3cdf54a5e8f04e0c8b11de937b6db671d79 | 8e5efb0509958ab5513b59c87cb83a98f1e897e6 | fix(lsp): cross-scope requests (#32366) | [
{
"path": "Cargo.lock",
"patch": "@@ -10066,6 +10066,7 @@ dependencies = [\n \"anyhow\",\n \"console_static_text\",\n \"crossterm\",\n+ \"deno_path_util\",\n \"file_test_runner\",\n \"fluent-uri\",\n \"indexmap 2.9.0\",",
"additions": 1,
"deletions": 0,
"language": "Unknown"
},
{
"... | 2026-02-28T16:46:33 |
vuejs/vue | 514b90b64770cba9f905d2dff59dfa0e064e580c | db138e2254d71f6b96e033acf66ba43ad269841a | fix: add slot v-bind warning (#6736)
close #6677 | [
{
"path": "src/core/instance/render-helpers/render-slot.js",
"patch": "@@ -1,6 +1,6 @@\n /* @flow */\n \n-import { extend, warn } from 'core/util/index'\n+import { extend, warn, isObject } from 'core/util/index'\n \n /**\n * Runtime helper for rendering <slot>\n@@ -15,6 +15,12 @@ export function renderSlot... | 2017-10-06T19:48:00 |
ollama/ollama | fc39a6cd7a5f9a4951babb183826f15eea0351ff | 88622847c6a83508681b8876e2aaca9ca85f83b5 | Fix cuda leaks
This should resolve the problem where we don't fully unload from the GPU
when we go idle. | [
{
"path": "llm/patches/02-shutdown.diff",
"patch": "@@ -1,5 +1,5 @@\n diff --git a/examples/server/server.cpp b/examples/server/server.cpp\n-index 11dd82c3..311495a8 100644\n+index a0b46970..7800c6e7 100644\n --- a/examples/server/server.cpp\n +++ b/examples/server/server.cpp\n @@ -28,6 +28,7 @@\n@@ -10,7 +... | 2024-02-18T23:50:38 |
huggingface/transformers | 2f7747c39092fd931c85fe5f9f2da06456455aa4 | 1ae3e5bb3a401036d2a36c30e7437279d7e24f7c | Fix tied weight for Bart (for BC) (#42355)
* fix
* fix
* break copied from
* fix copied from | [
{
"path": "src/transformers/models/bart/modeling_bart.py",
"patch": "@@ -897,6 +897,21 @@ def __init__(self, config: BartConfig):\n # Initialize weights and apply final processing\n self.post_init()\n \n+ def tie_weights(self, missing_keys: Optional[set[str]] = None, recompute_mapping: bo... | 2025-11-24T14:14:11 |
ggml-org/llama.cpp | a67ef0f47f1afb67b1d8ec05e6d803e2d9b3faa3 | ef75a89fdb39ba33a6896ba314026e1b6826caba | llama : fix sanity checks during quantization (#17721) | [
{
"path": "src/llama-quant.cpp",
"patch": "@@ -726,21 +726,19 @@ static void llama_model_quantize_impl(const std::string & fname_inp, const std::\n // sanity checks for models that have attention layers\n if (qs.n_attention_wv != 0 && !is_clip_model)\n {\n- const auto & n_head_kv_iter = m... | 2025-12-04T08:33:42 |
denoland/deno | 8e5efb0509958ab5513b59c87cb83a98f1e897e6 | b0a58f307e52957ac9fce373d4e9a7b658a28633 | fix(core): store Global<Context> ptr for libuv-compat callbacks (#32361)
## Summary
- Store a `v8::Global<Context>` via `Global::into_raw()` once during
`register_uv_loop` instead of storing a `v8::Local<Context>` pointer at
the start of each event loop tick
- V8 keeps the persistent-handle slot updated across GC cyc... | [
{
"path": "libs/core/runtime/jsrealm.rs",
"patch": "@@ -115,9 +115,9 @@ pub struct ContextState {\n /// `UvLoopInner` and `ContextState` are `!Send` -- all access is on the\n /// event loop thread.\n pub(crate) uv_loop_inner: Cell<Option<*const UvLoopInner>>,\n- /// Raw pointer to the `uv_loop_t` han... | 2026-02-28T09:47:49 |
vuejs/vue | 2503e13de58c7f8286c77c2668118ed30b69d79d | 3c65239ad406f371564c1b5d8303b772e5c5a7d1 | chore: fix sponsor logo width | [
{
"path": "README.md",
"patch": "@@ -124,8 +124,7 @@ Funds donated via Patreon goes directly to support Evan You's full-time work on\n </td>\n <td align=\"center\" valign=\"middle\">\n <a href=\"http://tooltwist.com\" target=\"_blank\">\n- <img width=\"14\n- 0px\" src=\... | 2017-10-05T20:49:57 |
ggml-org/llama.cpp | 424c5794557597c8bbd9ea318570962b9ad00e22 | e9f9483464e6f01d843d7f0293bd9c7bc6b2221c | convert : support latest mistral-common (fix conversion with --mistral-format) (#17712)
* fix convert_hf_to_gguf.py failing with --mistral-format using later mistral-common versions.
* use get_one_valid_tokenizer_file from mistral-common if available and fallback to old logic otherwise.
* use file name instead of fi... | [
{
"path": "gguf-py/gguf/vocab.py",
"patch": "@@ -31,6 +31,14 @@\n else:\n _mistral_common_installed = True\n \n+try:\n+ from mistral_common.tokens.tokenizers.utils import ( # pyright: ignore[reportMissingImports]\n+ get_one_valid_tokenizer_file,\n+ )\n+except ImportError:\n+ # We still w... | 2025-12-03T20:15:04 |
huggingface/transformers | 1ae3e5bb3a401036d2a36c30e7437279d7e24f7c | af6a36a34af2f463ad2be13987d3ddd9c0753887 | Fix typo - indentation in JSON dump example (#42332)
Fix indentation in JSON dump example | [
{
"path": "docs/source/en/tasks/semantic_segmentation.md",
"patch": "@@ -308,7 +308,7 @@ You could also create and use your own dataset if you prefer to train with the [\n # simple example\n id2label = {0: 'cat', 1: 'dog'}\n with open('id2label.json', 'w') as fp:\n- json.dump(id2label, fp... | 2025-11-24T14:07:48 |
denoland/deno | b0a58f307e52957ac9fce373d4e9a7b658a28633 | ad52a9f35f427cb9a930c1a071f74784ad6cf417 | fix: Update libffi and libffi-sys to fix build (#32301)
Bumps libffi and libffi-sys to latest versions.
Fixes: #32281 | [
{
"path": "Cargo.lock",
"patch": "@@ -6254,19 +6254,19 @@ dependencies = [\n \n [[package]]\n name = \"libffi\"\n-version = \"4.1.2\"\n+version = \"5.1.0\"\n source = \"registry+https://github.com/rust-lang/crates.io-index\"\n-checksum = \"b0feebbe0ccd382a2790f78d380540500d7b78ed7a3498b68fcfbc1593749a94\"\n... | 2026-02-28T09:39:54 |
vuejs/vue | b3cd9bc3940eb1e01da7081450929557d9c1651e | e34c6b78bd77d5eff86a83525a71f1e2f90607a4 | feat: add catchError option
also propagate error thrown in renderError() to global handler | [
{
"path": "src/core/instance/render.js",
"patch": "@@ -101,14 +101,21 @@ export function renderMixin (Vue: Class<Component>) {\n try {\n vnode = render.call(vm._renderProxy, vm.$createElement)\n } catch (e) {\n- handleError(e, vm, `render function`)\n+ handleError(e, vm, `render`)\n ... | 2017-10-05T18:59:38 |
ggml-org/llama.cpp | 41c5e02f426e91a98e25dee822d5eecb7d224fbf | 2e1c9cd814227c576da56379d79b15d7dfd199b2 | webui: Fix zero pasteLongTextToFileLen to disable conversion being overridden (#17445)
* webui: Fix zero pasteLongTextToFileLen to disable conversion being overridden
Zero pasteLongTextToFileLen should disable the conversion, but it was
overwritten with 2500.
* Apply suggestions from code review
* Update webui buil... | [
{
"path": "tools/server/webui/src/lib/components/app/chat/ChatForm/ChatForm.svelte",
"patch": "@@ -64,7 +64,10 @@\n \tlet fileInputRef: ChatFormFileInputInvisible | undefined = $state(undefined);\n \tlet isRecording = $state(false);\n \tlet message = $state('');\n-\tlet pasteLongTextToFileLength = $derived(... | 2025-12-03T19:45:17 |
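The webui bug above is a classic falsy-default pitfall: a `value || 2500`-style fallback treats a deliberately configured `0` the same as "unset", so zero could never disable the feature. A sketch of the buggy versus fixed lookup (hypothetical names, not the Svelte code itself):

```python
# Falsy-coalescing vs. None-coalescing defaults.

DEFAULT_PASTE_LEN = 2500  # assumed default, mirroring the 2500 in the report

def paste_threshold_buggy(configured):
    # `or` coalesces 0 to the default, so 0 cannot disable the conversion
    return configured or DEFAULT_PASTE_LEN

def paste_threshold_fixed(configured):
    # fall back only when the setting is actually absent
    return DEFAULT_PASTE_LEN if configured is None else configured
```

In JavaScript the same distinction is `x || 2500` versus `x ?? 2500`.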
denoland/deno | 81e50edbd793c3e775ac4d1e66ade137bfd7efdf | 54db4e6fe745617eeb45775b106cc45a2c805098 | fix(lsp): tsgo request cancellation (#32356) | [
{
"path": "cli/lsp/tsgo.rs",
"patch": "@@ -2,8 +2,6 @@\n \n use std::collections::BTreeMap;\n use std::collections::HashMap;\n-use std::io::BufRead;\n-use std::io::Write;\n use std::path::Path;\n use std::path::PathBuf;\n use std::process::Child;\n@@ -23,6 +21,7 @@ use deno_core::serde_json;\n use deno_grap... | 2026-02-27T17:55:08 |
ollama/ollama | a497235a55906fb5f132e8689bf298e3c09d79f3 | df6dc4fd96ba485a028bb1a59e63500bb7357247 | Fix view logs menu | [
{
"path": "app/lifecycle/logging_windows.go",
"patch": "@@ -11,7 +11,7 @@ func ShowLogs() {\n \tcmd_path := \"c:\\\\Windows\\\\system32\\\\cmd.exe\"\n \tslog.Debug(fmt.Sprintf(\"viewing logs with start %s\", AppDataDir))\n \tcmd := exec.Command(cmd_path, \"/c\", \"start\", AppDataDir)\n-\tcmd.SysProcAttr = ... | 2024-02-16T23:42:53 |
vuejs/vue | 6d1f4cb89a156bf5f84942b1031354aa93916cb7 | 6e41679a96582da3e0a60bdbf123c33ba0e86b31 | fix: fallback to Promise in non-DOM environments | [
{
"path": "src/core/util/env.js",
"patch": "@@ -103,6 +103,12 @@ export const nextTick = (function () {\n timerFunc = () => {\n port.postMessage(1)\n }\n+ } else if (typeof Promise !== 'undefined' && isNative(Promise)) {\n+ // use microtask in non-DOM environments, e.g. Weex\n+ const p ... | 2017-10-05T05:01:08 |
huggingface/transformers | af6a36a34af2f463ad2be13987d3ddd9c0753887 | 6940b44d8d10345d32fb4593df6748f1c47e700d | [loading] Re-add and improve disk offloading support (#42242)
* unskip tests
* first shot
* offload in safetensors format
* remove hard-coded value
* update error
* typo
* fix
* update test
* fix
* return it
* post rebase
* improve var names
* improve names
* fix finally
* comment
* fix tests
* fix
* ... | [
{
"path": "src/transformers/core_model_loading.py",
"patch": "@@ -28,6 +28,7 @@\n \n import torch\n \n+from .integrations.accelerate import offload_weight\n from .integrations.tensor_parallel import ALL_PARALLEL_STYLES, DTensor, Replicate, TensorParallelLayer\n from .utils import is_torch_greater_or_equal, ... | 2025-11-24T13:53:53 |
ggml-org/llama.cpp | e7c2cf1356c8127140915a5f313e02dff4b07be8 | 1257491047aed0f56b81f532a5a4865add918821 | server: add router multi-model tests (#17704) (#17722)
* llama-server: add router multi-model tests (#17704)
Add 4 test cases for model router:
- test_router_unload_model: explicit model unloading
- test_router_models_max_evicts_lru: LRU eviction with --models-max
- test_router_no_models_autoload: --no-models-autoloa... | [
{
"path": "tools/server/tests/unit/test_basic.py",
"patch": "@@ -65,6 +65,7 @@ def test_server_slots():\n \n def test_load_split_model():\n global server\n+ server.offline = False\n server.model_hf_repo = \"ggml-org/models\"\n server.model_hf_file = \"tinyllamas/split/stories15M-q8_0-00001-of... | 2025-12-03T14:10:37 |
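The `--models-max` behavior exercised by the router tests above is least-recently-used eviction: when the cap is reached, the model that was touched longest ago is unloaded. A minimal LRU sketch under that assumption (hypothetical class, not llama-server's implementation):

```python
from collections import OrderedDict

class ModelCache:
    """Keep at most max_models loaded; evict the least recently used."""

    def __init__(self, max_models: int):
        self.max_models = max_models
        self._loaded: OrderedDict[str, str] = OrderedDict()

    def load(self, name: str) -> str:
        if name in self._loaded:
            self._loaded.move_to_end(name)                 # mark most recently used
        else:
            if len(self._loaded) >= self.max_models:
                self._loaded.popitem(last=False)           # evict the LRU entry
            self._loaded[name] = f"model:{name}"           # stand-in for loading
        return self._loaded[name]

    def loaded_models(self) -> list[str]:
        return list(self._loaded)
```

Touching a model moves it to the back of the queue, so a later load evicts whichever model sat idle the longest.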
ollama/ollama | df6dc4fd96ba485a028bb1a59e63500bb7357247 | 88622847c6a83508681b8876e2aaca9ca85f83b5 | Fix duplicate menus on update and exit on signals
Also fixes a few fit-and-finish items for better developer experience | [
{
"path": "app/lifecycle/lifecycle.go",
"patch": "@@ -6,6 +6,8 @@ import (\n \t\"log\"\n \t\"log/slog\"\n \t\"os\"\n+\t\"os/signal\"\n+\t\"syscall\"\n \n \t\"github.com/jmorganca/ollama/app/store\"\n \t\"github.com/jmorganca/ollama/app/tray\"\n@@ -23,12 +25,18 @@ func Run() {\n \t}\n \tcallbacks := t.GetCal... | 2024-02-16T23:33:16 |
vuejs/vue | 6e41679a96582da3e0a60bdbf123c33ba0e86b31 | 1780b1f07b9d3910bac5b101cb65b645f67b1df5 | fix: use MessageChannel for nextTick
fix #6566, #6690 | [
{
"path": "src/core/util/env.js",
"patch": "@@ -1,7 +1,6 @@\n /* @flow */\n-/* globals MutationObserver */\n+/* globals MessageChannel */\n \n-import { noop } from 'shared/util'\n import { handleError } from './error'\n \n // can we use __proto__?\n@@ -80,41 +79,29 @@ export const nextTick = (function () {\... | 2017-10-05T04:52:47 |
denoland/deno | 54db4e6fe745617eeb45775b106cc45a2c805098 | 9747a26f59074363a9c45ffab5ab5f272f6baf30 | fix(ext/node): support process.stdout resize events (#32343)
## Summary
- Add Windows support for `Deno.addSignalListener("SIGWINCH", ...)` by
polling console size changes every 250ms using
`GetConsoleScreenBufferInfo`
- Wire `process.stdout.on("resize")` / `process.stderr.on("resize")` to
emit on SIGWINCH, matching ... | [
{
"path": "ext/node/lib.rs",
"patch": "@@ -518,6 +518,7 @@ deno_core::extension!(deno_node,\n \"internal/streams/utils.js\",\n \"internal/test/binding.ts\",\n \"internal/timers.mjs\",\n+ \"internal/tty.js\",\n \"internal/url.ts\",\n \"internal/util.mjs\",\n \"internal/util/colors.... | 2026-02-27T15:41:14 |
huggingface/transformers | 6940b44d8d10345d32fb4593df6748f1c47e700d | 00ab75e65c051effc8f75d03654d6f9ce9658fa4 | Auto convert tekken.json (#42299)
* auto convert tekken.json
* fix conversion
* simplify
* nit
* model info based on the fly fix
* up
* last nit
* fixup
* call it fix mistral regex
* fix behaviour for local or only tok is saved
* style
* rm comment at wrong palce
* fix escaping
* style
* fix backend toke... | [
{
"path": "src/transformers/convert_slow_tokenizer.py",
"patch": "@@ -19,11 +19,13 @@\n \"\"\"\n \n import warnings\n+from functools import lru_cache\n from typing import Optional\n \n from packaging import version\n from tokenizers import AddedToken, Regex, Tokenizer, decoders, normalizers, pre_tokenizers,... | 2025-11-24T12:16:52 |
ggml-org/llama.cpp | 1257491047aed0f56b81f532a5a4865add918821 | 083e18b11c24fff9e306801cd6f226eecbbb225c | server : fix bad fmt, size() is a size_type (#17735)
Signed-off-by: Adrien Gallouët <angt@huggingface.co> | [
{
"path": "tools/server/server-common.cpp",
"patch": "@@ -791,7 +791,7 @@ static void handle_media(\n SRV_INF(\"downloading image from '%s'\\n\", url.c_str());\n auto res = common_remote_get_content(url, params);\n if (200 <= res.first && res.first < 300) {\n- SRV_INF(\"do... | 2025-12-03T13:47:22 |
ollama/ollama | 88622847c6a83508681b8876e2aaca9ca85f83b5 | 9774663013e725142fa64f17fefbf7d34dda54f3 | fix: chat system prompting overrides (#2542) | [
{
"path": "cmd/interactive.go",
"patch": "@@ -354,8 +354,15 @@ func generateInteractive(cmd *cobra.Command, opts runOptions) error {\n \t\t\t\t\t}\n \n \t\t\t\t\tif args[1] == \"system\" {\n-\t\t\t\t\t\topts.System = sb.String()\n-\t\t\t\t\t\topts.Messages = append(opts.Messages, api.Message{Role: \"system\... | 2024-02-16T19:42:43 |
denoland/deno | 9747a26f59074363a9c45ffab5ab5f272f6baf30 | 3c9cbfa847f3c4547a2ef763b9f0c43ec8c9fdfc | feat: v8 14.6, fix require(esm), libuv compat (#32347) | [
{
"path": "Cargo.lock",
"patch": "@@ -1929,9 +1929,9 @@ dependencies = [\n \n [[package]]\n name = \"deno_core\"\n-version = \"0.386.0\"\n+version = \"0.387.0\"\n source = \"registry+https://github.com/rust-lang/crates.io-index\"\n-checksum = \"f5cdb60d25f7e87f7f3bb5b1fcbd8d059ab908d614c0ad67dafce7f6c1514f0... | 2026-02-27T11:20:34 |
vuejs/vue | 1780b1f07b9d3910bac5b101cb65b645f67b1df5 | aa1356e83de1112660e7a88ff955f49d64bb5b1f | build: fix weex build | [
{
"path": "package.json",
"patch": "@@ -23,7 +23,7 @@\n \"dev:weex:compiler\": \"rollup -w -c build/config.js --environment TARGET:weex-compiler \",\n \"build\": \"node build/build.js\",\n \"build:ssr\": \"npm run build -- web-runtime-cjs,web-server-renderer\",\n- \"build:weex\": \"npm run bu... | 2017-10-04T22:23:53 |
huggingface/transformers | 00ab75e65c051effc8f75d03654d6f9ce9658fa4 | 3410ba9bab09ca6dadc130ab29fabfaba8baa131 | fix(benchmarks): correct sdpa_backend inconsistency and attn_implementation for continuous batching (#42339)
This commit fixes two bugs in BenchmarkConfig reported in issue #42211:
1. **sdpa_backend inconsistency (line 105)**: The warning message states
"sdpa_backend must be None" but the code was setting it to "m... | [
{
"path": "benchmark_v2/framework/benchmark_config.py",
"patch": "@@ -102,7 +102,7 @@ def check_validity(self, skip_validity_check: bool = False) -> None:\n logger.warning(\n \"when continuous batching is enabled, sdpa_backend must be None because of the attention mask, s... | 2025-11-24T11:12:33 |
ggml-org/llama.cpp | 3d94e967a10ac901392c6abc8747aed204a09bdb | 7feb0a1005307cb6733278ea8ec8ce41a3dd739b | metal : fix data race in pipeline library (#17731) | [
{
"path": "ggml/src/ggml-metal/ggml-metal-device.cpp",
"patch": "@@ -50,7 +50,7 @@ void ggml_metal_pipelines_add(ggml_metal_pipelines_t ppls, const char * name, gg\n }\n \n ggml_metal_pipeline_t ggml_metal_pipelines_get(ggml_metal_pipelines_t ppls, const char * name) {\n- if (ppls->data.find(name) == pp... | 2025-12-03T12:03:40 |
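The data race fixed above is the familiar pattern of looking up a shared map without holding the lock that guards insertions into it: the lookup and the insert must happen under the same lock. A minimal get-or-create sketch of the safe pattern (hypothetical cache, not the Metal backend's code):

```python
import threading

class PipelineCache:
    """Thread-safe get-or-create over a shared dict."""

    def __init__(self):
        self._lock = threading.Lock()
        self._data: dict[str, object] = {}

    def get_or_create(self, name: str, factory):
        # Hold the lock across both the lookup and the insert so no other
        # thread can mutate the map between the two steps.
        with self._lock:
            if name not in self._data:
                self._data[name] = factory()
            return self._data[name]
```

An unsynchronized `find` before taking the lock would read the map while another thread rehashes it, which is undefined behavior in C++ and the essence of the reported race.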
denoland/deno | 3c9cbfa847f3c4547a2ef763b9f0c43ec8c9fdfc | f12a8b6803668ede2c910c5f01ead8bdee3d8149 | refactor(ext/node): `assert` get error source using V8 API (#32339) | [
{
"path": "ext/node/lib.rs",
"patch": "@@ -173,6 +173,7 @@ deno_core::extension!(deno_node,\n deps = [ deno_io, deno_fs ],\n parameters = [TInNpmPackageChecker: InNpmPackageChecker, TNpmPackageFolderResolver: NpmPackageFolderResolver, TSys: ExtNodeSys],\n ops = [\n+ ops::assert::op_node_get_error_s... | 2026-02-27T11:11:22 |
vuejs/vue | 8295f716657ffe516f30e84f29ca94f4a0aefabf | dae173d96d15f47de6ce6961354d5c05e4273005 | fix: warn slot-scope when used as a prop | [
{
"path": "src/core/instance/state.js",
"patch": "@@ -18,6 +18,7 @@ import {\n bind,\n noop,\n hasOwn,\n+ hyphenate,\n isReserved,\n handleError,\n nativeWatch,\n@@ -84,9 +85,11 @@ function initProps (vm: Component, propsOptions: Object) {\n const value = validateProp(key, propsOptions, pro... | 2017-10-04T21:31:58 |
huggingface/transformers | 3410ba9bab09ca6dadc130ab29fabfaba8baa131 | f7e964e5686a091e801195eb99b835b7a0f17b9e | Gemma3 hybrid fix (#42287)
* Fix gemma3 on H100
* Partial fixes for Mi325
* First half of A10 fix
* Final A10 fix | [
{
"path": "tests/models/gemma3/test_modeling_gemma3.py",
"patch": "@@ -134,7 +134,6 @@ def test_generation_beyond_sliding_window_tiny_model(self):\n max_new_tokens=1,\n do_sample=False,\n use_cache=True,\n- cache_implementation=\"hybrid\",\n ... | 2025-11-24T10:23:48 |
ollama/ollama | e547378893b8b40c2cc7ad63131cbe34cc25fb89 | fd77dbec4d7903e68e60d40d44b023eb0d33ed21 | disable default debug | [
{
"path": "app/main.go",
"patch": "@@ -4,14 +4,9 @@ package main\n // go build -ldflags=\"-H windowsgui\" .\n \n import (\n-\t\"os\"\n-\n \t\"github.com/jmorganca/ollama/app/lifecycle\"\n )\n \n func main() {\n-\t// TODO - remove as we end the early access phase\n-\tos.Setenv(\"OLLAMA_DEBUG\", \"1\") // nol... | 2024-02-15T20:05:13 |
vuejs/vue | 2431d3d74396b33a2a120a835cfe7a776f06e277 | 2b5c83af6d8b15510424af4877d58c261ea02e16 | chore: fix warning space | [
{
"path": "src/core/instance/proxy.js",
"patch": "@@ -15,10 +15,10 @@ if (process.env.NODE_ENV !== 'production') {\n \n const warnNonPresent = (target, key) => {\n warn(\n- `Property or method \"${key}\" is not defined on the instance but` +\n+ `Property or method \"${key}\" is not defined o... | 2017-10-04T02:25:52 |
ggml-org/llama.cpp | 0a8026e768e65414a8969078f11d975c5811c33e | 5ceed62421b0ba61527cb16b2a25b3bdd07422eb | common : introduce composable PEG parser combinators for chat parsing (#17136)
* common : implement parser combinators to simplify chat parsing
* add virtual destructor to parser_base
* fix memory leak from circular references of rules
* implement gbnf grammar building
* remove unused private variable
* create a ... | [
{
"path": "CODEOWNERS",
"patch": "@@ -10,13 +10,16 @@\n /common/arg.* @ggerganov\n /common/base64.hpp.* @ggerganov\n /common/build-info.* @ggerganov\n+/common/chat-peg-parser.* @aldehir\n /common/common.* @g... | 2025-12-03T10:45:32 |
denoland/deno | f12a8b6803668ede2c910c5f01ead8bdee3d8149 | 2f42a460913eba2210a2d48e833d1e0e55423ed8 | fix(ext/node): provide CJS globals in worker_threads eval mode (#32266)
## Summary
- When using `new Worker(code, { eval: true })` in
`node:worker_threads`, Node.js evaluates the code as CommonJS, making
`require()` available. Deno was wrapping the code in a
`data:text/javascript` URL (ESM), causing `require is not de... | [
{
"path": "ext/node/polyfills/worker_threads.ts",
"patch": "@@ -364,10 +364,22 @@ class NodeWorker extends EventEmitter {\n \n if (options?.eval) {\n const code = typeof specifier === \"string\"\n- ? encodeURIComponent(specifier)\n+ ? specifier\n // deno-lint-ignore prefer-pr... | 2026-02-27T08:20:35 |
huggingface/transformers | f7e964e5686a091e801195eb99b835b7a0f17b9e | 122a6a3a01ac1bbb71c20231b19d0bfaf54abdbe | Fix ChineseCLIPModel.get_text_features (#42351) | [
{
"path": "src/transformers/models/chinese_clip/modeling_chinese_clip.py",
"patch": "@@ -1024,14 +1024,14 @@ def get_text_features(\n ... text_features = model.get_text_features(**inputs)\n >>> text_features = text_features / text_features.norm(p=2, dim=-1, keepdim=True)\n ```\"\... | 2025-11-24T09:36:40 |
ollama/ollama | 823a520266ab51442d8f2d8631a0c2676f79dd3d | 66ef308abdccf3e0098715f66253898e9ff12702 | Fix lint error on ignored error for win console | [
{
"path": "cmd/cmd.go",
"patch": "@@ -815,7 +815,7 @@ func NewCLI() *cobra.Command {\n \n \tif runtime.GOOS == \"windows\" {\n \t\t// Enable colorful ANSI escape code in Windows terminal (disabled by default)\n-\t\tconsole.ConsoleFromFile(os.Stdout)\n+\t\tconsole.ConsoleFromFile(os.Stdout) //nolint:errcheck... | 2024-02-14T03:38:52 |
vuejs/vue | 2b5c83af6d8b15510424af4877d58c261ea02e16 | ae347a52259b24507a9c747c80d78a6beaa36de0 | fix: handle errors in errorHandler
close #6714 | [
{
"path": "src/core/util/error.js",
"patch": "@@ -6,16 +6,24 @@ import { inBrowser } from './env'\n \n export function handleError (err: Error, vm: any, info: string) {\n if (config.errorHandler) {\n- config.errorHandler.call(null, err, vm, info)\n- } else {\n- if (process.env.NODE_ENV !== 'product... | 2017-10-03T22:23:43 |
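The Vue fix above wraps the user-supplied `errorHandler` in its own try/catch so that a handler that itself throws doesn't swallow the original error. A Python sketch of that guard (hypothetical API; the real code logs via the console instead of a list):

```python
def handle_error(err: Exception, log: list, error_handler=None) -> None:
    """Run the user handler if present; if it throws, report both errors."""
    if error_handler is not None:
        try:
            error_handler(err)
            return                                   # handler dealt with it
        except Exception as handler_err:
            # The handler itself failed: record its error, then fall
            # through so the original error is still reported.
            log.append(("handler-error", handler_err))
    log.append(("error", err))                       # default reporting path
```

Without the inner try/catch, an exception in the handler would escape and the original error would never reach the default path.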
ggml-org/llama.cpp | 5ceed62421b0ba61527cb16b2a25b3bdd07422eb | 7ca5991d2b7238ab04bc3dca9c2a9b92f4548238 | server: fix duplicate HTTP headers in multiple models mode (#17698)
* llama-server: fix duplicate HTTP headers in multiple models mode (#17693)
* llama-server: address review feedback from ngxson
- restrict scope of header after std::move
- simplify header check (remove unordered_set) | [
{
"path": "tools/server/server-models.cpp",
"patch": "@@ -7,6 +7,7 @@\n #include <sheredom/subprocess.h>\n \n #include <functional>\n+#include <algorithm>\n #include <thread>\n #include <mutex>\n #include <condition_variable>\n@@ -889,6 +890,28 @@ struct pipe_t {\n }\n };\n \n+static std::string to_lowe... | 2025-12-03T09:28:43 |
denoland/deno | 2f42a460913eba2210a2d48e833d1e0e55423ed8 | 7545be7ce2977fb13f75ecd81c24aa57597d0ebe | fix(node/vm): support vm.constants.DONT_CONTEXTIFY in createContext (#32337)
Closes https://github.com/denoland/deno/issues/31192
Adds support for creating vanilla (non-contextified) V8 contexts via
vm.createContext(vm.constants.DONT_CONTEXTIFY). This creates a context
without property interceptors, where globalThis ... | [
{
"path": "ext/node/lib.rs",
"patch": "@@ -234,6 +234,7 @@ deno_core::extension!(deno_node,\n ops::v8::op_v8_write_value,\n ops::vm::op_vm_create_script,\n ops::vm::op_vm_create_context,\n+ ops::vm::op_vm_create_context_without_contextify,\n ops::vm::op_vm_script_run_in_context,\n ops... | 2026-02-27T01:53:18 |
huggingface/transformers | 122a6a3a01ac1bbb71c20231b19d0bfaf54abdbe | bdee0889714e9cb3e53d3b1b2a626919479d356c | fix bug when gemma3n model run on multiple device (#42303)
* fix bug when gemma3n model run on multiple device
Signed-off-by: Liu, Kaixuan <kaixuan.liu@intel.com>
* update modular file
Signed-off-by: Liu, Kaixuan <kaixuan.liu@intel.com>
---------
Signed-off-by: Liu, Kaixuan <kaixuan.liu@intel.com> | [
{
"path": "src/transformers/models/gemma3n/modeling_gemma3n.py",
"patch": "@@ -2244,6 +2244,7 @@ def forward(\n dummy_vision_token_id = self.embed_vision.vocab_offset + self.embed_vision.vocab_size - 1\n vision_input_ids = torch.where(vision_mask, input_ids, dummy_vision_token_id).to... | 2025-11-24T09:28:15 |
vuejs/vue | ae347a52259b24507a9c747c80d78a6beaa36de0 | 6ad44e13e990951ff152a0fd7042613c5a87f1c0 | fix: ensure nextTick are passed to errorHandler (#6730) | [
{
"path": "src/core/util/env.js",
"patch": "@@ -89,7 +89,7 @@ export const nextTick = (function () {\n /* istanbul ignore if */ // $flow-disable-line\n if (typeof Promise !== 'undefined' && isNative(Promise)) {\n var p = Promise.resolve()\n- var logError = err => { console.error(err) }\n+ var ... | 2017-10-03T22:06:13 |
denoland/deno | 7545be7ce2977fb13f75ecd81c24aa57597d0ebe | c76ab60e5b8068bd5457da5033145792e13a6154 | fix: deflake run_watch_env_file_with_multiline_values (#32346) | [
{
"path": "tests/integration/watcher_tests.rs",
"patch": "@@ -2563,13 +2563,18 @@ console.log(\"---\");\n .arg(\"--watch\")\n .arg(\"--allow-env\")\n .arg(\"--env-file=.env\")\n+ .arg(\"-L\")\n+ .arg(\"debug\")\n .arg(&main_script)\n .env(\"NO_COLOR\", \"1\")\n .piped_output()\... | 2026-02-26T21:54:24 |