Dataset preview (first rows). Columns: `repo` (string, 8 distinct values) | `issue_number` (int64, 13k–155k) | `issue_title` (string, 3–139 chars) | `issue_body` (string, 4–68.4k chars, nullable) | `commit_sha` (string, 40 chars) | `files` (list, 1–300 items)
electron/electron | 50,696 | feat: register contextMenus and sidePanel extension permissions | Fixes #50331. This PR adds the missing 'contextMenus' and 'sidePanel' permissions to the Electron extension system, resolving the 'unknown permission' warnings and enabling the corresponding APIs. | 4e17ddd7b8a80a133ceab4d6bffbc23b2196c620 | [
{
"filename": "shell/common/extensions/api/_api_features.json",
"patch": "@@ -24,6 +24,14 @@\n \"action.setBadgeTextColor\": {\n \"channel\": \"stable\"\n },\n+ \"contextMenus\": {\n+ \"dependencies\": [\"permission:contextMenus\"],\n+ \"contexts\": [\"privileged_extension\"]\n+ },\n+ \"sid... |
huggingface/transformers | 45,248 | Fix tf32 issue: set `torch.backends.cudnn.conv.fp32_precision` explicitly. | # What does this PR do?
PR #42428 change the way to enable / disable torch's TF32 using torch new API. It turns out set
> torch.backends.fp32_precision = False
would still have
> torch.backends.cudnn.conv.fp32_precision = "tf32"
> torch.backends.cudnn.rnn.fp32_precision = "tf32"
It's not clear if it's a bug or a design in `torch`, I will talk to people at torch conference next week.
For now, this issue causes ~60 `test_batching_equivalence` failing. Set `torch.backends.cudnn.conv.fp32_precision = "ieee"` explicitly will have no such failing tests (on the commit of the linked PR).
I will merge this PR directly to move fast. If `torch` team says that it's a design instead of a bug, we could move the logic to our `enable_tf32`.
Keep in mind, even with this fix, there are still 37 failing `test_batching_equivalence`, which are caused by other issues introduced after #42428 , which should be fixed in separated PR(s).
Note: this PR bring the `vit` and `clip` CI back to ✅ | e70c3db53455c5b2eb78ee54cbf27d04a2f29fa9 | [
{
"filename": "conftest.py",
"patch": "@@ -153,7 +153,18 @@ def check_output(self, want, got, optionflags):\n # TODO: Considering move this to `enable_tf32`, or report a bug to `torch`.\n import torch\n \n- torch.backends.cudnn.conv.fp32_precision = \"ieee\"\n+ # In order to set `torch.backend... |
vercel/next.js | 92,325 | ci: fix stats action | We recently re-imaged the self-hosted Linux runners and it now hits `ERR_PNPM_EXDEV` when pnpm copies packages between its default store and the temp stats workspace. Keeping both under the same temp root avoids the cross-filesystem copy failure. | 037d60bfe1f8be7f556b1d83df923689518eeb1a | [
{
"filename": ".github/actions/next-stats-action/src/constants.js",
"patch": "@@ -3,7 +3,20 @@ const os = require('os')\n const fs = require('fs')\n \n const benchTitle = 'Page Load Tests'\n-const workDir = fs.mkdtempSync(path.join(os.tmpdir(), 'next-stats'))\n+\n+function getTempRoot() {\n+ const tempRoot... |
ollama/ollama | 15,312 | app: default app home view to new chat instead of launch | null | a91ece0b1d719dde01231e484c748afca08529d2 | [
{
"filename": "app/store/database.go",
"patch": "@@ -82,7 +82,7 @@ func (db *database) init() error {\n \t\twebsearch_enabled BOOLEAN NOT NULL DEFAULT 0,\n \t\tselected_model TEXT NOT NULL DEFAULT '',\n \t\tsidebar_open BOOLEAN NOT NULL DEFAULT 0,\n-\t\tlast_home_view TEXT NOT NULL DEFAULT 'launch',\n+\t\tl... |
nodejs/node | 62,570 | deps: bump Undici 8 and fix tests | <!--
Before submitting a pull request, please read:
- the CONTRIBUTING guide at https://github.com/nodejs/node/blob/HEAD/CONTRIBUTING.md
- the commit message formatting guidelines at
https://github.com/nodejs/node/blob/HEAD/doc/contributing/pull-requests.md#commit-message-guidelines
For code changes:
1. Include tests for any bug fixes or new features.
2. Update documentation if relevant.
3. Ensure that `make -j4 test` (UNIX), or `vcbuild test` (Windows) passes.
If you believe this PR should be highlighted in the Node.js CHANGELOG
please add the `notable-change` label.
Developer's Certificate of Origin 1.1
By making a contribution to this project, I certify that:
(a) The contribution was created in whole or in part by me and I
have the right to submit it under the open source license
indicated in the file; or
(b) The contribution is based upon previous work that, to the best
of my knowledge, is covered under an appropriate open source
license and I have the right under that license to submit that
work with modifications, whether created in whole or in part
by me, under the same open source license (unless I am
permitted to submit under a different license), as indicated
in the file; or
(c) The contribution was provided directly to me by some other
person who certified (a), (b) or (c) and I have not modified
it.
(d) I understand and agree that this project and the contribution
are public and that a record of the contribution (including all
personal information I submit with it, including my sign-off) is
maintained indefinitely and may be redistributed consistent with
this project or the open source license(s) involved.
-->
| a976df664f536c326da5fb127035c960dc39f20f | [
{
"filename": "test/common/websocket-server.js",
"patch": "@@ -120,7 +120,7 @@ class WebSocketServer {\n this.customHandleUpgradeHeaders.map((header) => {\n const index = header.indexOf(':');\n return [header.slice(0, index).trim(), header.slice(index + 1).trim()];\n- })\n+ }... |
facebook/react | 36,204 | [Performance] Core reconciler and server streaming optimizations | This PR introduces several optimizations to the React core targeting high-frequency paths in the Reconciler, Concurrent Mode, and Server Streaming.
### Key Changes:
1. **Reconciler:** Optimized view-transition restoration traversal. By pre-calculating eligibility once per commit and utilizing subtree flags for early bailouts, complexity is reduced from O(N) to O(V).
2. **Concurrent Mode:** Implemented O(1) bailout for lane starvation checks. Tracks earliestPendingTime on the root to avoid 31 bitwise iterations on every yield.
3. **Micro-optimizations:** Replaced Array.map with manual for-loops in the commit phase and hook state cloning, eliminating closure allocations in hot paths.
4. **Server Streaming:** Aggressive GC for single-shot iterators. Nullifies buffer chunks immediately after streaming, significantly reducing peak heap pressure for large RSC payloads.
### Verification:
- Verified with packages/react-reconciler/src/__tests__/ReactHooks-test.internal.js
- Verified with packages/react-reconciler/src/__tests__/ReactExpiration-test.js
- Verified with packages/react-reconciler/src/__tests__/ViewTransitionReactServer-test.js
- Verified with packages/react-server/src/__tests__/ReactFlightServer-test.js | 7ce159d11b052461ad602a1644c60873c6197127 | [
{
"filename": "packages/react-reconciler/src/ReactFiberCommitWork.js",
"patch": "@@ -3503,12 +3503,17 @@ export function commitPassiveMountEffects(\n ): void {\n resetComponentEffectTimers();\n \n+ const isViewTransitionEligible =\n+ enableViewTransition &&\n+ includesOnlyViewTransitionEligibleLane... |
golang/go | 78,436 | encoding/hex: speed up Decode | This CL eliminates the remaining bounds check for index expressions on
Encode's src parameter within the function's loop.
Here are some benchmark results (no change to allocations):
goos: darwin
goarch: arm64
pkg: encoding/hex
cpu: Apple M4
│ old │ new │
│ sec/op │ sec/op vs base │
Decode/256-10 166.2n ± 0% 142.9n ± 0% -14.02% (n=180)
Decode/1024-10 626.9n ± 0% 532.7n ± 0% -15.03% (n=180)
Decode/4096-10 2.472µ ± 0% 2.079µ ± 0% -15.90% (n=180)
Decode/16384-10 9.843µ ± 0% 8.266µ ± 0% -16.02% (n=180)
geomean 1.262µ 1.069µ -15.25%
│ old │ new │
│ B/s │ B/s vs base │
Decode/256-10 1.434Gi ± 0% 1.669Gi ± 0% +16.32% (p=0.000 n=180)
Decode/1024-10 1.521Gi ± 0% 1.790Gi ± 0% +17.69% (p=0.000 n=180)
Decode/4096-10 1.543Gi ± 0% 1.834Gi ± 0% +18.87% (p=0.000 n=180)
Decode/16384-10 1.550Gi ± 0% 1.846Gi ± 0% +19.08% (p=0.000 n=180)
geomean 1.512Gi 1.783Gi +17.98%
| dcda1b6a81011649d08db5093f38b12d1ac52f7d | [
{
"filename": "src/encoding/hex/hex.go",
"patch": "@@ -85,10 +85,10 @@ func DecodedLen(x int) int { return x / 2 }\n // If the input is malformed, Decode returns the number\n // of bytes decoded before the error.\n func Decode(dst, src []byte) (int, error) {\n-\ti, j := 0, 1\n-\tfor ; j < len(src); j += 2 {... |
huggingface/transformers | 45,243 | Nvidia CI with `torch 2.11` | # What does this PR do?
Use torch 2.11 for our (daily) CI since it's released for 2 weeks already.
For CircleCI, we need to fix something regarding `torchvision.io.read_video`.
For daily CI, torch 2.11 doesn't cause issues (for those `torchvision.io.read_video`). | b65b67c4e1ea82ef9b50020e38ae716edf8758aa | [
{
"filename": "docker/transformers-all-latest-gpu/Dockerfile",
"patch": "@@ -9,12 +9,12 @@ SHELL [\"sh\", \"-lc\"]\n # The following `ARG` are mainly used to specify the versions explicitly & directly in this docker file, and not meant\n # to be used as arguments for docker build (so far).\n \n-ARG PYTORCH=... |
vercel/next.js | 92,324 | ci: remove deploy examples workflow from build-and-deploy | We don't need to deploy a single image component example every time `build_and_deploy` runs. If something about this example changes, it can be manually re-deployed. | b4f31566f5297a908d0a1fbb3d096c8694b8425a | [
{
"filename": ".github/workflows/build_and_deploy.yml",
"patch": "@@ -563,31 +563,6 @@ jobs:\n - name: Publish\n run: cargo xtask workspace --publish\n \n- deployExamples:\n- if: ${{ needs.deploy-target.outputs.value != 'automated-preview' }}\n- name: Deploy examples\n- runs-on: ubun... |
electron/electron | 50,695 | fix: defer Wrappable destruction in SecondWeakCallback to a posted task | Backport of #50688
See that PR for details.
Notes: Fixed an intermittent `Invoke in DisallowJavascriptExecutionScope` crash on application quit when a `WebContents` (or other JS-emitting native object) is garbage-collected during shutdown.
| 1c1384c8a72629f2b3ebd81de13ce18e2896c622 | [
{
"filename": "shell/common/gin_helper/wrappable.cc",
"patch": "@@ -4,6 +4,7 @@\n \n #include \"shell/common/gin_helper/wrappable.h\"\n \n+#include \"base/task/sequenced_task_runner.h\"\n #include \"gin/object_template_builder.h\"\n #include \"gin/public/isolate_holder.h\"\n #include \"shell/common/gin_help... |
rust-lang/rust | 154,832 | Rollup of 2 pull requests | Successful merges:
- rust-lang/rust#150129 (`BorrowedCursor`: make `init` a boolean)
- rust-lang/rust#154830 (miri subtree update)
<!-- homu-ignore:start -->
r? @ghost
[Create a similar rollup](https://bors.rust-lang.org/queue/rust?prs=150129,154830)
<!-- homu-ignore:end -->
| f359441c73e947a6a47d5393bb4cd29e2c828257 | [
{
"filename": "Cargo.lock",
"patch": "@@ -3422,9 +3422,9 @@ dependencies = [\n \n [[package]]\n name = \"rustc-build-sysroot\"\n-version = \"0.5.12\"\n+version = \"0.5.13\"\n source = \"registry+https://github.com/rust-lang/crates.io-index\"\n-checksum = \"eec3905e8201688412f6f4b1f6c86d38b3ee6578f59ba85f413... |
ollama/ollama | 15,311 | Revert "enable flash attention for gemma4 (#15296)" | This reverts commit c8e0878814b4d19200d65571d3d2d35b4b48fd3e.
Fixes a performance regression - Perf run comparison:
```
│ /tmp/0.20.0.log │ /tmp/0.20.1-flash.log │ /tmp/0.20.1-no-flash.log │
│ token/sec │ token/sec vs base │ token/sec vs base │
Model/name=gemma4:e2b/step=prefill 3.035k ± 1% 1.227k ± 2% -59.57% (p=0.002 n=6) 3.040k ± 1% ~ (p=0.310 n=6)
Model/name=gemma4:e2b/step=generate 144.92 ± 3% 95.09 ± 6% -34.39% (p=0.002 n=6) 146.20 ± 2% +0.89% (p=0.026 n=6)
Model/name=gemma4:e4b/step=prefill 1508.1 ± 5% 827.4 ± 3% -45.14% (p=0.002 n=6) 1533.3 ± 2% ~ (p=0.180 n=6)
Model/name=gemma4:e4b/step=generate 93.55 ± 1% 66.75 ± 5% -28.64% (p=0.002 n=6) 94.75 ± 2% +1.29% (p=0.026 n=6)
Model/name=gemma4:26b/step=prefill 1497.5 ± 1% 689.7 ± 1% -53.95% (p=0.002 n=6) 1556.9 ± 1% +3.97% (p=0.002 n=6)
Model/name=gemma4:26b/step=generate 86.95 ± 3% 70.63 ± 4% -18.77% (p=0.002 n=6) 88.78 ± 1% +2.09% (p=0.009 n=6)
geomean 447.9 260.7 -41.80% 455.4 +1.67%
``` | 9e725f32dec43afe8730a5a000dc4c4c60e99b93 | [
{
"filename": "fs/ggml/ggml.go",
"patch": "@@ -890,7 +890,6 @@ func (f GGML) FlashAttention() bool {\n \treturn slices.Contains([]string{\n \t\t\"bert\",\n \t\t\"gemma3\",\n-\t\t\"gemma4\",\n \t\t\"glm4moelite\",\n \t\t\"glmocr\",\n \t\t\"gptoss\", \"gpt-oss\",",
"additions": 0,
"deletions": 1
}
] |
facebook/react | 36,180 | Add Flight SSR benchmark fixture | This PR adds a benchmark fixture for measuring the performance overhead of the React Server Components (RSC) Flight rendering compared to plain Fizz server-side rendering.
### Motivation
Performance discussions around RSC (e.g. #36143, #35125) have highlighted the need for reproducible benchmarks that accurately measure the cost that Flight adds on top of Fizz. This fixture provides multiple benchmark modes that can be used to track performance improvements across commits, compare Node vs Edge (web streams) overhead, and identify bottlenecks in Flight serialization and deserialization.
### What it measures
The benchmark renders a dashboard app with ~25 components (16 client components), 200 product rows with nested data (~325KB Flight payload), and ~250 Suspense boundaries in the async variant. It compares 8 render variants: Fizz-only and Flight+Fizz, across Node and Edge stream APIs, with both synchronous and asynchronous apps.
### Benchmark modes
- **`yarn bench`** runs a sequential in-process benchmark with realistic Flight script injection (tee + `TransformStream`/`Transform` buffered injection), matching what real frameworks do when inlining the RSC payload into the HTML response for hydration.
- **`yarn bench:bare`** runs the same benchmark without script injection, isolating the React-internal rendering cost. This is best for tracking changes to Flight serialization or Fizz rendering.
- **`yarn bench:server`** starts an HTTP server and uses `autocannon` to measure real req/s at `c=1` and `c=10`. The `c=1` results provide a clean signal for tracking React-internal changes, while `c=10` reflects throughput under concurrent load.
- **`yarn bench:concurrent`** runs an in-process concurrent benchmark with 50 in-flight renders via `Promise.all`, measuring throughput without HTTP overhead.
- **`yarn bench:profile`** collects CPU profiles via the V8 inspector and reports the top functions by self-time along with GC pause data.
- **`yarn start`** starts the HTTP server for manual browser testing. Appending `.rsc` to any Flight URL serves the raw Flight payload.
### Key findings during development
On Node 22, the Flight+Fizz overhead compared to Fizz-only rendering is roughly:
- **Without script injection** (`bench:bare`): ~2.2x for sync, ~1.3x for async
- **With script injection** (`bench:server`, c=1): ~2.9x for sync, ~1.8x for async
- **Edge vs Node** adds another ~30% for sync and ~10% for async, driven by the stream plumbing for script injection (tee + `TransformStream` buffering)
The async variant better represents real-world applications where server components fetch data asynchronously. Its lower overhead reflects the fact that Flight serialization and Fizz rendering can overlap with I/O wait times, making the added Flight cost a smaller fraction of total request time.
The benchmark also revealed that the Edge vs Node gap is negligible for Fizz-only rendering (~1-2%) but grows to ~15% for Flight+Fizz sync even without script injection. With script injection (tee + `TransformStream` buffering), the gap roughly doubles to ~30% for sync. The async variants show smaller gaps (~5% without, ~10% with injection).
| 268ef5505c6bf8362b51b4580661e324154a08b7 | [
{
"filename": "fixtures/flight-ssr-bench/README.md",
"patch": "@@ -27,7 +27,6 @@ yarn install\n | `yarn bench:profile` | CPU profiling via V8 inspector. Saves `.cpuprofile` files to `build/profiles/`. |\n | `yarn bench:server` | HTTP server benchmark using autocannon. Measures real req/s with TCP overhead. ... |
nodejs/node | 62,566 | doc: document TransformStream transformer.cancel option | Add documentation for the `cancel` option of the `TransformStream` transformer, which allows users to specify a callback that will be called when the stream is canceled.
See: https://streams.spec.whatwg.org/#transformer-api
Fixes: https://github.com/nodejs/node/issues/62540
<!--
Before submitting a pull request, please read:
- the CONTRIBUTING guide at https://github.com/nodejs/node/blob/HEAD/CONTRIBUTING.md
- the commit message formatting guidelines at
https://github.com/nodejs/node/blob/HEAD/doc/contributing/pull-requests.md#commit-message-guidelines
For code changes:
1. Include tests for any bug fixes or new features.
2. Update documentation if relevant.
3. Ensure that `make -j4 test` (UNIX), or `vcbuild test` (Windows) passes.
If you believe this PR should be highlighted in the Node.js CHANGELOG
please add the `notable-change` label.
Developer's Certificate of Origin 1.1
By making a contribution to this project, I certify that:
(a) The contribution was created in whole or in part by me and I
have the right to submit it under the open source license
indicated in the file; or
(b) The contribution is based upon previous work that, to the best
of my knowledge, is covered under an appropriate open source
license and I have the right under that license to submit that
work with modifications, whether created in whole or in part
by me, under the same open source license (unless I am
permitted to submit under a different license), as indicated
in the file; or
(c) The contribution was provided directly to me by some other
person who certified (a), (b) or (c) and I have not modified
it.
(d) I understand and agree that this project and the contribution
are public and that a record of the contribution (including all
personal information I submit with it, including my sign-off) is
maintained indefinitely and may be redistributed consistent with
this project or the open source license(s) involved.
-->
| 868db3134db042c1b0c91a65fa9a6bfc515b5d80 | [
{
"filename": "doc/api/webstreams.md",
"patch": "@@ -5,11 +5,6 @@\n <!-- YAML\n added: v16.5.0\n changes:\n- - version:\n- - v21.5.0\n- - v20.14.0\n- pr-url: https://github.com/nodejs/node/pull/50126\n- description: Supports the `cancel` transformer callback.\n - version:\n - v21.0.0\n ... |
huggingface/transformers | 45,241 | Update tiny model creation script | # What does this PR do?
After the series of fixes in other previous PRs, we can now update the tiny model creation script. This update makes the script running without any failure, just 10 warnings.
There are many # TODO, some of them may just be quick remarks only. I decide to push and merge without removing them, so we still have the context to further improve the script to be more robust and clean.
The workflow file is also changed, so it could run on a daily basis, for us to check if there is any issue with the more future PRs merged into main. It doesn't upload the tiny models to the hub at this moment, which is a task for me to work on in a separate PR. | 4992ead60e0ef0719d9bb45060ee987efcda3ba6 | [
{
"filename": ".github/workflows/check_tiny_models.yml",
"patch": "@@ -10,6 +10,7 @@ on:\n \r\n env:\r\n TOKEN: ${{ secrets.TRANSFORMERS_HUB_BOT_HF_TOKEN }}\r\n+ HF_TOKEN: ${{ secrets.HF_HUB_READ_TOKEN }}\r\n \r\n jobs:\r\n check_tiny_models:\r\n@@ -22,61 +23,35 @@ jobs:\n fetch-depth: 2\r\n ... |
electron/electron | 50,693 | fix: defer Wrappable destruction in SecondWeakCallback to a posted task | Backport of #50688
See that PR for details.
Notes: Fixed an intermittent `Invoke in DisallowJavascriptExecutionScope` crash on application quit when a `WebContents` (or other JS-emitting native object) is garbage-collected during shutdown.
| 562780086138c8e4be6a3ba0f2a21063883ed8a9 | [
{
"filename": "shell/common/gin_helper/wrappable.cc",
"patch": "@@ -4,6 +4,7 @@\n \n #include \"shell/common/gin_helper/wrappable.h\"\n \n+#include \"base/task/sequenced_task_runner.h\"\n #include \"gin/object_template_builder.h\"\n #include \"gin/public/isolate_holder.h\"\n #include \"shell/common/gin_help... |
golang/go | 78,398 | net: make SplitHostPort alloc-free in more cases | Because SplitHostPort is not inlineable, its error result cannot benefit
from mid-stack inlining and must be moved to the heap. However, callers
of SplitHostPort that only use its error result in nil checks are pretty
typical, and their performance needlessly suffers from the resulting
allocation.
Drawing inspiration from CL 734440, this CL refactors SplitHostPort to
an inlineable wrapper around a function that returns, not an error, but
the concrete type of SplitHostPort's error result. Consequently, that
error result can stay on the stack in cases where callers only use it in
nil checks, and SplitHostPort is now allocation-free for such callers.
To clarify: the goal of this CL is to make SplitHostPort
allocation-free, not simply in non-error cases, but also in error cases
in which the error result is only used in nil checks. | ca9b9c915e8d8f44f8153dfdc667bf7ea829a339 | [
{
"filename": "src/cmd/compile/internal/test/inl_test.go",
"patch": "@@ -185,6 +185,7 @@ func TestIntendedInlining(t *testing.T) {\n \t\t},\n \t\t\"net\": {\n \t\t\t\"(*UDPConn).ReadFromUDP\",\n+\t\t\t\"SplitHostPort\",\n \t\t},\n \t\t\"sync\": {\n \t\t\t// Both OnceFunc and its returned closure need to be ... |
rust-lang/rust | 154,830 | miri subtree update | Subtree update of `miri` to https://github.com/rust-lang/miri/commit/ce20bd38b1a5361dee26cac090e7f74fc0530d4b.
Created using https://github.com/rust-lang/josh-sync.
r? @ghost | 21256d86e49b50364337ba7285ffaa2d90ae9609 | [
{
"filename": "Cargo.lock",
"patch": "@@ -3422,9 +3422,9 @@ dependencies = [\n \n [[package]]\n name = \"rustc-build-sysroot\"\n-version = \"0.5.12\"\n+version = \"0.5.13\"\n source = \"registry+https://github.com/rust-lang/crates.io-index\"\n-checksum = \"eec3905e8201688412f6f4b1f6c86d38b3ee6578f59ba85f413... |
vercel/next.js | 92,323 | Add RTL text support to @vercel/og | ## What?
Added support for right-to-left (RTL) text rendering in the `@vercel/og` library. This includes:
1. **Font substitution improvements**: Added `contextSubstitutionFormat3` function to handle OpenType context substitution format 3, which is required for proper RTL text shaping in languages like Arabic and Hebrew.
2. **RTL text detection and positioning**: Implemented RTL detection using Unicode ranges for Arabic, Hebrew, and related scripts. When RTL text is detected, the text positioning and alignment logic is adjusted accordingly to properly render right-to-left content.
3. **Alignment handling for RTL**: Updated text alignment logic to correctly handle `right`, `end`, `center`, `left`, and `justify` alignments when rendering RTL text.
## Why?
The `@vercel/og` library previously did not properly support RTL languages, which are used by millions of people worldwide. This fix enables proper rendering of Arabic, Hebrew, and other RTL scripts in Open Graph images generated with Next.js.
## How?
- Updated both `index.edge.js` and `index.node.js` compiled files with RTL support
- Added RTL regex pattern to detect RTL Unicode characters
- Modified text positioning calculations to account for RTL layout
- Added proper handling of text alignment in RTL context
- Added test routes for both edge and node runtimes with Arabic text examples
- Added e2e tests to verify RTL rendering works correctly
## Test Plan
Added e2e tests in `test/e2e/og-api/index.test.ts`:
- Test for RTL Arabic text rendering in edge runtime
- Test for RTL Arabic text rendering in node runtime
Both tests verify that the image is generated successfully with proper content-type and non-zero size.
https://claude.ai/code/session_0192MAXgejpjkKBgChuyTFcd | 9dd2f2ff2f47edf3c51c681b417a0f443e842410 | [
{
"filename": "packages/next/src/compiled/@vercel/og/index.edge.js",
"patch": "@@ -9854,6 +9854,12 @@ function contextSubstitutionFormat3(contextParams, subtable) {\n if (substitution) {\n substitutions.push(substitution);\n }\n+ } else if (substitutionType === \"21\")... |
nodejs/node | 62,564 | src: restrict MaybeStackBuffer string helpers to text types | Limit MaybeStackBuffer::ToString() and ToStringView to textual element types so byte buffers do not instantiate std::basic_string[_view]<unsigned char> on libc++.
This avoids the macOS/Xcode deprecation warning for `char_traits<unsigned char>` while preserving existing string helper behavior for text buffers. `uint16_t` buffers are mapped to char16_t for these conversions so UTF-16 call sites continue to work. | 69be72f1d805fc60bb4293bd21f0f9d756bab640 | [
{
"filename": "src/util.h",
"patch": "@@ -390,6 +390,16 @@ constexpr size_t strsize(const T (&)[N]) {\n return N - 1;\n }\n \n+template <typename T>\n+inline constexpr bool kMaybeStackBufferHasStringType =\n+ std::is_same_v<T, char> || std::is_same_v<T, wchar_t> ||\n+ std::is_same_v<T, char8_t> || s... |
ollama/ollama | 15,306 | model/parsers: rework gemma4 tool call handling | Replace the custom Gemma4 argument normalizer with a stricter reference-style conversion: preserve Gemma-quoted strings, quote bare keys, and then unmarshal the result as JSON.
This keeps quoted scalars as strings, preserves typed unquoted values, and adds test coverage for malformed raw-quoted inputs that the reference implementation rejects. | c6b78c7984ae10e2dc609d916452e7bc834331d7 | [
{
"filename": "model/parsers/gemma4.go",
"patch": "@@ -4,6 +4,7 @@ import (\n \t\"encoding/json\"\n \t\"errors\"\n \t\"log/slog\"\n+\t\"regexp\"\n \t\"strings\"\n \t\"unicode\"\n \n@@ -25,6 +26,11 @@ const (\n \tgemma4ToolCallCloseTag = \"<tool_call|>\"\n )\n \n+var (\n+\tgemma4QuotedStringRe = regexp.MustC... |
huggingface/transformers | 45,238 | Update `get_test_info.py` (related to tiny model creation) | # What does this PR do?
We have introduced `CausalLMModelTest` for some time, but haven't update `get_test_info.py` accordingly, which causes some issues, in particularly for tiny model creation, regarding the part of the attribute `all_model_classes`. See code change for more details, which is itself clear. | 60da0c1892ce53b59526a7e284c0cb106d50cf88 | [
{
"filename": "utils/get_test_info.py",
"patch": "@@ -15,6 +15,7 @@\n import importlib\n import os\n import sys\n+import unittest\n \n \n # This is required to make the module import works (when the python process is running from the root of the repo)\n@@ -87,11 +88,19 @@ def get_test_classes(test_file):\n ... |
facebook/react | 36,179 | Test branch react fork | <!--
Thanks for submitting a pull request!
We appreciate you spending the time to work on these changes. Please provide enough information so that others can review your pull request. The three fields below are mandatory.
Before submitting a pull request, please make sure the following is done:
1. Fork [the repository](https://github.com/facebook/react) and create your branch from `main`.
2. Run `yarn` in the repository root.
3. If you've fixed a bug or added code that should be tested, add tests!
4. Ensure the test suite passes (`yarn test`). Tip: `yarn test --watch TestName` is helpful in development.
5. Run `yarn test --prod` to test in the production environment. It supports the same options as `yarn test`.
6. If you need a debugger, run `yarn test --debug --watch TestName`, open `chrome://inspect`, and press "Inspect".
7. Format your code with [prettier](https://github.com/prettier/prettier) (`yarn prettier`).
8. Make sure your code lints (`yarn lint`). Tip: `yarn linc` to only check changed files.
9. Run the [Flow](https://flowtype.org/) type checks (`yarn flow`).
10. If you haven't already, complete the CLA.
Learn more about contributing: https://reactjs.org/docs/how-to-contribute.html
-->
## Summary
<!--
Explain the **motivation** for making this change. What existing problem does the pull request solve?
-->
## How did you test this change?
<!--
Demonstrate the code is solid. Example: The exact commands you ran and their output, screenshots / videos if the pull request changes the user interface.
How exactly did you verify that your PR solves the issue you wanted to solve?
If you leave this empty, your PR will very likely be closed.
-->
| 1f6bd0a582afa8b37708b5b001e6582f126cfb53 | [
{
"filename": "README.md",
"patch": "@@ -73,7 +73,7 @@ Read our [contributing guide](https://legacy.reactjs.org/docs/how-to-contribute.\n \n To help you get your feet wet and get you familiar with our contribution process, we have a list of [good first issues](https://github.com/facebook/react/labels/good%2... |
electron/electron | 50,690 | fix: dangling raw_ptr MicrotasksRunner::isolate_ | Backport of #50676
See that PR for details.
Notes: none. | ce2b8187e543ac48a79a6ed4530427714c3bd9ed | [
{
"filename": "shell/browser/javascript_environment.cc",
"patch": "@@ -86,6 +86,7 @@ JavascriptEnvironment::~JavascriptEnvironment() {\n // Otherwise cppgc::internal::Sweeper::Start will try to request a task runner\n // from the NodePlatform with an already unregistered isolate.\n locker_.reset();\n+... |
golang/go | 78,348 | runtime: truncate trace strings before inserting into trace map | traceStringTable.put inserted the full user-supplied string into the
trace map, then only truncated it to MaxEventTrailerDataSize (1024
bytes) when writing to the trace buffer. If the string exceeded the
traceRegionAlloc block size (~64KB), this caused a fatal
"traceRegion: alloc too large" crash.
Move the truncation to the top of put, before the map insertion, so
that the map key, map entry, and written output are all consistent
and bounded.
The existing truncation in writeString is retained: the emit method
also calls writeString without going through the map, so writeString
still needs its own guard.
TestStartRegionLongString reproduces the crash before the fix.
Observed in production at CockroachDB: Stopper.RunTask passes
singleflight keys (up to ~450KB) as trace region names via
trace.StartRegion. See:
https://github.com/cockroachdb/cockroach/pull/166669 for context
on the trigger. | 3c53061685d5237f9f2fc4522fce6d774776fede | [
{
"filename": "src/runtime/trace/annotation_test.go",
"patch": "@@ -8,9 +8,22 @@ import (\n \t\"context\"\n \t\"io\"\n \t. \"runtime/trace\"\n+\t\"strings\"\n \t\"testing\"\n )\n \n+func TestStartRegionLongString(t *testing.T) {\n+\t// Regression test: a region name longer than the trace region\n+\t// alloc... |
vercel/next.js | 92,321 | Reapply "simplify session dependent tasks and add TTL support (#91729)" | <!-- Thanks for opening a PR! Your contribution is much appreciated.
To make sure your PR is handled as smoothly as possible we request that you follow the checklist sections below.
Choose the right checklist for the change(s) that you're making:
## For Contributors
### Improving Documentation
- Run `pnpm prettier-fix` to fix formatting issues before opening the PR.
- Read the Docs Contribution Guide to ensure your contribution follows the docs guidelines: https://nextjs.org/docs/community/contribution-guide
### Fixing a bug
- Related issues linked using `fixes #number`
- Tests added. See: https://github.com/vercel/next.js/blob/canary/contributing/core/testing.md#writing-tests-for-nextjs
- Errors have a helpful link attached, see https://github.com/vercel/next.js/blob/canary/contributing.md
### Adding a feature
- Implements an existing feature request or RFC. Make sure the feature request has been accepted for implementation before opening a PR. (A discussion must be opened, see https://github.com/vercel/next.js/discussions/new?category=ideas)
- Related issues/discussions are linked using `fixes #number`
- e2e tests added (https://github.com/vercel/next.js/blob/canary/contributing/core/testing.md#writing-tests-for-nextjs)
- Documentation added
- Telemetry added. In case of a feature if it's used or not.
- Errors have a helpful link attached, see https://github.com/vercel/next.js/blob/canary/contributing.md
## For Maintainers
- Minimal description (aim for explaining to someone not on the team to understand the PR)
- When linking to a Slack thread, you might want to share details of the conclusion
- Link both the Linear (Fixes NEXT-xxx) and the GitHub issues
- Add review comments if necessary to explain to the reviewer the logic behind a change
### What?
### Why?
### How?
Closes NEXT-
Fixes #
-->
| a88b102e3ecdc6f2c6fc2a72d5a1bd2cb0d92f70 | [
{
"filename": "turbopack/crates/turbo-tasks-fetch/src/client.rs",
"patch": "@@ -3,7 +3,7 @@ use std::{hash::Hash, sync::LazyLock};\n use anyhow::Result;\n use quick_cache::sync::Cache;\n use turbo_rcstr::RcStr;\n-use turbo_tasks::{ReadRef, Vc, duration_span, mark_session_dependent};\n+use turbo_tasks::{Comp... |
# GitHub Issues + Fixes Dataset
A curated, high-signal dataset of GitHub issues collected from 25 popular open-source repositories.
Each example pairs a real GitHub issue with the exact code changes (diffs) that resolved it.
The dataset is designed for:
- Automated bug fixing
- LLM-based code agents
- Issue → patch generation
- Program repair research
## How the data was extracted
The data was collected using the GitHub REST API and processed into a structured format.
To maintain quality and usefulness:
- Only closed issues were considered
- Each issue must have a clearly associated fix
- Fixes are stored as unified diffs extracted from the resolving commit
- Low-signal issues (questions, duplicates, discussions) were filtered out
- Issues without meaningful code changes were excluded
Each row represents one issue–fix pair.
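The fix-extraction step described above can be sketched as follows. This is an illustrative reconstruction, not the actual pipeline code: the `fetch_fix` and `shape_files` helpers are assumptions, but the endpoint and response fields follow the documented GitHub REST API `GET /repos/{owner}/{repo}/commits/{ref}` response.

```python
import json
import urllib.request


def shape_files(commit_json):
    """Keep only the per-file fields that the dataset stores."""
    return [
        {
            "filename": f["filename"],
            "patch": f.get("patch", ""),  # binary files carry no patch
            "additions": f["additions"],
            "deletions": f["deletions"],
        }
        for f in commit_json.get("files", [])
    ]


def fetch_fix(repo, sha, token=None):
    """Fetch a resolving commit and return its file-level diffs."""
    url = f"https://api.github.com/repos/{repo}/commits/{sha}"
    req = urllib.request.Request(
        url, headers={"Accept": "application/vnd.github+json"}
    )
    if token:  # unauthenticated requests are heavily rate-limited
        req.add_header("Authorization", f"Bearer {token}")
    with urllib.request.urlopen(req, timeout=30) as resp:
        return shape_files(json.load(resp))
```

The same response is also where the low-signal filtering hooks in: commits whose `files` list is empty, or whose patches are all non-code, would be dropped before a row is emitted.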
## Dataset structure
Each dataset entry has the following schema:
```json
{
  "repo": "owner/repository",
  "issue_number": 12345,
  "issue_title": "Short description of the problem",
  "issue_body": "Full issue discussion and problem description",
  "commit_sha": "abcdef123456...",
  "files": [
    {
      "filename": "path/to/file.ext",
      "patch": "unified diff showing the fix",
      "additions": 10,
      "deletions": 2
    }
  ]
}
```
| Field | Description |
|---|---|
| `repo` | GitHub repository where the issue originated |
| `issue_number` | Original GitHub issue number |
| `issue_title` | Title of the issue |
| `issue_body` | Full issue description and context |
| `commit_sha` | Commit that fixed the issue |
| `files` | List of modified files |
| `files[].filename` | Path of the modified file |
| `files[].patch` | Unified diff representing the fix |
| `files[].additions` | Number of added lines |
| `files[].deletions` | Number of removed lines |
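As a quick sanity check on this schema, a row's `files` list can be aggregated into per-fix line counts. A minimal sketch — the `diff_stats` helper is illustrative, and the sample row is abridged from the ollama/ollama example shown in the preview:

```python
def diff_stats(row):
    """Aggregate line counts across every file touched by the fix."""
    return {
        "files_changed": len(row["files"]),
        "additions": sum(f["additions"] for f in row["files"]),
        "deletions": sum(f["deletions"] for f in row["files"]),
    }


# Abridged row (patch truncated) from the preview above.
sample_row = {
    "repo": "ollama/ollama",
    "issue_number": 15311,
    "files": [
        {
            "filename": "fs/ggml/ggml.go",
            "patch": "@@ -890,7 +890,6 @@ ...",
            "additions": 0,
            "deletions": 1,
        },
    ],
}

print(diff_stats(sample_row))
# → {'files_changed': 1, 'additions': 0, 'deletions': 1}
```

With the 🤗 `datasets` library, full rows come from `load_dataset("<dataset-id>", split="train")`; the dataset ID placeholder is left unfilled since it is not stated in the card.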
## Supported languages
The dataset contains fixes across multiple programming languages, including (but not limited to):
- C / C++
- Python
- JavaScript / TypeScript
- Rust
- Go
- Java
- Assembly (very rare)
Language distribution varies by repository.
## Intended use cases
This dataset is well-suited for:
- Training models to generate code patches from issue descriptions
- Evaluating LLM reasoning over real-world bug reports
- Building autonomous debugging or refactoring agents
- Research on program repair, code synthesis, and software maintenance
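For the issue → patch use case, one row maps naturally onto a prompt/completion pair. A hedged sketch — the template and the `to_example` name are assumptions, not a prescribed format; note that `issue_body` can be null and is handled accordingly:

```python
def to_example(row):
    """Turn one issue–fix row into a supervised training example."""
    prompt = (
        f"Repository: {row['repo']}\n"
        f"Issue #{row['issue_number']}: {row['issue_title']}\n\n"
        f"{row['issue_body'] or ''}\n\n"  # issue_body may be null
        "Produce a unified diff that fixes this issue."
    )
    # Concatenate per-file patches, each prefixed with its path.
    completion = "\n\n".join(
        f"--- {f['filename']}\n{f['patch']}" for f in row["files"]
    )
    return {"prompt": prompt, "completion": completion}
```

For patch generation specifically, the `commit_sha` field also lets an evaluation harness check out the pre-fix tree and apply a model's diff against the exact code state the human fix targeted.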
It is not intended for:
- Issue classification
- Sentiment analysis
- Chatbot fine-tuning without code generation
## Limitations
- The dataset reflects real-world noise from GitHub issues
- Issue descriptions vary widely in clarity and detail
- Some fixes involve refactoring or design changes rather than minimal patches
- No guarantee that all fixes are optimal or best practice
Warning: This dataset currently contains issues from 10 of the 25 repositories (~14k rows); once complete, it is expected to reach ~50k rows and ~2 GB in size.