Dataset Card for DataComp-12M
This dataset contains the UIDs of DataComp-12M, a 12M-sample subset of DataComp-1B (BestPool filtering). Image-text models trained on DataComp-12M are significantly better than those trained on CC-12M/YFCC-15M, as well as on DataComp-Small/Medium. For details on this dataset and the improved DataCompDR-12M, please see our MobileCLIP paper. The dataset with the original captions is now available at mlfoundations/DataComp-12M. The UIDs per shard match between mlfoundations/DataComp-12M and apple/DataCompDR-12M.
Dataset Details
Dataset Description
DataCompDR is an image-text dataset and an enhancement to the DataComp dataset.
We reinforce the DataComp dataset using our multi-modal dataset reinforcement strategy.
In particular, we create DataCompDR-1B and DataCompDR-12M by reinforcing DataComp-1B (BestPool filtering) and a uniform 12.8M-sample subset of it, DataComp-12M.
We have a one-time generation process, the cost of which is amortized over multiple architectures and extensive ablations.
We generate 5 synthetic captions per image using the coca_ViT-L-14 model in OpenCLIP, and apply strong random image augmentations (10 augmented images per sample for DataCompDR-1B and 30 for DataCompDR-12M).
We compute embeddings of an ensemble of two strong teachers (ViT-L-14 with pretrained weights datacomp_xl_s13b_b90k and openai in OpenCLIP) on the augmented images as well as on the real and synthetic captions.
Embeddings are 1536-D concatenations of 2x768-D vectors.
One seen sample for DataCompDR is a triplet of one randomly augmented image, one ground-truth caption, and one randomly picked synthetic caption.
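As a rough illustration, the sketch below assembles the embeddings for one such sample with OpenCLIP. The model name and pretrained tags (ViT-L-14 with datacomp_xl_s13b_b90k and openai) follow the description above; the placeholder image, captions, and the sample dict layout are hypothetical and not the actual DataCompDR storage schema.

```python
# Sketch only: embed one augmented image, its ground-truth caption, and one
# synthetic caption with the two-teacher ensemble described above. The
# resulting embeddings are 1536-D concatenations of two 768-D vectors.
import random

import torch
import torch.nn.functional as F
import open_clip
from PIL import Image

teachers = []
for tag in ("datacomp_xl_s13b_b90k", "openai"):  # the two teacher checkpoints
    model, _, preprocess = open_clip.create_model_and_transforms(
        "ViT-L-14", pretrained=tag
    )
    teachers.append((model.eval(), preprocess))
tokenizer = open_clip.get_tokenizer("ViT-L-14")

@torch.no_grad()
def image_embedding(pil_image):
    # Concatenate the teachers' normalized 768-D image embeddings -> 1536-D.
    embs = [
        F.normalize(m.encode_image(pre(pil_image).unsqueeze(0)), dim=-1)
        for m, pre in teachers
    ]
    return torch.cat(embs, dim=-1)

@torch.no_grad()
def text_embedding(caption):
    # Same 2x768-D -> 1536-D concatenation for a caption.
    toks = tokenizer([caption])
    embs = [F.normalize(m.encode_text(toks), dim=-1) for m, _ in teachers]
    return torch.cat(embs, dim=-1)

# One "seen sample": a randomly augmented image, the ground-truth caption,
# and one randomly picked synthetic caption (placeholder data below).
augmented = Image.new("RGB", (224, 224))           # stand-in augmented image
gt_caption = "a photo of a dog"                    # stand-in real caption
syn_caption = random.choice(["a dog outdoors", "a small brown dog"])
sample = {
    "image_emb": image_embedding(augmented),       # shape (1, 1536)
    "text_emb": text_embedding(gt_caption),        # shape (1, 1536)
    "syn_text_emb": text_embedding(syn_caption),   # shape (1, 1536)
}
```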
- Curated by: Original data by DataComp and metadata by Apple.
- License: We distribute our metadata under our license. The original image url-text samples and metadata were released by DataComp under the Creative Commons CC-BY-4.0 license. The individual images are under their own copyrights.
- Repository: ml-mobileclip GitHub
- Paper: MobileCLIP paper
- Demo: Coming Soon
Uses
Training with DataCompDR shows a significant learning-efficiency improvement over standard CLIP training. For example, with a single node of 8×A100 GPUs, we achieve 61.7% zero-shot classification accuracy on ImageNet-val in approximately one day when training a ViT-B/16-based CLIP from scratch on DataCompDR-12M. Training with DataCompDR-1B sets new state-of-the-art performance on several metrics (Fig. 2 of the paper) while using only a fraction of the training compute budget of previous works. Using DataCompDR, we demonstrate a 10x-1000x improvement in learning efficiency compared to DataComp.
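For context, zero-shot classification with a CLIP model proceeds roughly as follows; this is a generic OpenCLIP sketch with stand-in class names, a stand-in checkpoint, and a single prompt template, not the paper's evaluation harness.

```python
# Generic zero-shot classification sketch (not the paper's eval code).
import torch
import torch.nn.functional as F
import open_clip
from PIL import Image

model, _, preprocess = open_clip.create_model_and_transforms(
    "ViT-B-16", pretrained="openai"  # stand-in checkpoint for illustration
)
tokenizer = open_clip.get_tokenizer("ViT-B-16")
model.eval()

class_names = ["dog", "cat", "car"]  # stand-ins for the 1000 ImageNet classes
prompts = tokenizer([f"a photo of a {c}" for c in class_names])

with torch.no_grad():
    text_feats = F.normalize(model.encode_text(prompts), dim=-1)
    image = preprocess(Image.new("RGB", (224, 224))).unsqueeze(0)  # stand-in
    image_feats = F.normalize(model.encode_image(image), dim=-1)
    # Cosine similarity against every class prompt; the argmax is the label.
    pred = (image_feats @ text_feats.T).argmax(dim=-1).item()
print(class_names[pred])
```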
Dataset Structure
- uids.txt: List of 12779520 (65536*195) UIDs, one UID per line.
- uids.npy: List of 12779520 (65536*195) UIDs as a NumPy array of type `numpy.dtype("u8,u8")`; see the loading sketch below.
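A minimal loading sketch for these files. The hex-to-(u8, u8) packing below assumes the DataComp tooling convention of splitting each 32-character hex UID into two 64-bit halves (high bits first); verify it against your local copies.

```python
# Sketch: load uids.npy / uids.txt and convert between the 32-hex-char UID
# strings and the structured (uint64, uint64) pairs. The split convention
# (high 64 bits first) is an assumption borrowed from the DataComp tooling.
import numpy as np

def hex_to_pair(uid):
    # "8314f0c2..." (32 hex chars) -> (high 64 bits, low 64 bits)
    return int(uid[:16], 16), int(uid[16:32], 16)

def pair_to_hex(hi, lo):
    return f"{hi:016x}{lo:016x}"

uids = np.load("uids.npy")            # structured array with dtype "u8,u8"
assert uids.dtype == np.dtype("u8,u8")
assert len(uids) == 65536 * 195       # 12779520 UIDs

# Membership test for the first UID listed in uids.txt. The default field
# names for dtype "u8,u8" are "f0" and "f1".
with open("uids.txt") as f:
    hi, lo = hex_to_pair(f.readline().strip())
print(((uids["f0"] == hi) & (uids["f1"] == lo)).any())  # -> True
```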
Citation
MobileCLIP: Fast Image-Text Models through Multi-Modal Reinforced Training. (CVPR 2024) Pavan Kumar Anasosalu Vasu, Hadi Pouransari, Fartash Faghri, Raviteja Vemulapalli, Oncel Tuzel.
@InProceedings{mobileclip2024,
  author    = {Pavan Kumar Anasosalu Vasu and Hadi Pouransari and Fartash Faghri and Raviteja Vemulapalli and Oncel Tuzel},
  title     = {MobileCLIP: Fast Image-Text Models through Multi-Modal Reinforced Training},
  booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
  month     = {June},
  year      = {2024},
}