The Registry of Open Data on AWS is now available on AWS Data Exchange
All datasets on the Registry of Open Data are now discoverable on AWS Data Exchange alongside 3,000+ existing data products from category-leading data providers across industries. Explore the catalog to find open, free, and commercial datasets. Learn more about AWS Data Exchange.

About

This registry exists to help people discover and share datasets that are available via AWS resources. See recent additions and learn more about sharing data on AWS.

See all usage examples for datasets listed in this registry tagged with natural language processing.


Search datasets (currently 40 matching datasets)

You are currently viewing a subset of data tagged with natural language processing.


Add to this registry

If you want to add a dataset or example of how to use a dataset to this registry, please follow the instructions on the Registry of Open Data on AWS GitHub repository.

Unless specifically stated in the applicable dataset documentation, datasets available through the Registry of Open Data on AWS are not provided and maintained by AWS. Datasets are provided and maintained by a variety of third parties under a variety of licenses. Please check dataset licenses and related documentation to determine if a dataset may be used for your application.


Tell us about your project

If you have a project using a listed dataset, please tell us about it. We may work with you to feature your project in a blog post.

Common Crawl

encyclopedic, internet, natural language processing, web archive

A corpus of web crawl data composed of over 50 billion web pages.

Details →

Usage examples

See 35 usage examples →
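As a first look at how the Common Crawl corpus is laid out, here is a minimal sketch that lists the monthly crawl prefixes in the public s3://commoncrawl bucket using anonymous (unsigned) S3 access. The bucket name and the crawl-data/ prefix follow Common Crawl's published layout; everything else is illustrative.

```python
import boto3
from botocore import UNSIGNED
from botocore.config import Config

# Anonymous (unsigned) client: the commoncrawl bucket allows public reads.
s3 = boto3.client("s3", config=Config(signature_version=UNSIGNED))

# Each monthly crawl lives under crawl-data/, e.g. crawl-data/CC-MAIN-2024-10/.
resp = s3.list_objects_v2(Bucket="commoncrawl", Prefix="crawl-data/", Delimiter="/")
for prefix in resp.get("CommonPrefixes", []):
    print(prefix["Prefix"])
```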

Sudachi Language Resources

natural language processing

Japanese dictionaries and pre-trained models (word embeddings and language models) for natural language processing. SudachiDict is the dictionary for the Japanese tokenizer (morphological analyzer) Sudachi. chiVe is a set of Japanese pretrained word embeddings (word vectors) trained on the ultra-large-scale web corpus NWJC by the National...

Details →

Usage examples

See 20 usage examples →
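One way to work with the chiVe embeddings is through gensim, one of the formats the project distributes. A minimal sketch, assuming a downloaded KeyedVectors release; the file name here is hypothetical, so substitute the version you fetched from the dataset's resources.

```python
from gensim.models import KeyedVectors

# Hypothetical file name: replace with the chiVe release you downloaded.
kv = KeyedVectors.load("chive-1.2-mc5.kv")

# Nearest neighbours of a word by cosine similarity over the embeddings.
for word, score in kv.most_similar("猫", topn=5):
    print(word, round(score, 3))
```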

Synthea synthetic patient generator data in OMOP Common Data Model

bioinformatics, health, life sciences, natural language processing, us

The Synthea-generated data is provided here as 1,000-person (1k), 100,000-person (100k), and 2,800,000-person (2.8m) datasets in the OMOP Common Data Model format. Synthea™ is a synthetic patient generator that models the medical history of synthetic patients. Our mission is to output high-quality synthetic, realistic but not real, patient data and associated health records covering every aspect of healthcare. The resulting data is free from cost, privacy, and security restrictions. It can be used without restriction for a variety of secondary uses in academia, research, industry, and gov...

Details →

Usage examples

See 4 usage examples →

MIMIC-III (‘Medical Information Mart for Intensive Care’)

bioinformatics, health, life sciences, natural language processing, us

MIMIC-III (‘Medical Information Mart for Intensive Care’) is a large, single-center database comprising information relating to patients admitted to critical care units at a large tertiary care hospital. Data includes vital signs, medications, laboratory measurements, observations and notes charted by care providers, fluid balance, procedure codes, diagnostic codes, imaging reports, hospital length of stay, survival data, and more. The database supports applications including academic and industrial research, quality improvement initiatives, and higher education coursework. The MIMIC-I...

Details →

Usage examples

See 3 usage examples →

REDASA COVID-19 Open Data

coronavirus, COVID-19, information retrieval, life sciences, natural language processing, text analysis

The REaltime DAta Synthesis and Analysis (REDASA) COVID-19 snapshot contains the output of the curation protocol produced by our curator community. A detailed description can be found in our paper. The first S3 bucket listed in Resources contains a large collection of medical documents in text format extracted from the CORD-19 dataset, plus other sources deemed relevant by the REDASA consortium. The second S3 bucket contains a series of documents surfaced by Amazon Kendra that were considered relevant for each medical question asked. The final S3 bucket contains the GroundTruth annotations cr...

Details →

Usage examples

See 2 usage examples →

CMS 2008-2010 Data Entrepreneurs’ Synthetic Public Use File (DE-SynPUF) in OMOP Common Data Model

amazon.science, bioinformatics, health, life sciences, natural language processing, us

DE-SynPUF is provided here as 1,000-person (1k), 100,000-person (100k), and 2,300,000-person (2.3m) datasets in the OMOP Common Data Model format. The DE-SynPUF was created with the goal of providing a realistic set of claims data in the public domain while providing the very highest degree of protection to the Medicare beneficiaries’ protected health information. The purposes of the DE-SynPUF are to:

  1. allow data entrepreneurs to develop and create software and applications that may eventually be applied to actual CMS claims data;
  2. train researchers on the use and complexity of conducting analyses with CMS claims data prior to initiating the process to obtain access to actual CMS data; and
  3. support safe data mining innovations that may reveal unan...

Details →

Usage examples

See 4 usage examples →

Common Screens

encyclopedic, internet, natural language processing

A corpus of web screenshots and metadata composed of over 70 million websites.

Details →

Usage examples

See 1 usage example →

Discrete Reasoning Over the content of Paragraphs (DROP)

machine learning, natural language processing

The DROP dataset contains 96K question-and-answer pairs (QAs) over 6.7K paragraphs, split between train (77K QAs), development (9.5K QAs), and a hidden test partition (9.5K QAs).

Details →

Usage examples

See 1 usage example →

End of Term Web Archive Dataset

archives, internet, natural language processing, web archive

The End of Term Web Archive (EOT) captures and saves U.S. Government websites at the end of presidential administrations. The EOT has thus far preserved websites from administration changes in 2008, 2012, 2016, and 2020. Data from these web crawls have been made openly available in several formats in this dataset.

Details →

Usage examples

See 1 usage example →

MultiCoNER Datasets

natural language processing

MultiCoNER 1 is a large multilingual dataset (11 languages) for Named Entity Recognition (NER). It is designed to represent some of the contemporary challenges in NER, including low-context scenarios (short and uncased text), syntactically complex entities such as movie titles, and long-tail entity distributions. MultiCoNER 2 is a large multilingual dataset (12 languages) for fine-grained Named Entity Recognition. Its fine-grained taxonomy contains 36 NE classes, representing real-world challenges for NER where, apart from an entity's surface form, context plays a critical role in disti...

Details →

Usage examples

See 4 usage examples →

Quoref

machine learning, natural language processing

24K Question/Answer (QA) pairs over 4.7K paragraphs, split between train (19K QAs), development (2.4K QAs) and a hidden test partition (2.5K QAs).

Details →

Usage examples

See 1 usage example →

Reasoning Over Paragraph Effects in Situations (ROPES)

json, machine learning, natural language processing

14K QA pairs over 1.7K paragraphs, split between train (10K QAs), development (1.6K QAs), and a hidden test partition (1.7K QAs).

Details →

Usage examples

See 1 usage example →

AI2 TabMCQ: Multiple Choice Questions aligned with the Aristo Tablestore

machine learning, natural language processing

9092 crowd-sourced science questions and 68 tables of curated facts

Details →

AI2 Tablestore (November 2015 Snapshot)

machine learning, natural language processing

68 tables of curated facts

Details →

Aristo Tuple KB

machine learning, natural language processing

294,000 science-relevant tuples

Details →

Gretel Synthetic Safety Alignment Dataset

ai safety, machine learning, natural language processing, synthetic data

A comprehensive dataset designed for aligning language models with safety and ethical guidelines. Contains 8,361 curated triplets of prompts, responses, and safe responses across various risk categories. Each entry includes safety scores, judge reasoning, and harm probability assessments, making it valuable for model alignment, testing, and benchmarking.

Details →

Usage examples

See 3 usage examples →

NLP - fast.ai datasets

deep learning, machine learning, natural language processing

Some of the most important datasets for NLP, with a focus on classification, including IMDb, AG-News, Amazon Reviews (polarity and full), Yelp Reviews (polarity and full), Dbpedia, Sogou News (Pinyin), Yahoo Answers, Wikitext 2 and Wikitext 103, and ACL-2010 French-English 10^9 corpus. This is part of the fast.ai datasets collection hosted by AWS for convenience of fast.ai students. See documentation link for citation and license details for each dataset.

Details →

Provision of Web-Scale Parallel Corpora for Official European Languages (ParaCrawl)

machine translation, natural language processing

ParaCrawl is a set of large parallel corpora to/from English for all official EU languages, produced by a broad web crawling effort. State-of-the-art methods are applied across the entire processing chain, from identifying web sites with translated text all the way to collecting, cleaning, and delivering parallel corpora that are ready as training data for CEF.AT and as translation memories for DG Translation.

Details →

The Massively Multilingual Image Dataset (MMID)

computer vision, machine learning, machine translation, natural language processing

MMID is a large-scale, massively multilingual dataset of images paired with the words they represent, collected at the University of Pennsylvania. The dataset is doubly parallel: for each language, words are stored parallel to images that represent the word, and parallel to the word's translation into English (and corresponding images).

Details →

ZEST: ZEroShot learning from Task descriptions

machine learning, natural language processing

ZEST is a benchmark for zero-shot generalization to unseen NLP tasks, with 25K labeled instances across 1,251 different tasks.

Details →

ABEJA CC JA

internet, japanese, natural language processing, web archive

A large Japanese language corpus created by preprocessing Common Crawl data.

Details →

Usage examples

See 2 usage examples →

Amazon-PQA

amazon.science, machine learning, natural language processing

Amazon product questions and their answers, along with the public product information.

Details →

Usage examples

See 1 usage example →

Answer Reformulation

amazon.science, machine learning, natural language processing

Original StackExchange answers and their voice-friendly reformulations.

Details →

Usage examples

See 1 usage example →

Automatic Speech Recognition (ASR) Error Robustness

amazon.science, deep learning, machine learning, natural language processing, speech recognition

Sentence classification datasets with ASR errors.

Details →

Usage examples

See 1 usage example →

DialoGLUE: A Natural Language Understanding Benchmark for Task-Oriented Dialogue

amazon.science, conversation data, machine learning, natural language processing

This bucket contains the checkpoints used to reproduce the baseline results reported in the DialoGLUE benchmark hosted on EvalAI (https://evalai.cloudcv.org/web/challenges/challenge-page/708/overview). The associated scripts for using the checkpoints are located here: https://github.com/alexa/dialoglue. The associated paper describing the benchmark and checkpoints is here: https://arxiv.org/abs/2009.13570. The provided checkpoints include the CONVBERT model, a BERT-esque model trained on a large open-domain conversational dataset. It also includes the CONVBERT-DG and BERT-DG checkpoints descri...

Details →

Usage examples

See 1 usage example →

Enriched Topical-Chat Dataset for Knowledge-Grounded Dialogue Systems

amazon.science, conversation data, machine learning, natural language processing

This dataset provides extra annotations on top of the publicly released Topical-Chat dataset (https://github.com/alexa/Topical-Chat), which will help in reproducing the results in our paper "Policy-Driven Neural Response Generation for Knowledge-Grounded Dialogue Systems" (https://arxiv.org/abs/2005.12529?context=cs.CL). The dataset contains 5 files: train.json, valid_freq.json, valid_rare.json, test_freq.json, and test_rare.json. Each of these files has additional annotations on top of the original Topical-Chat dataset. These specific annotations are: dialogue act annotations a...

Details →

Usage examples

See 1 usage example →

Helpful Sentences from Reviews

amazon.science, information retrieval, json, natural language processing, text analysis

A collection of sentences extracted from customer reviews labeled with their helpfulness score.

Details →

Usage examples

See 1 usage example →

Humor Detection from Product Question Answering Systems

amazon.science, machine learning, natural language processing

This dataset provides labeled humor detection data from product question answering systems. The dataset contains 3 CSV files: Humorous.csv, containing the humorous product questions; Non-humorous-unbiased.csv, containing the non-humorous product questions from the same products as the humorous ones; and...

Details →

Usage examples

See 1 usage example →
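A minimal loading sketch using the two file names given in the description above (the third file name is truncated there, so it is omitted; the pandas usage is illustrative, not part of the dataset):

```python
import pandas as pd

# File names come from the dataset description; download the CSVs
# from the dataset's resources first.
humorous = pd.read_csv("Humorous.csv")
non_humorous = pd.read_csv("Non-humorous-unbiased.csv")

print(len(humorous), "humorous questions")
print(len(non_humorous), "non-humorous questions from the same products")
```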

Humor patterns used for querying Alexa traffic

amazon.science, dialog, machine learning, natural language processing

Humor patterns used for querying Alexa traffic when creating the taxonomy described in the paper "“Alexa, Do You Want to Build a Snowman?” Characterizing Playful Requests to Conversational Agents" by Shani C., Libov A., Tolmach S., Lewin-Eytan L., Maarek Y., and Shahaf D. (CHI LBW 2022). These patterns correspond to the researchers' hypotheses regarding which humor types are likely to appear in Alexa traffic, and were used to query that traffic to evaluate the hypotheses.

Details →

Usage examples

See 1 usage example →

Learning to Rank and Filter - community question answering

amazon.science, machine learning, natural language processing

This dataset provides product-related questions and answers, including answers' quality labels, as part of the paper 'IR Evaluation and Learning in the Presence of Forbidden Documents'.

Details →

Usage examples

See 1 usage example →

Multi Token Completion

amazon.science, machine learning, natural language processing

This dataset provides masked sentences and the multi-token phrases that were masked out of them. We offer 3 datasets: a general-purpose dataset extracted from the Wikipedia and Books corpora, and 2 additional datasets extracted from PubMed abstracts. As for the PubMed data, please be aware that the dataset does not reflect the most current/accurate data available from NLM (it is not being updated). For these datasets, the columns provided for each datapoint are as follows: text, the original sentence; span, the span (phrase) which is masked out; span_lower, the lowercase version of span; r...

Details →

Usage examples

See 1 usage example →
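Given the column layout described above, here is a minimal loading sketch. It assumes the data is distributed as CSV and uses a hypothetical file name; only the columns named in the description are referenced.

```python
import pandas as pd

# Hypothetical file name: use the object actually downloaded from the dataset's bucket.
df = pd.read_csv("multi_token_completion.csv")

# Each row pairs an original sentence with the multi-token span masked out of it.
for _, row in df.head(3).iterrows():
    print(row["text"], "->", row["span"], "/", row["span_lower"])
```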

Multilingual Name Entity Recognition (NER) Datasets with Gazetteer

amazon.science, natural language processing

Name Entity Recognition datasets containing short sentences and queries with low context, including LOWNER, MSQ-NER, ORCAS-NER, and Gazetteers (1.67 million entities). This release contains the multilingual versions of the datasets in Low Context Name Entity Recognition (NER) Datasets with Gazetteer.

Details →

Usage examples

See 1 usage example →

PASS: Perturb-and-Select Summarizer for Product Reviews

amazon.science, natural language processing, text analysis

A collection of product review summaries automatically generated by PASS for 32 Amazon products from the FewSum dataset.

Details →

Usage examples

See 1 usage example →

Phrase Clustering Dataset (PCD)

amazon.science, json, natural language processing

This dataset is part of the paper "McPhraSy: Multi-Context Phrase Similarity and Clustering" by D. N. Cohen et al. (2022). The purpose of PCD is to evaluate the quality of semantic-based clustering of noun phrases. The phrases were collected from the Amazon Review Dataset (https://nijianmo.github.io/amazon/).

Details →

Usage examples

See 1 usage example →

Pre- and post-purchase product questions

amazon.science, machine learning, natural language processing

This dataset provides product-related questions, including their textual content and the gap, in hours, between purchase and posting time. Each question is also associated with related product details, including its ID and title.

Details →

Usage examples

See 1 usage example →

Product Comparison Dataset for Online Shopping

amazon.science, machine learning, natural language processing, online shopping, product comparison

The Product Comparison dataset for online shopping is a new, manually annotated dataset with about 15K human generated sentences, which compare related products based on one or more of their attributes (the first such data we know of for product comparison). It covers ∼8K product sets, their selected attributes, and comparison texts.

Details →

Usage examples

See 1 usage example →

Shopping Humor Generation

amazon.science, commerce, natural language processing

This dataset provides a set of non-shoppable items, which are items that can't be purchased via a virtual assistant (love, vampires, etc.). In addition, for each non-shoppable item, the dataset contains humorous responses generated by two different large language models, and a template-based generation solution that uses a commonsense knowledge graph called ConceptNet. Finally, each row contains a score provided by human annotators judging how funny each response is. The columns provided for each datapoint are as follows: question, the purchase request for the non-shoppable item; answer, a ge...

Details →

Usage examples

See 1 usage example →

WikiSum: Coherent Summarization Dataset for Efficient Human-Evaluation

amazon.science, machine learning, natural language processing

This dataset provides how-to articles from wikihow.com and their summaries, written as a coherent paragraph. The dataset itself is available at wikisum.zip, and contains the article, the summary, the wikihow url, and an official fold (train, val, or test). In addition, human evaluation results are available at wikisum-human-eval...

Details →

Usage examples

See 1 usage example →

Wizard of Tasks

amazon.science, conversation data, dialog, machine learning, natural language processing

Wizard of Tasks (WoT) is a dataset containing conversations for Conversational Task Assistants (CTAs). A CTA is a conversational agent whose goal is to help humans to perform real-world tasks. A CTA can help in exploring available tasks, answering task-specific questions and guiding users through step-by-step instructions. WoT contains about 550 conversations with ~18,000 utterances in two domains, i.e., Cooking and Home Improvement.

Details →

Usage examples

See 1 usage example →

Google Books Ngrams

amazon.science, natural language processing

N-grams are fixed-size tuples of items. In this case the items are words extracted from the Google Books corpus. The n specifies the number of elements in the tuple, so a 5-gram contains five words or characters. The n-grams in this dataset were produced by passing a sliding window over the text of books and outputting a record for each new token.

Details →
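To make the sliding-window description concrete, here is a small illustrative sketch of n-gram extraction (not the actual pipeline used to produce this dataset):

```python
def ngrams(tokens, n):
    """Slide a fixed-size window over the token list, yielding one n-gram per position."""
    return [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]

# A 5-gram window over six tokens yields two records, one per window position.
tokens = "the quick brown fox jumps over".split()
for gram in ngrams(tokens, 5):
    print(" ".join(gram))
```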