The Common Crawl corpus contains petabytes of data collected over 12 years of web crawling. The corpus contains raw web page data, metadata extracts and text extracts. Common Crawl data is stored on Amazon Web Services’ Public Data Sets and on multiple academic cloud platforms across the world.
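For programmatic access, each crawl publishes plain-HTTP file listings. A minimal sketch in Python, assuming one example crawl label (check commoncrawl.org for the current list of snapshots):

```python
import gzip
import urllib.request

# Fetch the WARC file listing for one snapshot; "CC-MAIN-2023-50" is an
# example crawl label, not the only or latest one.
url = "https://data.commoncrawl.org/crawl-data/CC-MAIN-2023-50/warc.paths.gz"
with urllib.request.urlopen(url) as resp:
    paths = gzip.decompress(resp.read()).decode().splitlines()

# Each entry is a key that can be appended to https://data.commoncrawl.org/
print(paths[:3])
```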
Attribution-NonCommercial-NoDerivs 4.0 (CC BY-NC-ND 4.0): https://creativecommons.org/licenses/by-nc-nd/4.0/
License information was derived automatically
This dataset supplements the publication "Multilingual Scraper of Privacy Policies and Terms of Service", presented at ACM CSLAW’25, March 25–27, 2025, München, Germany. It includes the first 12 months of scraped policies and terms from about 800k websites; see the concrete numbers below.
The following table lists the number of websites visited per month:
| Month | Number of websites |
|---|---|
| 2024-01 | 551'148 |
| 2024-02 | 792'921 |
| 2024-03 | 844'537 |
| 2024-04 | 802'169 |
| 2024-05 | 805'878 |
| 2024-06 | 809'518 |
| 2024-07 | 811'418 |
| 2024-08 | 813'534 |
| 2024-09 | 814'321 |
| 2024-10 | 817'586 |
| 2024-11 | 828'662 |
| 2024-12 | 827'101 |
The number of websites visited should always be higher than the number of jobs (Table 1 of the paper), as a website may redirect (resulting in two websites being scraped) or may have to be retried.
To simplify access, we release the data as large CSVs: one file for policies and another for terms per month. These files contain all the metadata usable for analysis. If your favourite CSV parser reports the same numbers as above, the dataset was parsed correctly. We use ',' as the separator, the first row is the header, and strings are quoted.
Our scraper sometimes collects documents other than policies and terms (for how often this happens, see the evaluation in Sec. 4 of the publication); these may contain personal data, such as addresses of website authors who maintain their sites only for a selected audience. We therefore decided to reduce the risks for websites by anonymizing the data using Presidio. Presidio substitutes personal data with tokens. If your personal data has not been effectively anonymized from the database and you wish for it to be deleted, please contact us.
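For intuition, here is a minimal sketch of the kind of substitution Presidio performs; this is not the authors' exact anonymization pipeline, just the library's standard analyzer/anonymizer flow:

```python
from presidio_analyzer import AnalyzerEngine
from presidio_anonymizer import AnonymizerEngine

analyzer = AnalyzerEngine()
anonymizer = AnonymizerEngine()

text = "For questions, contact Jane Doe at jane.doe@example.org."
findings = analyzer.analyze(text=text, language="en")  # detect PII spans
result = anonymizer.anonymize(text=text, analyzer_results=findings)

# Detected entities are replaced with tokens such as <PERSON> or <EMAIL_ADDRESS>
print(result.text)
```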
The uncompressed dataset is about 125 GB, so you will need sufficient storage. It also means you likely cannot hold all the data in memory at once, which is why we split it by month and into separate files for policies and terms.
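A minimal sketch of reading one monthly file in chunks, using the CSV conventions described above; the file name is hypothetical, so substitute the actual name from the release:

```python
import pandas as pd

# Stream the file in chunks instead of loading everything into memory.
websites = set()
for chunk in pd.read_csv("policies-2024-01.csv", sep=",", quotechar='"',
                         chunksize=100_000):
    websites.update(chunk["website_url"])  # column documented below

print(len(websites), "distinct websites in this file")
```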
The files have the following names:
Both files contain the following metadata columns:
- `website_month_id` - identification of the crawled website
- `job_id` - one website can have multiple jobs in case of redirects (but most commonly has only one)
- `website_index_status` - network state of loading the index page, resolved via the Chrome DevTools Protocol:
  - `DNS_ERROR` - domain cannot be resolved
  - `OK` - all fine
  - `REDIRECT` - domain redirects to somewhere else
  - `TIMEOUT` - the request timed out
  - `BAD_CONTENT_TYPE` - 415 Unsupported Media Type
  - `HTTP_ERROR` - 404 error
  - `TCP_ERROR` - error in the network connection
  - `UNKNOWN_ERROR` - unknown error
- `website_lang` - language of the index page, detected with the langdetect library
- `website_url` - the URL of the website sampled from the CrUX list (may contain subdomains, etc.). Use this as a unique identifier for connecting data between months.
- `job_domain_status` - indicates the status of loading the index page. Can be:
  - `OK` - all works well (at the moment, should be all entries)
  - `BLACKLISTED` - URL is on our list of blocked URLs
  - `UNSAFE` - website is not safe according to Google's Safe Browsing API
  - `LOCATION_BLOCKED` - country is in the list of blocked countries
- `job_started_at` - when the visit of the website was started
- `job_ended_at` - when the visit of the website was ended
- `job_crux_popularity` - JSON with all popularity ranks of the website this month
- `job_index_redirect` - when we detect that the domain redirects us, we stop the crawl and create a new job with the target URL. This saves time if many websites redirect to one target, as it will be crawled only once. The `index_redirect` is then the `job_id` corresponding to the redirect target.
- `job_num_starts` - number of crawlers that started this job (counts restarts in case of an unsuccessful crawl; the maximum is 3)
- `job_from_static` - whether this job was included in the static selection (see Sec. 3.3 of the paper)
- `job_from_dynamic` - whether this job was included in the dynamic selection (see Sec. 3.3 of the paper); not exclusive with `from_static` - both can be true when the lists overlap
- `job_crawl_name` - our name of the crawl; contains year and month (e.g., 'regular-2024-12' for the regular crawl in December 2024)
- `policy_url_id` - ID of the URL of this policy
- `policy_keyword_score` - score (higher is better) according to the crawler's keyword list that the given document is a policy
- `policy_ml_probability` - probability assigned by the BERT model that the given document is a policy
- `policy_consideration_basis` - the basis on which we decided that this URL is a policy. The following three options are executed by the crawler in this order:
- `policy_url` - full URL of the policy
- `policy_content_hash` - used as an identifier; if the document remained the same between crawls, no new entry is created
- `policy_content` - the text of the policy, extracted to Markdown using Mozilla's readability library
- `policy_lang` - language of the content, detected by fasttext

The terms columns are analogous to the policy columns; just substitute `policy` with `terms`.
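Since `website_url` is stable across months and `policy_content_hash` identifies document versions, changes can be tracked with a simple join. A sketch with hypothetical file names:

```python
import pandas as pd

jan = pd.read_csv("policies-2024-01.csv")
feb = pd.read_csv("policies-2024-02.csv")

# Connect the two months on website_url and flag policies whose content
# hash changed between crawls.
merged = jan.merge(feb, on="website_url", suffixes=("_jan", "_feb"))
changed = merged[merged["policy_content_hash_jan"]
                 != merged["policy_content_hash_feb"]]
print(len(changed), "websites changed their policy between the two months")
```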
Check this Google Doc for an updated version of this README.md.
https://dataintelo.com/privacy-and-policy
The global crawler camera market size was valued at USD 1.5 billion in 2023 and is projected to reach USD 3.2 billion by 2032, growing at a CAGR of 8.2% during the forecast period. The significant growth factors contributing to this remarkable market expansion include urbanization, increasing infrastructural development, and a rising need for efficient inspection systems across various sectors such as municipal, industrial, residential, and commercial applications.
One of the primary growth drivers of the crawler camera market is the escalating use of sewer and pipeline inspection systems. With increasing urbanization, the need for effective wastewater management systems has risen exponentially. Crawler cameras are essential in identifying blockages, leaks, and damages within sewer and pipeline systems, making them indispensable in municipal operations. Moreover, stringent regulations regarding environmental safety and the maintenance of public infrastructure further bolster the demand for advanced inspection solutions, thereby driving market growth.
Another significant growth factor is the technological advancements in crawler camera systems. The integration of high-definition cameras, sensors, and data analytics in crawler systems enhances their functionality and efficiency. These technological improvements allow for better visualization, accurate diagnostics, and more reliable data collection. As industries strive for improved operational efficiency and preventive maintenance, the adoption of advanced crawler camera systems is set to increase, fostering market growth.
The expanding industrial sector also contributes to the growing demand for crawler cameras. Industries such as oil and gas, manufacturing, and chemical processing require regular inspection of their pipelines and storage tanks. Crawler cameras offer a non-invasive and effective solution for monitoring the condition of these infrastructures, ensuring safety, compliance, and operational efficiency. The rising investments in industrial infrastructure and the emphasis on preventive maintenance are likely to further propel the crawler camera market.
From a regional perspective, North America holds a significant share of the crawler camera market, driven by robust infrastructure development, stringent regulatory standards, and high adoption of advanced technologies. Meanwhile, the Asia Pacific region is expected to witness the fastest growth during the forecast period, attributed to rapid urbanization, increasing industrial activities, and growing investments in infrastructure projects. Europe, Latin America, and the Middle East & Africa also present substantial growth opportunities, driven by ongoing infrastructural developments and increasing awareness regarding the benefits of crawler camera systems.
The crawler camera market by product type is segmented into push cameras, crawler cameras, and pole cameras. Push cameras are predominantly used for shorter, smaller-diameter pipes and tend to be more affordable and accessible. Their compact size and ease of use make them ideal for residential and small-scale commercial applications. With technological advancements, push cameras now come with enhanced features like high-resolution imaging and robust maneuverability, increasing their adoption across various end-user segments.
Crawler cameras, on the other hand, are designed for more extensive and complex inspection tasks. These devices are typically used for larger diameter pipelines and longer inspection runs. They offer superior mobility and often come with features such as adjustable tracks, pan-and-tilt camera heads, and integrated lighting, making them highly effective in industrial and municipal applications. This segment is expected to witness substantial growth due to the increasing need for advanced inspection solutions in large-scale infrastructure projects.
Pole cameras are used for above-ground inspections and are particularly useful in situations where access is limited or when a quick visual assessment is required. These cameras are commonly used in tank inspections, elevated pipe systems, and other applications where maneuverability is essential. The pole camera segment is experiencing growth due to its versatility and the increasing demand for efficient, non-invasive inspection tools in various industries.
CC0 1.0 Universal Public Domain Dedication: https://creativecommons.org/publicdomain/zero/1.0/
License information was derived automatically
The Greek-English parallel corpus MaCoCu-el-en 1.0 was built by crawling the “.gr”, “.ελ”, “.cy” and “.eu” internet top-level domains in 2023, extending the crawl dynamically to other domains as well.
The entire crawling process was carried out by the MaCoCu crawler (https://github.com/macocu/MaCoCu-crawler). Websites containing documents in both target languages were identified and processed using the tool Bitextor (https://github.com/bitextor/bitextor). Considerable effort was devoted to cleaning the extracted text to provide a high-quality parallel corpus: boilerplate and near-duplicated paragraphs were removed, as were documents not in one of the targeted languages. Document and segment alignment as implemented in Bitextor were carried out, and Bifixer (https://github.com/bitextor/bifixer) and BicleanerAI (https://github.com/bitextor/bicleaner-ai) were used for fixing, cleaning, and deduplicating the final version of the corpus.
The corpus is available in three formats: two sentence-level formats, TXT and TMX, and a document-level TXT format. TMX is an XML-based format and TXT is a tab-separated format. Both consist of pairs of source and target segments (one or several sentences) and additional metadata. The following metadata is included in both sentence-level formats:
- source and target document URL;
- paragraph ID, which includes information on the position of the sentence in the paragraph and in the document (e.g., “p35:77s1/3” means “paragraph 35 out of 77, sentence 1 out of 3”);
- quality score as provided by the tool Bicleaner AI (the likelihood of a pair of sentences being mutual translations, given as a score between 0 and 1);
- similarity score as provided by the sentence alignment tool Bleualign (value between 0 and 1);
- personal information identification (“biroamer-entities-detected”): segments containing personal information are flagged, so final users of the corpus can decide whether to use these segments;
- translation direction and machine translation identification (“translation-direction”): the source segment in each segment pair was identified using a probabilistic model (https://github.com/RikVN/TranslationDirection), which also determines whether the translation was produced by a machine-translation system;
- a DSI class (“dsi”): whether the segment is connected to any of the Digital Service Infrastructure (DSI) classes (e.g., cybersecurity, e-health, e-justice, open-data-portal) defined by the Connecting Europe Facility (https://github.com/RikVN/DSI);
- English language variant: the variant of English (British or American), identified on document and domain level using a lexicon-based English variety classifier (https://pypi.org/project/abclf/).

Furthermore, the sentence-level TXT format provides additional metadata:
- web domain of the text;
- source and target document title;
- the date when the original file was retrieved;
- the original type of the file (e.g., “html”) from which the sentence was extracted;
- paragraph quality (labels such as “short” or “good”, assigned based on paragraph length, URL and stopword density via the jusText tool - https://corpus.tools/wiki/Justext);
- whether the sentence is a heading in the original document.
The document-level TXT format provides pairs of documents identified to contain parallel data. In addition to the parallel documents (in base64 format), the corpus includes the following metadata: source and target document URL, a DSI category and the English language variant (British or American).
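Both release formats are straightforward to consume. A minimal sketch under assumed file names: the `tu`/`tuv`/`seg` structure is standard TMX, while the column positions in the document-level TXT are an assumption to verify against the actual files:

```python
import base64
import xml.etree.ElementTree as ET

# Sentence level: iterate aligned segment pairs in the TMX.
tree = ET.parse("MaCoCu-el-en.tmx")  # hypothetical file name
for tu in tree.getroot().iter("tu"):
    segments = [seg.text for seg in tu.iter("seg")]  # [source, target]

# Document level: parallel documents are base64-encoded fields in a
# tab-separated file; the column index below is an assumption.
with open("MaCoCu-el-en.documents.txt", encoding="utf-8") as f:
    for line in f:
        fields = line.rstrip("\n").split("\t")
        src_doc = base64.b64decode(fields[2]).decode("utf-8", errors="replace")
```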
Notice and take down: Should you consider that our data contains material that is owned by you and should therefore not be reproduced here, please: (1) Clearly identify yourself, with detailed contact data such as an address, telephone number or email address at which you can be contacted. (2) Clearly identify the copyrighted work claimed to be infringed. (3) Clearly identify the material that is claimed to be infringing and information reasonably sufficient in order to allow us to locate the material. (4) Please write to the contact person for this resource whose email is available in the full item record. We will comply with legitimate requests by removing the affected sources from the next release of the corpus.
This action has received funding from the European Union's Connecting Europe Facility 2014-2020 - CEF Telecom, under Grant Agreement No. INEA/CEF/ICT/A2020/2278341. This communication reflects only the author’s view. The Agency is not responsible for any use that may be made of the information it contains.
https://choosealicense.com/licenses/unknown/
Dataset Card for "wmt19"
Dataset Summary
Warning: There are issues with the Common Crawl corpus data (training-parallel-commoncrawl.tgz):
Non-English files contain many English sentences.
Their "parallel" sentences in English are not aligned: they are uncorrelated with their counterpart.
We have contacted the WMT organizers, and in response, they have indicated that they do not have plans to update the Common Crawl corpus data. Their rationale pertains… See the full description on the dataset page: https://huggingface.co/datasets/wmt/wmt19.
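The dataset can still be loaded as usual via the Hugging Face datasets library; a minimal sketch for one language pair (keep the Common Crawl caveat above in mind for the training split):

```python
from datasets import load_dataset

# "de-en" is one of several language-pair configurations.
ds = load_dataset("wmt/wmt19", "de-en", split="validation")
print(ds[0]["translation"])  # {'de': '...', 'en': '...'}
```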
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
rxivist.org allowed readers to sort and filter the tens of thousands of preprints posted to bioRxiv and medRxiv. Rxivist used a custom web crawler to index all papers posted to those two websites; this is a snapshot of the Rxivist production database. The version number indicates the date on which the snapshot was taken. See the included "README.md" file for instructions on how to use the "rxivist.backup" file to import the data into a PostgreSQL database server.
Please note this is a different repository than the one used for the Rxivist manuscript—that is in a separate Zenodo repository. You're welcome (and encouraged!) to use this data in your research, but please cite our paper, now published in eLife.
Previous versions are also available pre-loaded into Docker images, available at blekhmanlab/rxivist_data.
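A minimal sketch of restoring and querying a snapshot locally; the database name is an assumption, and the bundled README.md remains the authoritative guide:

```python
import subprocess

import psycopg2  # pip install psycopg2-binary

# Create a database and restore the snapshot with the PostgreSQL CLI tools.
subprocess.run(["createdb", "rxivist"], check=True)
subprocess.run(["pg_restore", "--dbname", "rxivist", "--no-owner",
                "rxivist.backup"], check=True)

with psycopg2.connect(dbname="rxivist") as conn, conn.cursor() as cur:
    # The "articles" table and its "repo" column are described in the
    # version notes below.
    cur.execute("SELECT repo, COUNT(*) FROM articles GROUP BY repo")
    print(cur.fetchall())
```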
Version notes:
2023-03-01
The final Rxivist data upload, more than four years after the first and encompassing 223,541 preprints posted to bioRxiv and medRxiv through the end of February 2023.
2020-12-07
In addition to bioRxiv preprints, the database now includes all medRxiv preprints as well.
The website where a preprint was posted is now recorded in a new field in the "articles" table, called "repo".
We've significantly refactored the web crawler to take advantage of developments with the bioRxiv API.
The main difference is that preprints flagged as "published" by bioRxiv are no longer recorded on the same schedule that download metrics are updated: The Rxivist database should now record published DOI entries the same day bioRxiv detects them.
Twitter metrics have returned, for the most part. Improvements with the Crossref Event Data API mean we can once again tally daily Twitter counts for all bioRxiv DOIs.
The "crossref_daily" table remains where these are recorded, and daily numbers are now up to date.
Historical daily counts have also been re-crawled to fill in the empty space that started in October 2019.
There are still several gaps that are more than a week long due to missing data from Crossref.
We have recorded available Crossref Twitter data for all papers with DOI numbers starting with "10.1101," which includes all medRxiv preprints. However, there appears to be almost no Twitter data available for medRxiv preprints.
The download metrics for article id 72514 (DOI 10.1101/2020.01.30.927871) were found to be out of date for February 2020 and are now correct. This is notable because article 72514 is the most downloaded preprint of all time; we're still looking into why this wasn't updated after the month ended.
2020-11-18
Publication checks should be back on schedule.
2020-10-26
This snapshot fixes most of the data issues found in the previous version. Indexed papers are now up to date, and download metrics are back on schedule. The check for publication status remains behind schedule, however, and the database may not include published DOIs for papers that have been flagged on bioRxiv as "published" over the last two months. Another snapshot will be posted in the next few weeks with updated publication information.
2020-09-15
A crawler error caused this snapshot to exclude all papers posted after about August 29, with some papers having download metrics that were more out of date than usual. The "last_crawled" field is accurate.
2020-09-08
This snapshot is misconfigured and will not work without modification; it has been replaced with version 2020-09-15.
2019-12-27
Several dozen papers did not have dates associated with them; that has been fixed.
Some authors have had two entries in the "authors" table for portions of 2019, one profile that was linked to their ORCID and one that was not, occasionally with almost identical "name" strings. This happened after bioRxiv began changing author names to reflect the names in the PDFs, rather than the ones manually entered into their system. These database records are mostly consolidated now, but some may remain.
2019-11-29
The Crossref Event Data API remains down; Twitter data is unavailable for dates after early October.
2019-10-31
The Crossref Event Data API is still experiencing problems; the Twitter data for October is incomplete in this snapshot.
The README file has been modified to reflect changes in the process for creating your own DB snapshots if using the newly released PostgreSQL 12.
2019-10-01
The Crossref API is back online, and the "crossref_daily" table should now include up-to-date tweet information for July through September.
About 40,000 authors were removed from the author table because the name had been removed from all preprints they had previously been associated with, likely because their name changed slightly on the bioRxiv website ("John Smith" to "J Smith" or "John M Smith"). The "author_emails" table was also modified to remove entries referring to the deleted authors. The web crawler is being updated to clean these orphaned entries more frequently.
2019-08-30
The Crossref Event Data API, which provides the data used to populate the table of tweet counts, has not been fully functional since early July. While we are optimistic that accurate tweet counts will be available at some point, the sparse values currently in the "crossref_daily" table for July and August should not be considered reliable.
2019-07-01
A new "institution" field has been added to the "article_authors" table that stores each author's institutional affiliation as listed on that paper. The "authors" table still has each author's most recently observed institution.
We began collecting this data in the middle of May, but it has not been applied to older papers yet.
2019-05-11
The README was updated to correct a link to the Docker repository used for the pre-built images.
2019-03-21
The license for this dataset has been changed to CC-BY, which allows use for any purpose and requires only attribution.
A new table, "publication_dates," has been added and will be continually updated. This table will include an entry for each preprint that has been published externally for which we can determine a date of publication, based on data from Crossref. (This table was previously included in the "paper" schema but was not updated after early December 2018.)
Foreign key constraints have been added to almost every table in the database. This should not impact any read behavior, but anyone writing to these tables will encounter constraints on existing fields that refer to other tables. Most frequently, this means the "article" field in a table will need to refer to an ID that actually exists in the "articles" table.
The "author_translations" table has been removed. This was used to redirect incoming requests for outdated author profile pages and was likely not of any functional use to others.
The "README.md" file has been renamed "1README.md" because Zenodo only displays a preview for the file that appears first in the list alphabetically.
The "article_ranks" and "article_ranks_working" tables have been removed as well; they were unused.
2019-02-13.1
After consultation with bioRxiv, the "fulltext" table will not be included in further snapshots until (and if) concerns about licensing and copyright can be resolved.
The "docker-compose.yml" file was added, with corresponding instructions in the README to streamline deployment of a local copy of this database.
2019-02-13
The redundant "paper" schema has been removed.
BioRxiv has begun making the full text of preprints available online. Beginning with this version, a new table ("fulltext") is available that contains the text of preprints that have already been processed. The format in which this information is stored may change in the future; any deviation will be noted here.
This is the first version that has a corresponding Docker image.
https://dataintelo.com/privacy-and-policy
The global sewer crawler cameras market size is projected to witness substantial growth, reaching approximately USD 700 million in 2023 and anticipated to expand to USD 1.05 billion by 2032, exhibiting a compound annual growth rate (CAGR) of around 4.5% during the forecast period. This growth is primarily driven by the increasing demand for efficient sewer inspection solutions, facilitated by the rising urbanization and industrialization across the globe which necessitates effective infrastructure maintenance and management. The need for regular inspection and maintenance of underground sewer systems to prevent blockages, leaks, and other operational challenges is a significant growth factor for this market.
One of the primary growth factors in the sewer crawler cameras market is the increasing emphasis on smart infrastructure development. Cities worldwide are investing heavily in upgrading their infrastructure to become smarter and more efficient. Smart sewer inspection technologies, such as crawler cameras, play a crucial role in identifying issues before they become major problems, helping cities save on repair costs and improve public health and safety. Additionally, government regulations mandating regular inspections of sewer systems to ensure environmental safety contribute significantly to the market's growth. The integration of advanced technologies like AI and IoT in sewer inspection processes is anticipated to further bolster market demand, enabling more precise and efficient inspections.
Technological advancements in the field of sewer inspection are another driving force behind the growth of this market. The advent of high-resolution cameras, robust crawler designs, and advanced data capturing and processing capabilities have significantly enhanced the efficiency of sewer inspections. These innovations allow for detailed analysis and real-time monitoring of sewer systems, which is critical for timely maintenance and repair works. Furthermore, the growing trend of automation in sewer management systems aids in reducing human intervention, thereby minimizing errors and enhancing operational efficiency. This trend is particularly appealing to municipal and industrial sectors, which require consistent and reliable sewer inspection solutions.
The rise in urban population and subsequent increase in construction activities globally also contribute to the market's growth. Rapid urbanization leads to the expansion of urban sewer networks, necessitating regular and thorough inspections to ensure seamless operations. The construction of new residential, commercial, and industrial facilities demands efficient sewer systems, thus fueling demand for crawler camera inspections. Moreover, as more countries embark on infrastructure development projects, the requirement for efficient sewer inspection solutions is expected to surge, propelling the market forward. The need for maintaining stringent safety standards in water and wastewater management systems also underscores the importance of regular sewer inspections, supporting market expansion.
CCTV Inspection Cameras have become an integral component of modern sewer inspection systems, offering unparalleled precision and reliability. These cameras are equipped with high-definition imaging capabilities that allow for detailed examination of sewer pipes, identifying potential issues such as blockages, cracks, and leaks. The use of CCTV technology in sewer inspections not only enhances the accuracy of the inspections but also reduces the time and labor involved, making it a cost-effective solution for municipalities and industrial sectors. As urban areas continue to expand, the demand for efficient and reliable sewer inspection solutions like CCTV Inspection Cameras is expected to grow, supporting the overall market expansion.
Regionally, North America holds a significant share of the sewer crawler cameras market, driven by its advanced infrastructure and stringent regulatory standards for sewer inspection and maintenance. The presence of key market players and high adoption rates of modern sewer inspection technologies contribute to the region's dominance. Europe follows closely, with substantial investments in smart city projects and infrastructure development. In the Asia Pacific region, rapid urbanization and industrialization are expected to drive market growth, while the Middle East & Africa and Latin America regions are anticipated to witness moderate growth due to increasing investments in infrastructure development.
CC0 1.0 Universal Public Domain Dedication: https://creativecommons.org/publicdomain/zero/1.0/
License information was derived automatically
The Serbian-English parallel corpus MaCoCu-sr-en 1.0 was built by crawling the “.rs” and “.срб” internet top-level domains in 2021 and 2022, extending the crawl dynamically to other domains as well.
The entire crawling process was carried out by the MaCoCu crawler (https://github.com/macocu/MaCoCu-crawler). Websites containing documents in both target languages were identified and processed using the tool Bitextor (https://github.com/bitextor/bitextor). Considerable effort was devoted to cleaning the extracted text to provide a high-quality parallel corpus: boilerplate and near-duplicated paragraphs were removed, as were documents not in one of the targeted languages. Document and segment alignment as implemented in Bitextor were carried out, and Bifixer (https://github.com/bitextor/bifixer) and BicleanerAI (https://github.com/bitextor/bicleaner-ai) were used for fixing, cleaning, and deduplicating the final version of the corpus.
The corpus is available in three formats: two sentence-level formats, TXT and TMX, and a document-level TXT format. In each format, the texts are separated based on the script into two files: a Latin and a Cyrillic subcorpus. TMX is an XML-based format and TXT is a tab-separated format. Both consist of pairs of source and target segments (one or several sentences) and additional metadata. The following metadata is included in both sentence-level formats:
- source and target document URL;
- paragraph ID, which includes information on the position of the sentence in the paragraph and in the document (e.g., “p35:77s1/3” means “paragraph 35 out of 77, sentence 1 out of 3”);
- quality score as provided by the tool Bicleaner AI (the likelihood of a pair of sentences being mutual translations, given as a score between 0 and 1);
- similarity score as provided by the sentence alignment tool Bleualign (value between 0 and 1);
- personal information identification (“biroamer-entities-detected”): segments containing personal information are flagged, so final users of the corpus can decide whether to use these segments;
- translation direction and machine translation identification (“translation-direction”): the source segment in each segment pair was identified using a probabilistic model (https://github.com/RikVN/TranslationDirection), which also determines whether the translation was produced by a machine-translation system;
- a DSI class (“dsi”): whether the segment is connected to any of the Digital Service Infrastructure (DSI) classes (e.g., cybersecurity, e-health, e-justice, open-data-portal) defined by the Connecting Europe Facility (https://github.com/RikVN/DSI);
- English language variant: the variant of English (British or American), identified on document and domain level using a lexicon-based English variety classifier (https://pypi.org/project/abclf/).

Furthermore, the sentence-level TXT format provides additional metadata:
- web domain of the text;
- source and target document title;
- the date when the original file was retrieved;
- the original type of the file (e.g., “html”) from which the sentence was extracted;
- paragraph quality (labels such as “short” or “good”, assigned based on paragraph length, URL and stopword density via the jusText tool - https://corpus.tools/wiki/Justext);
- whether the sentence is a heading in the original document.
The document-level TXT format provides pairs of documents identified to contain parallel data. In addition to the parallel documents (in base64 format), the corpus includes the following metadata: source and target document URL, a DSI category and the English language variant (British or American).
Notice and take down: Should you consider that our data contains material that is owned by you and should therefore not be reproduced here, please: (1) Clearly identify yourself, with detailed contact data such as an address, telephone number or email address at which you can be contacted. (2) Clearly identify the copyrighted work claimed to be infringed. (3) Clearly identify the material that is claimed to be infringing and information reasonably sufficient in order to allow us to locate the material. (4) Please write to the contact person for this resource whose email is available in the full item record. We will comply with legitimate requests by removing the affected sources from the next release of the corpus.
This action has received funding from the European Union's Connecting Europe Facility 2014-2020 - CEF Telecom, under Grant Agreement No. INEA/CEF/ICT/A2020/2278341. This communication reflects only the author’s view. The Agency is not responsible for any use that may be made of the information it contains.
CC0 1.0 Universal Public Domain Dedication: https://creativecommons.org/publicdomain/zero/1.0/
License information was derived automatically
The Turkish-English parallel corpus MaCoCu-tr-en 2.0 was built by crawling the “.tr” and “.cy” internet top-level domains in 2021, extending the crawl dynamically to other domains as well.
The entire crawling process was carried out by the MaCoCu crawler (https://github.com/macocu/MaCoCu-crawler). Websites containing documents in both target languages were identified and processed using the tool Bitextor (https://github.com/bitextor/bitextor). Considerable effort was devoted to cleaning the extracted text to provide a high-quality parallel corpus: boilerplate and near-duplicated paragraphs were removed, as were documents not in one of the targeted languages. Document and segment alignment as implemented in Bitextor were carried out, and Bifixer (https://github.com/bitextor/bifixer) and BicleanerAI (https://github.com/bitextor/bicleaner-ai) were used for fixing, cleaning, and deduplicating the final version of the corpus.
The corpus is available in three formats: two sentence-level formats, TXT and TMX, and a document-level TXT format. TMX is an XML-based format and TXT is a tab-separated format. Both consist of pairs of source and target segments (one or several sentences) and additional metadata. The following metadata is included in both sentence-level formats:
- source and target document URL;
- paragraph ID, which includes information on the position of the sentence in the paragraph and in the document (e.g., “p35:77s1/3” means “paragraph 35 out of 77, sentence 1 out of 3”);
- quality score as provided by the tool Bicleaner AI (the likelihood of a pair of sentences being mutual translations, given as a score between 0 and 1);
- similarity score as provided by the sentence alignment tool Bleualign (value between 0 and 1);
- personal information identification (“biroamer-entities-detected”): segments containing personal information are flagged, so final users of the corpus can decide whether to use these segments;
- translation direction and machine translation identification (“translation-direction”): the source segment in each segment pair was identified using a probabilistic model (https://github.com/RikVN/TranslationDirection), which also determines whether the translation was produced by a machine-translation system;
- a DSI class (“dsi”): whether the segment is connected to any of the Digital Service Infrastructure (DSI) classes (e.g., cybersecurity, e-health, e-justice, open-data-portal) defined by the Connecting Europe Facility (https://github.com/RikVN/DSI);
- English language variant: the variant of English (British or American), identified on document and domain level using a lexicon-based English variety classifier (https://pypi.org/project/abclf/).

Furthermore, the sentence-level TXT format provides additional metadata:
- web domain of the text;
- source and target document title;
- the date when the original file was retrieved;
- the original type of the file (e.g., “html”) from which the sentence was extracted;
- paragraph quality (labels such as “short” or “good”, assigned based on paragraph length, URL and stopword density via the jusText tool - https://corpus.tools/wiki/Justext);
- whether the sentence is a heading in the original document.
The document-level TXT format provides pairs of documents identified to contain parallel data. In addition to the parallel documents (in base64 format), the corpus includes the following metadata: source and target document URL, a DSI category and the English language variant (British or American).
As opposed to the previous version, this version has more accurate metadata on languages of the texts, which was achieved by using Google's Compact Language Detector 2 (CLD2) (https://github.com/CLD2Owners/cld2), a high-performance language detector supporting many languages. Other tools, used for web corpora creation and curation, have been updated as well, resulting in an even cleaner corpus. The new version also provides additional metadata, such as the position of the sentence in the paragraph and document, and information whether the sentence is related to a DSI. Moreover, the corpus is now also provided in a document-level format.
Notice and take down: Should you consider that our data contains material that is owned by you and should therefore not be reproduced here, please: (1) Clearly identify yourself, with detailed contact data such as an address, telephone number or email address at which you can be contacted. (2) Clearly identify the copyrighted work claimed to be infringed. (3) Clearly identify the material that is claimed to be infringing and information reasonably sufficient in order to allow us to locate the material. (4) Please write to the contact person for this resource whose email is available in the full item record. We will comply with legitimate requests by removing the affected sources from the next release of the corpus.
This action has received funding from the European Union's Connecting Europe Facility 2014-2020 - CEF Telecom, under Grant Agreement No. INEA/CEF/ICT/A2020/2278341. This communication reflects only the author’s view. The Agency is not responsible for any use that may be made of the information it contains.
CC0 1.0 Universal Public Domain Dedication: https://creativecommons.org/publicdomain/zero/1.0/
License information was derived automatically
The Ukrainian-English parallel corpus MaCoCu-uk-en 1.0 was built by crawling the “.ua” and “.укр” internet top-level domains in 2022, extending the crawl dynamically to other domains as well.
The entire crawling process was carried out by the MaCoCu crawler (https://github.com/macocu/MaCoCu-crawler). Websites containing documents in both target languages were identified and processed using the tool Bitextor (https://github.com/bitextor/bitextor). Considerable effort was devoted to cleaning the extracted text to provide a high-quality parallel corpus: boilerplate and near-duplicated paragraphs were removed, as were documents not in one of the targeted languages. Document and segment alignment as implemented in Bitextor were carried out, and Bifixer (https://github.com/bitextor/bifixer) and BicleanerAI (https://github.com/bitextor/bicleaner-ai) were used for fixing, cleaning, and deduplicating the final version of the corpus.
The corpus is available in three formats: two sentence-level formats, TXT and TMX, and a document-level TXT format. TMX is an XML-based format and TXT is a tab-separated format. Both consist of pairs of source and target segments (one or several sentences) and additional metadata. The following metadata is included in both sentence-level formats:
- source and target document URL;
- paragraph ID, which includes information on the position of the sentence in the paragraph and in the document (e.g., “p35:77s1/3” means “paragraph 35 out of 77, sentence 1 out of 3”);
- quality score as provided by the tool Bicleaner AI (the likelihood of a pair of sentences being mutual translations, given as a score between 0 and 1);
- similarity score as provided by the sentence alignment tool Bleualign (value between 0 and 1);
- personal information identification (“biroamer-entities-detected”): segments containing personal information are flagged, so final users of the corpus can decide whether to use these segments;
- translation direction and machine translation identification (“translation-direction”): the source segment in each segment pair was identified using a probabilistic model (https://github.com/RikVN/TranslationDirection), which also determines whether the translation was produced by a machine-translation system;
- a DSI class (“dsi”): whether the segment is connected to any of the Digital Service Infrastructure (DSI) classes (e.g., cybersecurity, e-health, e-justice, open-data-portal) defined by the Connecting Europe Facility (https://github.com/RikVN/DSI);
- English language variant: the variant of English (British or American), identified on document and domain level using a lexicon-based English variety classifier (https://pypi.org/project/abclf/).

Furthermore, the sentence-level TXT format provides additional metadata:
- web domain of the text;
- source and target document title;
- the date when the original file was retrieved;
- the original type of the file (e.g., “html”) from which the sentence was extracted;
- paragraph quality (labels such as “short” or “good”, assigned based on paragraph length, URL and stopword density via the jusText tool - https://corpus.tools/wiki/Justext);
- whether the sentence is a heading in the original document.
The document-level TXT format provides pairs of documents identified to contain parallel data. In addition to the parallel documents (in base64 format), the corpus includes the following metadata: source and target document URL, a DSI category and the English language variant (British or American).
Notice and take down: Should you consider that our data contains material that is owned by you and should therefore not be reproduced here, please: (1) Clearly identify yourself, with detailed contact data such as an address, telephone number or email address at which you can be contacted. (2) Clearly identify the copyrighted work claimed to be infringed. (3) Clearly identify the material that is claimed to be infringing and information reasonably sufficient in order to allow us to locate the material. (4) Please write to the contact person for this resource whose email is available in the full item record. We will comply with legitimate requests by removing the affected sources from the next release of the corpus.
This action has received funding from the European Union's Connecting Europe Facility 2014-2020 - CEF Telecom, under Grant Agreement No. INEA/CEF/ICT/A2020/2278341. This communication reflects only the author’s view. The Agency is not responsible for any use that may be made of the information it contains.