By Homeland Infrastructure Foundation [source]
Within this dataset, users can find numerous attributes that provide insight into various aspects of shoreline construction lines. The Category_o field categorizes these structures based on certain characteristics or purposes they serve. Additionally, each object in the dataset possesses a unique name or identifier represented by the Object_Nam column.
Another crucial piece of information captured in this dataset is the status of each shoreline construction line. The Status field indicates whether a particular structure is currently active or inactive. This helps users understand if it still serves its intended purpose or has been decommissioned.
Furthermore, the dataset includes data pertaining to multiple water levels associated with different shoreline construction lines. This information can be found in the Water_Leve column and provides relevant context for understanding how these artificial coastlines interact with various water bodies.
To aid cartographic representations and proper utilization of this data source for mapping purposes at different scales, there is also an attribute called Scale_Mini. This value denotes the minimum scale necessary to visualize a specific shoreline construction line accurately.
Data sources are important for reproducibility and quality assurance purposes in any GIS analysis project; hence identifying who provided and contributed to collecting this data can be critical in assessing its reliability. In this regard, individuals or organizations responsible for providing source data are specified in the column labeled Source_Ind.
Accompanying descriptive information about each source used to create these shoreline construction lines can be found in the Source_D_1 field. This supplemental information provides additional context and details about the data's origin or collection methodology.
The dataset also includes a numerical attribute called SHAPE_Leng, representing the length of each shoreline construction line. This information complements the geographic and spatial attributes associated with these structures.
Understanding the Categories:
- The Category_o column classifies each shoreline construction line into different categories. This can range from seawalls and breakwaters to jetties and groins.
- Use this information to identify specific types of shoreline constructions based on your analysis needs.
Identifying Specific Objects:
- The Object_Nam column provides unique names or identifiers for each shoreline construction line.
- These identifiers help differentiate between different segments of construction lines in a region.
Determining Status:
- The Status column indicates whether a shoreline construction line is active or inactive.
- Active constructions are still in use and may be actively maintained or monitored.
- Inactive constructions are no longer operational or may have been demolished.
Analyzing Water Levels:
- The Water_Leve column describes the water level at which each shoreline construction line is located.
- Different levels may impact the suitability or effectiveness of these structures based on tidal changes or flood zones.
Exploring Additional Information:
- The Informatio column contains additional details about each shoreline construction line.
- This can include various attributes such as materials used, design specifications, ownership details, etc.
Determining Minimum Visible Scale:
- The Scale_Mini column specifies the minimum scale at which you can observe the coastline's man-made structures clearly.
Verifying Data Sources:
- To understand data reliability and credibility for further analysis, the Source_Ind, Source_D_1, SHAPE_Leng, and Source_Dat columns provide information about the individual or organization that provided the source data, as well as the length and date of the source data used to create the shoreline construction lines.
Utilize this dataset to perform various analyses related to shorelines, coastal developments, navigational channels, and impacts of man-made structures on marine ecosystems. The combination of categories, object names, status, water levels, additional information, minimum visible scale and reliable source information offers a comprehensive understanding of shoreline constructions across different regions.
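As a quick illustration of such analyses, the attribute fields described above can be queried with standard GIS tooling. The snippet below is a minimal sketch, assuming the layer is distributed in a format readable by geopandas; the file name and the exact Status values ("Active"/"Inactive") are assumptions, not part of this documentation.
import geopandas as gpd

# Load the shoreline construction lines layer (file name is a placeholder).
gdf = gpd.read_file("shoreline_construction_lines.shp")

# Keep only active structures (status values assumed) and summarise
# total length by category using the fields described above.
active = gdf[gdf["Status"] == "Active"]
length_by_category = active.groupby("Category_o")["SHAPE_Leng"].sum()
print(length_by_category.sort_values(ascending=False))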
Remember to refer back to the dataset documentation for any specific details.
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Study information
The sample included in this dataset represents five children who participated in a number line intervention study. Originally six children were included in the study, but one of them fulfilled the criterion for exclusion after missing several consecutive sessions, so their data are not included in the dataset. All participants were attending Year 1 of primary school at an independent school in New South Wales, Australia. To be eligible to participate, children had to present with low mathematics achievement by performing at or below the 25th percentile in the Maths Problem Solving and/or Numerical Operations subtests from the Wechsler Individual Achievement Test III (WIAT III A & NZ, Wechsler, 2016). Participants were excluded from participating if, as reported by their parents, they had any other diagnosed disorders such as attention deficit hyperactivity disorder, autism spectrum disorder, intellectual disability, developmental language disorder, cerebral palsy or uncorrected sensory disorders. The study followed a multiple baseline case series design, with a baseline phase, a treatment phase, and a post-treatment phase. The baseline phase varied between two and three measurement points, the treatment phase varied between four and seven measurement points, and all participants had one post-treatment measurement point. The measurement points were distributed across participants as follows:
Participant 1 – 3 baseline, 6 treatment, 1 post-treatment
Participant 3 – 2 baseline, 7 treatment, 1 post-treatment
Participant 5 – 2 baseline, 5 treatment, 1 post-treatment
Participant 6 – 3 baseline, 4 treatment, 1 post-treatment
Participant 7 – 2 baseline, 5 treatment, 1 post-treatment
In each session across all three phases, children were assessed on their performance on a number line estimation task, a single-digit computation task, a multi-digit computation task, a dot comparison task and a number comparison task. Furthermore, during the treatment phase, all children completed the intervention task after these assessments. The order of the assessment tasks varied randomly between sessions.
Measures
Number Line Estimation. Children completed a computerised bounded number line task (0-100). The number line is presented in the middle of the screen, and the target number is presented above the start point of the number line to avoid signalling the midpoint (Dackermann et al., 2018). Target numbers included two non-overlapping sets (trained and untrained) of 30 items each. Untrained items were assessed in all phases of the study. Trained items were assessed independently of the intervention during the baseline and post-treatment phases, and performance on the intervention is used to index performance on the trained set during the treatment phase. Within each set, numbers were equally distributed throughout the number range, with three items within each ten (0-10, 11-20, 21-30, etc.). Target numbers were presented in random order. Participants did not receive performance-based feedback. Accuracy is indexed by percent absolute error (PAE): PAE = (|estimated number − target number| / scale of the number line) × 100.
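For concreteness, the PAE computation for a single trial can be written in a few lines; this is a minimal sketch, assuming estimates and targets are plain numbers on the 0-100 bounded line described above.
def percent_absolute_error(estimate, target, scale=100):
    """Percent absolute error on a bounded number line running from 0 to scale."""
    return abs(estimate - target) / scale * 100

# Example: estimating 37 when the target was 42 on a 0-100 line gives PAE = 5.0.
print(percent_absolute_error(37, 42))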
Single-Digit Computation. The task included ten additions with single-digit addends (1-9) and single-digit results (2-9). The order was counterbalanced so that half of the additions present the lowest addend first (e.g., 3 + 5) and half of the additions present the highest addend first (e.g., 6 + 3). This task also included ten subtractions with single-digit minuends (3-9), subtrahends (1-6) and differences (1-6). The items were presented horizontally on the screen accompanied by a sound and participants were required to give a verbal response. Participants did not receive performance-based feedback. Performance on this task was indexed by item-based accuracy.
Multi-digit computational estimation. The task included eight additions and eight subtractions presented with double-digit numbers and three response options. None of the response options represent the correct result. Participants were asked to select the option that was closest to the correct result. In half of the items the calculation involved two double-digit numbers, and in the other half one double and one single digit number. The distance between the correct response option and the exact result of the calculation was two for half of the trials and three for the other half. The calculation was presented vertically on the screen with the three options shown below. The calculations remained on the screen until participants responded by clicking on one of the options on the screen. Participants did not receive performance-based feedback. Performance on this task is measured by item-based accuracy.
Dot Comparison and Number Comparison. Both tasks included the same 20 items, which were presented twice, counterbalancing left and right presentation. Magnitudes to be compared were between 5 and 99, with four items for each of the following ratios: .91, .83, .77, .71, .67. Both quantities were presented horizontally side by side, and participants were instructed to press one of two keys (F or J), as quickly as possible, to indicate the largest one. Items were presented in random order and participants did not receive performance-based feedback. In the non-symbolic comparison task (dot comparison) the two sets of dots remained on the screen for a maximum of two seconds (to prevent counting). Overall area and convex hull for both sets of dots is kept constant following Guillaume et al. (2020). In the symbolic comparison task (Arabic numbers), the numbers remained on the screen until a response was given. Performance on both tasks was indexed by accuracy.
The Number Line Intervention During the intervention sessions, participants estimated the position of 30 Arabic numbers in a 0-100 bounded number line. As a form of feedback, within each item, the participants’ estimate remained visible, and the correct position of the target number appeared on the number line. When the estimate’s PAE was lower than 2.5, a message appeared on the screen that read “Excellent job”, when PAE was between 2.5 and 5 the message read “Well done, so close! and when PAE was higher than 5 the message read “Good try!” Numbers were presented in random order.
Variables in the dataset
Age = age in ‘years, months’ at the start of the study
Sex = female/male/non-binary or third gender/prefer not to say (as reported by parents)
Math_Problem_Solving_raw = Raw score on the Math Problem Solving subtest from the WIAT III (WIAT III A & NZ, Wechsler, 2016).
Math_Problem_Solving_Percentile = Percentile equivalent on the Math Problem Solving subtest from the WIAT III (WIAT III A & NZ, Wechsler, 2016).
Num_Ops_Raw = Raw score on the Numerical Operations subtest from the WIAT III (WIAT III A & NZ, Wechsler, 2016).
Num_Ops_Percentile = Percentile equivalent on the Numerical Operations subtest from the WIAT III (WIAT III A & NZ, Wechsler, 2016).
The remaining variables refer to participants’ performance on the study tasks. Each variable name is composed of three sections. The first refers to the phase and session. For example, Base1 refers to the first measurement point of the baseline phase, Treat1 to the first measurement point of the treatment phase, and post1 to the first measurement point of the post-treatment phase.
The second part of the variable name refers to the task, as follows:
DC = dot comparison
SDC = single-digit computation
NLE_UT = number line estimation (untrained set)
NLE_T = number line estimation (trained set)
CE = multidigit computational estimation
NC = number comparison
The final part of the variable name refers to the type of measure being used (i.e., acc = total correct responses and pae = percent absolute error).
Thus, variable Base2_NC_acc corresponds to accuracy on the number comparison task during the second measurement point of the baseline phase and Treat3_NLE_UT_pae refers to the percent absolute error on the untrained set of the number line task during the third session of the Treatment phase.
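Because the variable names follow a fixed phase-task-measure pattern, they can be decomposed programmatically. The sketch below is an illustrative parser written for this description, not part of the published materials.
import re

def parse_variable(name):
    """Split a variable name such as 'Treat3_NLE_UT_pae' into its three sections."""
    match = re.match(r"(Base|Treat|post)(\d+)_([A-Za-z_]+)_(acc|pae)$", name, re.IGNORECASE)
    if match is None:
        raise ValueError(f"Unrecognised variable name: {name}")
    phase, session, task, measure = match.groups()
    return {"phase": phase, "session": int(session), "task": task, "measure": measure}

print(parse_variable("Base2_NC_acc"))
print(parse_variable("Treat3_NLE_UT_pae"))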
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
This dataset contains all “license files” extracted from a snapshot of the Software Heritage archive taken on 2022-04-25. (Other, possibly more recent, versions of the datasets can be found at https://annex.softwareheritage.org/public/dataset/license-blobs/).
In this context, a license file is a unique file content (or “blob”) that appeared in a software origin archived by Software Heritage as a file whose name is often used to ship licenses in software projects. Some name examples are: COPYING, LICENSE, NOTICE, COPYRIGHT, etc. The exact file name pattern used to select the blobs contained in the dataset can be found in the SQL query file 01-select-blobs.sql. Note that the file name was not expected to be at the project root, because project subdirectories can contain different licenses than the top-level one, and we wanted to include those too.
Format
The dataset is organized as follows:
blobs.tar.zst: a Zst-compressed tarball containing deduplicated license blobs, one per file. The tarball contains 6’859’189 blobs, for a total uncompressed size on disk of 66 GiB.
The blobs are organized in a sharded directory structure that contains files named like blobs/86/24/8624bcdae55baeef00cd11d5dfcfa60f68710a02, where:
blobs/ is the root directory containing all license blobs
8624bcdae55baeef00cd11d5dfcfa60f68710a02 is the SHA1 checksum of a specific license blob, a copy of the GPL3 license in this case. Each license blob is ultimately named with its SHA1:
$ head -n 3 blobs/86/24/8624bcdae55baeef00cd11d5dfcfa60f68710a02
GNU GENERAL PUBLIC LICENSE
Version 3, 29 June 2007
$ sha1sum blobs/86/24/8624bcdae55baeef00cd11d5dfcfa60f68710a02
8624bcdae55baeef00cd11d5dfcfa60f68710a02  blobs/86/24/8624bcdae55baeef00cd11d5dfcfa60f68710a02
86 and 24 are, respectively, the first and second group of two hex digits in the blob SHA1
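Given the sharding scheme just described, the path of any blob inside the tarball can be derived from its content; the following is a minimal sketch of that mapping.
import hashlib

# Compute the blobs/XX/YY/SHA1 path of a license blob from its raw content,
# following the sharded layout described above.
def blob_path(content: bytes) -> str:
    sha1 = hashlib.sha1(content).hexdigest()
    return f"blobs/{sha1[0:2]}/{sha1[2:4]}/{sha1}"

print(blob_path(b"example license text\n"))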
One blob is missing because its size (313 MB) prevented its inclusion (it was originally a tarball containing source code):
swh:1:cnt:61bf63793c2ee178733b39f8456a796b72dc8bde,1340d4e2da173c92d432026ecdc54b4859fe9911,"AUTHORS"
blobs-sample20k.tar.zst: analogous to blobs.tar.zst, but containing “only” 20’000 randomly selected license blobs
license-blobs.csv.zst a Zst-compressed CSV index of all the blobs in the dataset. Each line in the index (except the first one, which contains column headers) describes a license blob and is in the format SWHID,SHA1,NAME, for example:
swh:1:cnt:94a9ed024d3859793618152ea559a168bbcbb5e2,8624bcdae55baeef00cd11d5dfcfa60f68710a02,"COPYING"
swh:1:cnt:94a9ed024d3859793618152ea559a168bbcbb5e2,8624bcdae55baeef00cd11d5dfcfa60f68710a02,"COPYING.GPL3"
swh:1:cnt:94a9ed024d3859793618152ea559a168bbcbb5e2,8624bcdae55baeef00cd11d5dfcfa60f68710a02,"COPYING.GLP-3"
where:
SWHID: the Software Heritage persistent identifier of the blob. It can be used to retrieve and cross-reference the license blob via the Software Heritage archive, e.g., at: https://archive.softwareheritage.org/swh:1:cnt:94a9ed024d3859793618152ea559a168bbcbb5e2
SHA1: the blob SHA1, that can be used to cross-reference blobs in the blobs/ directory
NAME: a file name given to the license blob in a given software origin. As the same license blob can have different names in different contexts, the index contains multiple entries for the same blob with different names, as is the case in the example above (yes, one of those has a typo in it, but it’s an original typo from some repository!).
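The index can be streamed without decompressing it to disk; the snippet below is a minimal sketch assuming the python-zstandard package is available.
import csv
import io
import zstandard

# Stream license-blobs.csv.zst and print blobs whose recorded name is COPYING.GPL3.
with open("license-blobs.csv.zst", "rb") as fh:
    reader = zstandard.ZstdDecompressor().stream_reader(fh)
    text = io.TextIOWrapper(reader, encoding="utf-8")
    rows = csv.reader(text)
    header = next(rows)  # first line holds the column headers: SWHID,SHA1,NAME
    for swhid, sha1, name in rows:
        if name == "COPYING.GPL3":
            print(swhid, sha1)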
blobs-fileinfo.csv.zst a Zst-compressed CSV mapping from blobs to basic file information in the format: SHA1,MIME_TYPE,ENCODING,LINE_COUNT,WORD_COUNT,SIZE, where:
SHA1: blob SHA1
MIME_TYPE: blob MIME type, as detected by libmagic
ENCODING: blob character encoding, as detected by libmagic
LINE_COUNT: number of lines in the blob (only for textual blobs with UTF8 encoding)
WORD_COUNT: number of words in the blob (only for textual blobs with UTF8 encoding)
SIZE: blob size in bytes
blobs-scancode.csv.zst a Zst-compressed CSV mapping from blobs to the software licenses detected in them by ScanCode, in the format: SHA1,LICENSE,SCORE, where:
SHA1: blob SHA1
LICENSE: license detected in the blob, as an SPDX identifier (or ScanCode identifier for non-SPDX-indexed licenses)
SCORE: confidence score in the result, as a decimal number between 0 and 100
There may be zero or arbitrarily many lines for each blob.
blobs-scancode.ndjson.zst a Zst-compressed line-delimited JSON, containing a superset of the information in blobs-scancode.csv.zst. Each line is a JSON dictionary with three keys:
sha1: blob SHA1
licenses: output of scancode.api.get_licenses(..., min_score=0)
copyrights: output of scancode.api.get_copyrights(...)
There is exactly one line for each blob. licenses and copyrights keys are omitted for files not detected as plain text.
blobs-origins.csv.zst a Zst-compressed CSV mapping of where license blobs come from. Each line in the index associates a license blob with one of its origins, in the format SWHID URL, for example:
swh:1:cnt:94a9ed024d3859793618152ea559a168bbcbb5e2 https://github.com/pombreda/Artemis
Note that a license blob can come from many different places; only an arbitrary (and somewhat random) one is listed in this mapping.
If no origin URL is found in the Software Heritage archive, then a blank is used instead. This happens when the corresponding origins were either still being loaded when the dataset was generated, or the loader process crashed before completing the blob’s origin ingestion.
blobs-nb-origins.csv.zst a Zst-compressed CSV mapping of how many origins of each blob are known to Software Heritage. Each line in the index associates a license blob with this count, in the format SWHID NUMBER, for example:
swh:1:cnt:94a9ed024d3859793618152ea559a168bbcbb5e2 2822260
Two blobs are missing because the computation crashed:
swh:1:cnt:e69de29bb2d1d6434b8b29ae775ad8c2e48c5391
swh:1:cnt:8b137891791fe96927ad78e64b0aad7bded08bdc
This issue will be fixed in a future version of the dataset.
blobs-earliest.csv.zst a Zst-compressed CSV mapping from blobs to information about their (earliest) known occurrence(s) in the archive. Format: SWHID EARLIEST_SWHID EARLIEST_TS OCCURRENCES, where:
SWHID: blob SWHID
EARLIEST_SWHID: SWHID of the earliest known commit containing the blob
EARLIEST_TS: timestamp of the earliest known commit containing the blob, as a Unix time integer
OCCURRENCES: number of known commits containing the blob
replication-package.tar.gz: code and scripts used to produce the dataset
licenses-annotated-sample.tar.gz: ground truth, i.e., manually annotated random sample of license blobs, with details about the kind of information they contain.
Changes since the 2021-03-23 dataset
More input data, due to the SWH archive growing: more origins in supported forges and package managers; and support for more forges and package managers. See the SWH Archive Changelog for details.
Values in the NAME column of license-blobs.csv.zst are quoted, as some file names now contain commas.
Replication package now contains all the steps needed to reproduce all artefacts including the licenseblobs/fetch.py script.
blobs-nb-origins.csv.zst is added.
blobs-origins.csv.zst is now generated using the first origin returned by swh-graph’s leaves endpoint, instead of its randomwalk endpoint. This should have no impact on the result, other than a different distribution of “random” origins being picked.
blobs-origins.csv.zst was missing ~10% of its results in previous versions of the dataset, due to errors and/or timeouts in its generation; this is now down to 0.02% (1254 of the 6859445 unique blobs). Blobs with no known origins are now present, with a blank instead of a URL.
blobs-earliest.csv.zst was missing ~10% of its results in previous versions of the dataset. It is complete now.
blobs-scancode.csv.zst is generated with a newer scancode-toolkit version (31.2.1)
blobs-scancode.ndjson.zst is added.
Errata
A file name .tmp_1340d4e2da173c92d432026ecdc54b4859fe9911 was present in the initial version of the dataset (published on 2022-11-07). It was removed on 2022-11-09 using these two commands:
pv blobs-fileinfo.csv.zst | zstdcat | grep -v ".tmp" | zstd -19
pv blobs.tar.zst | zstdcat | tar --delete blobs/13/40/.tmp_1340d4e2da173c92d432026ecdc54b4859fe9911 | zstd -19 -T12
The total uncompressed size was announced as 84 GiB based on the physical size on ext4, but it is actually 66 GiB.
Citation
If you use this dataset for research purposes, please acknowledge its use by citing one or both of the following papers:
[pdf, bib] Jesús M. González-Barahona, Sergio Raúl Montes León, Gregorio Robles, Stefano Zacchiroli. The software heritage license dataset (2022 edition). Empirical Software Engineering, Volume 28, Number 6, Article number 147 (2023).
[pdf, bib] Stefano Zacchiroli. A Large-scale Dataset of (Open Source) License Text Variants. In Proceedings of the 2022 Mining Software Repositories Conference (MSR 2022), 23-24 May 2022, Pittsburgh, Pennsylvania, United States. ACM 2022.
References
The dataset has been built using primarily the data sources described in the following papers:
[pdf, bib] Roberto Di Cosmo, Stefano Zacchiroli. Software Heritage: Why and How to Preserve Software Source Code. In Proceedings of iPRES 2017: 14th International Conference on Digital Preservation, Kyoto, Japan, 25-29 September 2017.
[pdf, bib] Antoine Pietri, Diomidis Spinellis, Stefano Zacchiroli. The Software Heritage Graph Dataset: Public software development under one roof. In Proceedings of MSR 2019: The 16th International Conference on Mining Software Repositories, May 2019, Montreal, Canada. Pages 138-142, IEEE 2019.
Errata (v2, 2024-01-09)
licenses-annotated-sample.tar.gz: some comments not intended for publication were removed, and 4
WARNING: This is a pre-release dataset and its field names and data structures are subject to change. It should be considered pre-release until the end of 2024. Expected changes:
- Metadata is missing or incomplete for some layers at this time and will be continuously improved.
- We expect to update this layer roughly in line with CDTFA at some point, but will increase the update cadence over time as we are able to automate the final pieces of the process.
This dataset is continuously updated as the source data from CDTFA is updated, as often as many times a month. If you require unchanging point-in-time data, export a copy for your own use rather than using the service directly in your applications.
Purpose
County and incorporated place (city) boundaries along with third party identifiers used to join in external data. Boundaries are from the authoritative source, the California Department of Tax and Fee Administration (CDTFA), altered to show the counties as one polygon. This layer displays the city polygons on top of the county polygons so the area isn't interrupted. The GEOID attribute information is added from the US Census. GEOID is based on merged State and County FIPS codes for the counties. Abbreviations for counties and cities were added from Caltrans Division of Local Assistance (DLA) data. Place Type was populated with information extracted from the Census. Names and IDs from the US Board on Geographic Names (BGN), the authoritative source of place names as published in the Geographic Name Information System (GNIS), are attached as well. Finally, the coastline is used to separate coastal buffers from the land-based portions of jurisdictions. This feature layer is for public use.
Related Layers
This dataset is part of a grouping of many datasets:
- Cities: Only the city boundaries and attributes, without any unincorporated areas
  - With Coastal Buffers
  - Without Coastal Buffers
- Counties: Full county boundaries and attributes, including all cities within as a single polygon
  - With Coastal Buffers
  - Without Coastal Buffers
- Cities and Full Counties: A merge of the other two layers, so polygons overlap within city boundaries. Some customers require this behavior, so we provide it as a separate service.
  - With Coastal Buffers (this dataset)
  - Without Coastal Buffers
- Place Abbreviations
- Unincorporated Areas (Coming Soon)
- Census Designated Places (Coming Soon)
- Cartographic Coastline
  - Polygon
  - Line source (Coming Soon)
Working with Coastal Buffers
The dataset you are currently viewing includes the coastal buffers for cities and counties that have them in the authoritative source data from CDTFA. In the versions where they are included, they remain as a second polygon on cities or counties that have them, with all the same identifiers, and a value in the COASTAL field indicating whether it's an ocean or a bay buffer. If you wish to have a single polygon per jurisdiction that includes the coastal buffers, you can run a Dissolve on the version that has the coastal buffers, using all the fields except COASTAL, Area_SqMi, Shape_Area, and Shape_Length, to get a version with the correct identifiers (a worked sketch follows at the end of this description).
Point of Contact
California Department of Technology, Office of Digital Services, odsdataservices@state.ca.gov
Field and Abbreviation Definitions
- COPRI: county number followed by the 3-digit city primary number used in the Board of Equalization's 6-digit tax rate area numbering system
- Place Name: CDTFA incorporated (city) or county name
- County: CDTFA county name. For counties, this will be the name of the polygon itself. For cities, it is the name of the county the city polygon is within.
- Legal Place Name: Board on Geographic Names authorized nomenclature for area names published in the Geographic Name Information System
- GNIS_ID: The numeric identifier from the Board on Geographic Names that can be used to join these boundaries to other datasets utilizing this identifier.
- GEOID: numeric geographic identifiers from the US Census Bureau
- Place Type: Board on Geographic Names authorized nomenclature for boundary type published in the Geographic Name Information System
- Place Abbr: CalTrans Division of Local Assistance abbreviations of incorporated area names
- CNTY Abbr: CalTrans Division of Local Assistance abbreviations of county names
- Area_SqMi: The area of the administrative unit (city or county) in square miles, calculated in EPSG 3310 California Teale Albers.
- COASTAL: Indicates if the polygon is a coastal buffer. Null for land polygons. Additional values include "ocean" and "bay".
- GlobalID: While all of the layers we provide in this dataset include a GlobalID field with unique values, we do not recommend you make any use of it. The GlobalID field exists to support offline sync, but is not persistent, so data keyed to it will be orphaned at our next update. Use one of the other persistent identifiers, such as GNIS_ID or GEOID, instead.
Accuracy
CDTFA's source data notes the following about accuracy: City boundary changes and county boundary line adjustments filed with the Board of Equalization per Government Code 54900. This GIS layer contains the boundaries of the unincorporated county and incorporated cities within the state of California. The initial dataset was created in March of 2015 and was based on the State Board of Equalization tax rate area boundaries. As of April 1, 2024, the maintenance of this dataset is provided by the California Department of Tax and Fee Administration for the purpose of determining sales and use tax rates. The boundaries are continuously being revised to align with aerial imagery when areas of conflict are discovered between the original boundary provided by the California State Board of Equalization and the boundary made publicly available by local, state, and federal government. Some differences may occur between actual recorded boundaries and the boundaries used for sales and use tax purposes. The boundaries in this map are representations of taxing jurisdictions for the purpose of determining sales and use tax rates and should not be used to determine precise city or county boundary line locations. COUNTY = county name; CITY = city name or unincorporated territory; COPRI = county number followed by the 3-digit city primary number used in the California State Board of Equalization's 6-digit tax rate area numbering system (for the purpose of this map, unincorporated areas are assigned 000 to indicate that the area is not within a city).
Boundary Processing
These data make a structural change from the source data. While the full boundaries provided by CDTFA include coastal buffers of varying sizes, many users need boundaries to end at the shoreline of the ocean or a bay. As a result, after examining existing city and county boundary layers, these datasets provide a coastline cut generally along the ocean-facing coastline. For county boundaries in northern California, the cut runs near the Golden Gate Bridge, while for cities, we cut along the bay shoreline and into the edge of the Delta at the boundaries of Solano, Contra Costa, and Sacramento counties. In the services linked above, the versions that include the coastal buffers contain them as a second (or third) polygon for the city or county, with the value in the COASTAL field set to whether it's a bay or ocean polygon. These can be processed back into a single polygon by dissolving on all the fields you wish to keep, since the attributes, other than the COASTAL field and geometry attributes (like areas), remain the same between the polygons for this purpose.
Slivers
In cases where a city or county's boundary ends near a coastline, our coastline data may cross back and forth many times while roughly paralleling the jurisdiction's boundary, resulting in many polygon slivers. We post-process the data to remove these slivers using a city/county boundary priority algorithm. That is, when the data run parallel to each other, we discard the coastline cut and keep the CDTFA-provided boundary, even if it extends into the ocean a small amount. This processing supports consistent boundaries for Fort Bragg, Point Arena, San Francisco, Pacifica, Half Moon Bay, and Capitola, in addition to others. More information on this algorithm will be provided soon.
Coastline Caveats
Some cities have buffers extending into water bodies that we do not cut at the shoreline. These include South Lake Tahoe and Folsom, which extend into neighboring lakes, and San Diego and surrounding cities that extend into San Diego Bay, which our shoreline encloses. If you have feedback on the exclusion of these items, or others, from the shoreline cuts, please reach out using the contact information above.
Offline Use
This service is fully enabled for sync and export using Esri Field Maps or other similar tools. Importantly, the GlobalID field exists only to support that use case and should not be used for any other purpose (see note in field descriptions).
Updates and Date of Processing
Concurrent with CDTFA updates, approximately every two weeks. Last Processed: 12/17/2024 by Nick Santos using code path at https://github.com/CDT-ODS-DevSecOps/cdt-ods-gis-city-county/ at commit 0bf269d24464c14c9cf4f7dea876aa562984db63. It incorporates updates from CDTFA as of 12/12/2024. Future updates will include improvements to metadata and update frequency.
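The dissolve described under "Working with Coastal Buffers" above can be reproduced with geopandas; the sketch below is illustrative only, and the layer file name is a placeholder.
import geopandas as gpd

# Load the version that includes coastal buffers (file name is a placeholder).
gdf = gpd.read_file("cities_and_counties_with_coastal_buffers.geojson")

# Dissolve on every field except COASTAL and the geometry-derived fields,
# so each jurisdiction becomes a single polygon with its identifiers intact.
drop_fields = {"COASTAL", "Area_SqMi", "Shape_Area", "Shape_Length"}
key_fields = [c for c in gdf.columns if c not in drop_fields and c != "geometry"]
dissolved = gdf.dissolve(by=key_fields, as_index=False)
print(len(gdf), "->", len(dissolved), "polygons")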
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
This collection contains the datasets created as part of a master's thesis. The collection consists of two datasets in two forms, as well as the corresponding entity descriptions for each of the datasets.
The experiment_doc_labels_clean documents contain the data used for the experiments. The JSON file consists of a list of JSON objects with the following fields:
- id: Document id
- ner_tags: List of IOB tags indicating mention boundaries based on the majority label assigned using crowdsourcing.
- el_tags: List of entity ids based on the majority label assigned using crowdsourcing.
- all_ner_tags: List of lists of IOB tags assigned by each of the users.
- all_el_tags: List of lists of entity IDs assigned by each of the users annotating the data.
- tokens: List of tokens from the text.
The experiment_doc_labels_clean-U.tsv file contains the dataset used for the experiments, but in a format similar to the CoNLL-U format. The first line for each document contains the document ID. The documents are separated by a blank line. Each word in a document is on its own line, consisting of the word, the IOB tag and the entity id, separated by tabs.
While the experiments were being completed, the annotation system was left open until all the documents had been annotated by three users. This resulted in the all_docs_complete_labels_clean.json and all_docs_complete_labels_clean-U.tsv datasets, which take the same form as experiment_doc_labels_clean.json and experiment_doc_labels_clean-U.tsv.
Each of the documents described above contains an entity id. The IDs match the entities stored in the entity_descriptions CSV files. Each row in these files corresponds to a mention for an entity and takes the form:
{ID}${Mention}${Context}[N]
Three sets of entity descriptions are available:
1. entity_descriptions_experiments.csv: This file contains all the mentions from the subset of the data used for the experiments as described above. However, the data has not been cleaned, so there are multiple entity IDs which actually refer to the same entity.
2. entity_descriptions_experiments_clean.csv: These entities also cover the data used for the experiments; however, duplicate entities have been merged. These entities correspond to the labels for the documents in the experiment_doc_labels_clean files.
3. entity_descriptions_all.csv: The entities in this file correspond to the data in the all_docs_complete_labels_clean files. Please note that the entities have not been cleaned, so there may be duplicate or incorrect entities.
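Given the $-separated row format described above, the entity description files can be read with a few lines of code; this is a minimal sketch assuming exactly three $-separated fields per row.
entities = {}
with open("entity_descriptions_experiments_clean.csv", encoding="utf-8") as fh:
    for line in fh:
        line = line.rstrip("\n")
        if not line:
            continue
        # Rows take the form {ID}${Mention}${Context}; keep at most three parts.
        entity_id, mention, context = line.split("$", 2)
        entities.setdefault(entity_id, []).append((mention, context))
print(len(entities), "unique entity IDs")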
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
A collection of datasets and Python scripts for extraction and analysis of isograms (and some palindromes and tautonyms) from corpus-based word-lists, specifically Google Ngram and the British National Corpus (BNC). Below follows a brief description, first, of the included datasets and, second, of the included scripts.
1. Datasets
The data from English Google Ngrams and the BNC is available in two formats: as a plain text CSV file and as a SQLite3 database.
1.1 CSV format
The CSV files for each dataset actually come in two parts: one labelled ".csv" and one ".totals". The ".csv" file contains the actual extracted data, and the ".totals" file contains some basic summary statistics about the ".csv" dataset with the same name.
The CSV files contain one row per data point, with the columns separated by a single tab stop. There are no labels at the top of the files. Each line has the following columns, in this order (the labels below are what I use in the database, which has an identical structure; see the section below):
Label Data type Description
isogramy int The order of isogramy, e.g. "2" is a second order isogram
length int The length of the word in letters
word text The actual word/isogram in ASCII
source_pos text The Part of Speech tag from the original corpus
count int Token count (total number of occurrences)
vol_count int Volume count (number of different sources which contain the word)
count_per_million int Token count per million words
vol_count_as_percent int Volume count as percentage of the total number of volumes
is_palindrome bool Whether the word is a palindrome (1) or not (0)
is_tautonym bool Whether the word is a tautonym (1) or not (0)
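As an illustration, the tab-separated ".csv" files can be loaded using the column order just listed; this is a minimal sketch, using the "ngrams-isograms.csv" file name referred to in the database-creation instructions below.
import csv

# Column order of the tab-separated ".csv" output, as listed above.
COLUMNS = ["isogramy", "length", "word", "source_pos", "count",
           "vol_count", "count_per_million", "vol_count_as_percent",
           "is_palindrome", "is_tautonym"]

# Load the Ngrams isogram list and print second-order (or higher) isograms.
with open("ngrams-isograms.csv", newline="", encoding="utf-8") as fh:
    for row in csv.DictReader(fh, fieldnames=COLUMNS, delimiter="\t"):
        if int(row["isogramy"]) >= 2:
            print(row["word"], row["count"])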
The ".totals" files have a slightly different format, with one row per data point, where the first column is the label and the second column is the associated value. The ".totals" files contain the following data:
Label Data type Description
!total_1grams int The total number of words in the corpus
!total_volumes int The total number of volumes (individual sources) in the corpus
!total_isograms int The total number of isograms found in the corpus (before compacting)
!total_palindromes int How many of the isograms found are palindromes
!total_tautonyms int How many of the isograms found are tautonyms
The CSV files are mainly useful for further automated data processing. For working with the data set directly (e.g. to do statistics or cross-check entries), I would recommend using the database format described below.
1.2 SQLite database format
On the other hand, the SQLite database combines the data from all four of the plain text files, and adds various useful combinations of the two datasets, namely:
• Compacted versions of each dataset, where identical headwords are combined into a single entry.
• A combined compacted dataset, combining and compacting the data from both Ngrams and the BNC.
• An intersected dataset, which contains only those words which are found in both the Ngrams and the BNC dataset.
The intersected dataset is by far the least noisy, but is missing some real isograms, too. The columns/layout of each of the tables in the database is identical to that described for the CSV/.totals files above. To get an idea of the various ways the database can be queried for various bits of data, see the R script described below, which computes statistics based on the SQLite database (a small example query also follows at the end of this description).
2. Scripts
There are three scripts: one for tidying Ngram and BNC word lists and extracting isograms, one to create a neat SQLite database from the output, and one to compute some basic statistics from the data. The first script can be run using Python 3, the second script can be run using SQLite 3 from the command line, and the third script can be run in R/RStudio (R version 3).
2.1 Source data
The scripts were written to work with word lists from Google Ngram and the BNC, which can be obtained from http://storage.googleapis.com/books/ngrams/books/datasetsv2.html and https://www.kilgarriff.co.uk/bnc-readme.html (download all.al.gz). For Ngram the script expects the path to the directory containing the various files, for BNC the direct path to the *.gz file.
2.2 Data preparation
Before processing proper, the word lists need to be tidied to exclude superfluous material and some of the most obvious noise. This will also bring them into a uniform format. Tidying and reformatting can be done by running one of the following commands:
python isograms.py --ngrams --indir=INDIR --outfile=OUTFILE
python isograms.py --bnc --indir=INFILE --outfile=OUTFILE
Replace INDIR/INFILE with the input directory or filename and OUTFILE with the filename for the tidied and reformatted output.
2.3 Isogram extraction
After preparing the data as above, isograms can be extracted by running the following command on the reformatted and tidied files:
python isograms.py --batch --infile=INFILE --outfile=OUTFILE
Here INFILE should refer to the output from the previous data cleaning process. Please note that the script will actually write two output files, one named OUTFILE with a word list of all the isograms and their associated frequency data, and one named "OUTFILE.totals" with very basic summary statistics.
2.4 Creating a SQLite3 database
The output data from the above step can be easily collated into a SQLite3 database which allows for easy querying of the data directly for specific properties. The database can be created by following these steps:
1. Make sure the files with the Ngrams and BNC data are named “ngrams-isograms.csv” and “bnc-isograms.csv” respectively. (The script assumes you have both of them; if you only want to load one, just create an empty file for the other one.)
2. Copy the “create-database.sql” script into the same directory as the two data files.
3. On the command line, go to the directory where the files and the SQL script are.
4. Type: sqlite3 isograms.db
5. This will create a database called “isograms.db”.
See section 1 for a basic description of the output data and how to work with the database.
2.5 Statistical processing
The repository includes an R script (R version 3) named “statistics.r” that computes a number of statistics about the distribution of isograms by length, frequency, contextual diversity, etc. This can be used as a starting point for running your own stats. It uses RSQLite to access the SQLite database version of the data described above.
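As a small example of querying the resulting database from Python: the sketch below assumes a table named 'ngrams_isograms' with the column layout described in section 1; the actual table names are defined in create-database.sql and may differ.
import sqlite3

# Query the database for frequent palindromic isograms of length 5 or more.
# The table name 'ngrams_isograms' is a placeholder/assumption.
con = sqlite3.connect("isograms.db")
rows = con.execute(
    "SELECT word, length, count FROM ngrams_isograms "
    "WHERE is_palindrome = 1 AND length >= 5 "
    "ORDER BY count DESC LIMIT 10"
).fetchall()
for word, length, count in rows:
    print(word, length, count)
con.close()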
CC0 1.0 Universal Public Domain Dedication: https://creativecommons.org/publicdomain/zero/1.0/
License information was derived automatically
WARNING: This dataset contains errors - please refer to the new version instead.
This dataset provides a correspondence between ISNI and Ringgold identifiers, by combining two datasets:
The alignment between the two datasets was performed by exact matching on tuples of (name, city, region, country). This very conservative matching succeeds for about 75% of the 400 000 institutions covered; this high matching rate can be explained by the fact that both datasets come from the same database. The matching does not always succeed, probably because of version differences between the two datasets.
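To illustrate the exact-matching strategy, the sketch below performs an inner join on the (name, city, region, country) tuple with pandas; the file names and column names are hypothetical, since the actual structure is documented separately below.
import pandas as pd

# Illustrative only: file and column names are placeholders.
isni = pd.read_csv("isni_institutions.csv")          # e.g. isni, name, city, region, country
ringgold = pd.read_csv("ringgold_institutions.csv")  # e.g. ringgold_id, name, city, region, country

# Exact matching on the full tuple, mirroring the conservative approach above.
key = ["name", "city", "region", "country"]
matched = isni.merge(ringgold, on=key, how="inner")
print(f"matched {len(matched)} of {len(isni)} ISNI records")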
Structure of the dataset
License
Quoting Ringgold: "The use of the ISNI data contained herein is completely open and you may utilise and share the ISNI identifiers as you see fit."
Quoting ORCID: "Per our agreement with Ringgold, we are allowed to share the Ringgold identifiers and limited metadata (organization name, location) under CC0 license, just as the rest of ORCID data are available. We would not be using Ringgold otherwise. If someone gets a Ringgold ID out of ORCID, they are free to use it."
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Wind Spacecraft:
The Wind spacecraft (https://wind.nasa.gov) was launched on November 1, 1994 and currently orbits the first Lagrange point between the Earth and the Sun. A comprehensive review can be found in Wilson et al. [2021]. It holds a suite of instruments from gamma ray detectors to quasi-static magnetic field instruments, Bo. The instruments used for this data product are the fluxgate magnetometer (MFI) [Lepping et al., 1995] and the radio receivers (WAVES) [Bougeret et al., 1995]. The MFI measures 3-vector Bo at ~11 samples per second (sps); WAVES observes electromagnetic radiation from ~4 kHz to >12 MHz, which provides an observation of the upper hybrid line (also called the plasma line) used to define the total electron density, and also takes time series snapshot/waveform captures of electric and magnetic field fluctuations, called TDS bursts herein.
WAVES Instrument:
The WAVES experiment [Bougeret et al., 1995] on the Wind spacecraft is composed of three orthogonal electric field antennas and three orthogonal search coil magnetometers. The electric fields are measured through five different receivers: Low Frequency FFT receiver called FFT (0.3 Hz to 11 kHz), Thermal Noise Receiver called TNR (4-256 kHz), Radio receiver band 1 called RAD1 (20-1040 kHz), Radio receiver band 2 called RAD2 (1.075-13.825 MHz), and the Time Domain Sampler (TDS). The electric field antennas are dipoles, with two orthogonal antennas in the spin plane and one spin-axis stacer antenna.
The TDS receiver allows one to examine the electromagnetic waves observed by Wind as time series waveform captures. There are two modes of operation, TDS Fast (TDSF) and TDS Slow (TDSS). TDSF returns 2048 data points for two channels of the electric field, typically Ex and Ey (i.e. spin plane components), with little to no gain below ~120 Hz (the data herein has been high pass filtered above ~150 Hz for this reason). TDSS returns four channels with three electric(magnetic) field components and one magnetic(electric) component. The search coils show a gain roll off ~3.3 Hz [e.g., see Wilson et al., 2010; Wilson et al., 2012; Wilson et al., 2013 and references therein for more details].
The original calibration of the electric field antenna found that the effective antenna lengths are roughly 41.1 m, 3.79 m, and 2.17 m for the X, Y, and Z antenna, respectively. The +Ex antenna was broken twice during the mission as of June 26, 2020. The first break occurred on August 3, 2000 around ~21:00 UTC and the second on September 24, 2002 around ~23:00 UTC. These breaks reduced the effective antenna length of Ex from ~41 m to 27 m after the first break and ~25 m after the second break [e.g., see Malaspina et al., 2014; Malaspina & Wilson, 2016].
TDS Bursts:
TDS bursts are waveform captures/snapshots of electric and magnetic field data. The data is triggered by the largest amplitude waves which exceed a specific threshold and are then stored in a memory buffer. The bursts are ranked according to a quality filter which mostly depends upon amplitude. Due to the age of the spacecraft and ubiquity of large amplitude electromagnetic and electrostatic waves, the memory buffer often fills up before dumping onto the magnetic tape drive. If the memory buffer is full, then the bottom ranked TDS burst is erased every time a new TDS burst is sampled. That is, the newest TDS burst sampled by the instrument is always stored and if it ranks higher than any other in the list, it will be kept. This results in the bottom ranked burst always being erased. Earlier in the mission, there were also so called honesty bursts, which were taken periodically to test whether the triggers were working properly. It was found that the TDSF triggered properly, but not the TDSS. So the TDSS was set to trigger off of the Ex signals.
A TDS burst from the Wind/WAVES instrument is always 2048 time steps for each channel. The sample rate for TDSF bursts ranges from 1875 samples/second (sps) to 120,000 sps. Every TDS burst is marked with a unique set of numbers (unique on any given date) to help distinguish it from others and to ensure any set of channels are appropriately connected to each other. For instance, during one spacecraft downlink interval there may be 95% of the TDS bursts with a complete set of channels (i.e., TDSF has two channels, TDSS has four) while the remaining 5% can be missing channels (just example numbers, not quantitatively accurate). During another downlink interval, those missing channels may be returned if they are not overwritten. During every downlink, the flight operations team at NASA Goddard Space Flight Center (GSFC) generates level zero binary files from the raw telemetry data. Those files are filled with data received on that date and the file name is labeled with that date. There is no attempt to sort the data chronologically within, so any given level zero file can have data from multiple dates within. Thus, it is often necessary to load upwards of five days of level zero files to find as many full channel sets as possible. The remaining unmatched channel sets comprise a much smaller fraction of the total.
All data provided here are from TDSF, so only two channels. Most of the time channel 1 will be associated with the Ex antenna and channel 2 with the Ey antenna. The data are provided in the spinning instrument coordinate basis with associated angles necessary to rotate into a physically meaningful basis (e.g., GSE).
TDS Time Stamps:
Each TDS burst is tagged with a time stamp called a spacecraft event time or SCET. The TDS datation time is sampled after the burst is acquired, which requires a delay buffer. The datation time requires two corrections. The first correction arises from tagging the TDS datation with an associated spacecraft major frame in housekeeping (HK) data. The second correction removes the delay buffer duration. Both inaccuracies are essentially artifacts of on-ground derived values in the archives created by the WINDlib software (K. Goetz, Personal Communication, 2008) found at https://github.com/lynnbwilsoniii/Wind_Decom_Code.
The WAVES instrument's HK mode sends relevant low rate science back to ground once every spacecraft major frame. If multiple TDS bursts occur in the same major frame, it is possible for the WINDlib software to assign them the same SCETs. The reason is that this top-level SCET is only accurate to within +300 ms (in 120,000 sps mode) due to the issues described above (at lower sample rates, the error can be slightly larger). The time stamp uncertainty is a positive definite value because it results from digitization rounding errors. One can correct these issues to within +10 ms if using the proper HK data.
*** The data stored here have not corrected the SCETs! ***
The 300 ms uncertainty, due to the HK corrections mentioned above, results from WINDlib trying to recreate the time stamp after it has been telemetered back to ground. If a burst stays in the TDS buffer for extended periods of time (i.e., >2 days), the interpolation done by WINDlib can make mistakes in the 11th significant digit. The positive definite nature of this uncertainty is due to rounding errors associated with the onboard DPU (digital processing unit) clock rollover. The DPU clock is a 24 bit integer clock sampling at ∼50,018.8 Hz. The clock rolls over at ∼5366.691244092221 seconds, i.e., (16 × 2^24)/50,018.8. The sample rate is a temperature-sensitive issue and thus subject to change over time. From a sample of 384 different points on 14 different days, a statistical estimate of the rollover time is 5366.691124061162 ± 0.000478370049 seconds (calculated by Lynn B. Wilson III, 2008). Note that the WAVES instrument team used UR8 times, which are the number of 86,400 second days from 1982-01-01/00:00:00.000 UTC.
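The nominal rollover period quoted above can be checked with a quick arithmetic sketch:
# Nominal DPU clock rollover period, per the expression quoted above.
clock_rate_hz = 50_018.8
rollover_seconds = 16 * 2**24 / clock_rate_hz
print(rollover_seconds)  # ~5366.69 s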
The method to correct the SCETs to within +10 ms, were one to do so, is given as follows:
Retrieve the DPU clock times, SCETs, UR8 times, and DPU Major Frame Numbers from the WINDlib libraries on the VAX/ALPHA systems for the TDSS(F) data of interest.
Retrieve the same quantities from the HK data.
Match the HK event number with the same DPU Major Frame Number as the TDSS(F) burst of interest.
Find the difference in DPU clock times between the TDSS(F) burst of interest and the HK event with matching major frame number (Note: The TDSS(F) DPU clock time will always be greater than the HK DPU clock if they are the same DPU Major Frame Number and the DPU clock has not rolled over).
Convert the difference to a UR8 time and add this to the HK UR8 time. The new UR8 time is the corrected UR8 time to within +10 ms.
Find the difference between the new UR8 time and the UR8 time WINDlib associates with the TDSS(F) burst. Add the difference to the DPU clock time assigned by WINDlib to get the corrected DPU clock time (Note: watch for the DPU clock rollover).
Convert the new UR8 time to a SCET using either the IDL WINDlib libraries or TMLib (STEREO S/WAVES software) libraries of available functions. This new SCET is accurate to within +10 ms.
One can find a UR8 to UTC conversion routine at https://github.com/lynnbwilsoniii/wind_3dp_pros in the ~/LYNN_PRO/Wind_WAVES_routines/ folder.
Examples of good waveforms can be found in the notes PDF at https://wind.nasa.gov/docs/wind_waves.pdf.
Data Set Description
Each Zip file contains 300+ IDL save files; one for each day of the year with available data. This data set is not complete, as the software used to retrieve and calibrate these TDS bursts did not have sufficient error handling to handle some of the more nuanced bit errors or major frame errors in some of the level zero files. There is currently (as of June 27, 2020) an effort (by Keith Goetz et al.) to generate the entire TDSF and TDSS data set in one repository to be put on SPDF/CDAWeb as CDF files. Once that data set is available, it will supersede this one.
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
This package supplements the paper entitled "A half-century of global collaboration in science and the 'Shrinking World'" published in Quantitative Science Studies (doi: 10.1162/qss_a_00268). It contains the datasets and figures used in the original paper based on bibliometric data from a broad set of scientific publications (works), including journal articles, preprints and datasets; see the subfolder named "all_works". In addition, for reference, it also contains datasets and figures based on bibliometric data from journal articles only; see the subfolder named "journal_only". The bottom-level files in this subfolder are suffixed with '_j' for identification.
Contents and Instructions
The datasets and figures in this package are based on the data obtained via the OpenAlex API. See the original paper for details. The following file and folders are found at the next level of the subfolders named "all_works" or "journal_only".
This file contains information on the number of works ('nworks_all') produced in each of the 15 research disciplines ('discipline' and 'disc_ID'; see below) by 18 countries (Australia, Canada, China, France, Germany, India, Indonesia, Iran, Italy, Japan, Netherlands, Poland, Russia, South Korea, Spain, Switzerland, UK and US) ('country' and 'country_code') from 1970 to 2021 ('year'), the number of international collaborative works among them ('nworks_intl'), and the international collaboration rate ('intlrate') calculated from the ratio of the two. The 15 disciplines are Artificial Intelligence ('disc_ID' = 1; 'ai'), Quantum Science (2; 'quantum'), Biotechnology (3; 'bio'), Nanotechnology (4; 'nano'), Agricultural Engineering (5; 'agri'), Particle Physics (6; 'particle'), Aerospace Engineering (7; 'aerospace'), Nuclear Engineering (8; 'nuclear'), Marine Engineering (9; 'marine'), Neuroscience (10; 'neuro'), Condensed Matter Physics (11; 'condensed'), Environmental Engineering (12; 'envi'), Earth Science (13; 'earth'), Astronomy (14; 'astro') and Pure Mathematics (15; 'math'). See the original paper for the definitions of these disciplines. The figures contained in the folders '[line]_nworks' and '[line]_intlrate' are based on this dataset.
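As an illustration of how this summary table can be used: the sketch below recomputes and aggregates the international collaboration rate, assuming the table has been loaded into a pandas DataFrame with the columns named above; the file name used here is a placeholder.
import pandas as pd

# Load the summary table (file name is a placeholder) and recompute the
# international collaboration rate described above.
df = pd.read_excel("summary_table.xlsx")
df["intlrate_check"] = df["nworks_intl"] / df["nworks_all"]

# Aggregate by discipline and year across the 18 countries.
by_discipline_year = (
    df.groupby(["discipline", "year"])[["nworks_all", "nworks_intl"]]
      .sum()
      .assign(intlrate=lambda t: t["nworks_intl"] / t["nworks_all"])
)
print(by_discipline_year.head())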
This folder contains line plots (.pdf/.png) representing the trends in the number of works by discipline and country, corresponding to the left-hand side diagrams of Fig. 1 and Suppl. Fig. S2 in the v1 preprint.
This folder contains line plots (.pdf/.png) representing the trends in the international collaboration rate by discipline and country, corresponding to the right-hand side diagrams of Fig. 1 and Suppl. Fig. S2 in the v1 preprint.
This folder contains chord diagrams (.pdf/.png) representing the bilateral collaborative relationships by discipline and period, corresponding to Fig. 2 and Suppl. Fig. S4 in the v1 preprint. The number at the end of the file name indicates the period represented by the diagram; specifically, '1' = 1971–1990, '2' = 1991–2000, '3' = 2001–2010 and '4' = 2011–2020. The raw data (.xlsx) to reproduce the contained diagrams are also provided by discipline in the accompanied 'Data' folder. The file named '[list]_nworks_(discipline name).xlsx' shows, for the top 30 countries ('country' and 'country_code') in work production during the period indicated by the sheet name, their work production ('nworks_all'), the number of international collaborative works among them ('nworks_intl'), and the international collaboration rate ('intlrate') calculated from the ratio of the two. The file named '[mat]_bilat_nworks_(discipline name)' shows the number of works produced by each country pair during the period indicated by the sheet name. Country names are abbreviated by two-letter country codes (ISO 3166-1 alpha-2).
This folder contains circularised dendrograms (.pdf/.png) representing the international research collaboration clusters by discipline and period, corresponding to Fig. 3 and Suppl. Fig. S5 in the v1 preprint. The raw data (.xlsx) to reproduce the contained diagrams are also provided by discipline in the accompanied 'Data' folder. The file named '[mat]_bilat_dist_(discipline name)' shows the distance between each country pair for the period indicated by the sheet name, calculated based on the formula presented in the original paper. Country names are abbreviated by two-letter country codes (ISO 3166-1 alpha-2).
Attribution-NonCommercial 4.0 (CC BY-NC 4.0): https://creativecommons.org/licenses/by-nc/4.0/
License information was derived automatically
Within the central repository, there are subfolders for the different categories. Each subfolder contains both images and their corresponding transcriptions, saved as .txt files. For example, the folder 'summary-based-0001-0055' contains the 55 handwritten document images for the summary task, numbered 0001 to 0055 within this category. In the transcription files, any crossed-out content is denoted by the '#' symbol, making it easy to identify files with or without such modifications.
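Because cross-outs are marked with '#', transcriptions containing them can be located with a short script like the sketch below (the folder name is one of the examples above; adjust the path as needed).

from pathlib import Path

root = Path("summary-based-0001-0055")   # example subfolder named above
with_crossouts = [p.name for p in root.glob("*.txt")
                  if "#" in p.read_text(encoding="utf-8")]
print(f"{len(with_crossouts)} transcriptions contain crossed-out text")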
The repository also includes a document detailing the transcription rules used for the dataset. Following these guidelines makes it straightforward to add more images.
We incorporated contributions from more than 500 students to construct the dataset. Handwritten examination papers are a primary means of assessing student learning in academic institutions. In our experience as academics, student examination papers tend to be messy, with all kinds of insertions and corrections, and would thus be a great source of documents for investigating handwritten text recognition (HTR) in the wild. Unfortunately, student examination papers are not available due to ethical considerations, so we created an exam-like situation to collect handwritten samples from students. The corpus of the collected data is academic-based. Because handwritten academic papers usually have ruled lines, we drew lines in light colors on white paper; each line is 1.5 pt thick and lines are spaced 40 pt apart. The filled handwritten documents were scanned at a resolution of 300 dpi with a grey-level depth of 8 bits.
The collection was carried out in four different exercises. In the first exercise, we asked participants to summarize a given text in their own words; we call this the summary-based dataset. In the second exercise, we asked participants to write an essay on a topic from a given list, or on any topic of their choice; we call this the essay-based dataset. It was collected from 250 high-school students, who were given 30 minutes to think about the topic and write.
In the third exercise, we selected participants from different subjects and asked them to write on a topic from their current study; we call this the subject-based dataset. For this exercise, we used undergraduate students from different subjects, including 33 from Mathematics, 71 from Biological Sciences, 24 from Environmental Sciences, 17 from Physics, and more than 84 from English studies.
Finally, for the class-notes dataset, we collected class notes from about 31 students on the same topic. We asked students to note down every possible sentence the speaker delivered during the lecture. After the roughly 10-minute lesson finished, we asked students to recheck their notes and compare them with their classmates'. We did not impose any time restrictions for rechecking. We observed more cross-outs and corrections in the class-notes collection than in the summary-based and academic-based collections.
In all four exercises, we did not impose any rules on the writers regarding, for example, spacing or the type of pen used. We asked them to cross out text that seemed inappropriate. Although writers usually made corrections on a second read, we also gave an extra 5 minutes for corrections.
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Introduction
UK Power Networks maintains the network at voltage levels of 132 kV and below. An important part of the distribution network is stepping voltage down as electricity moves towards the household; this is achieved using transformers. Transformers have a maximum rating for the utilisation of these assets based upon protection, overcurrent, switchgear, etc. This dataset contains the Grid Substation Transformers, also known as Bulk Supply Points, that typically step voltage down from 132 kV to 33 kV (occasionally down to 66 kV or, more rarely, 20-25 kV). These transformers can be viewed on the single line diagrams in our Long-Term Development Statements (LTDS), and the underlying data is then found in the LTDS tables.
Care is taken to protect the private affairs of companies connected to the 33 kV network, resulting in the redaction of certain transformers. Where redacted, we provide monthly statistics to continue to add value where possible. Where monthly statistics exist but half-hourly data is absent, the half-hourly data has been redacted.
This dataset provides monthly statistics for these named transformers from 2021 through to the previous month across our license areas. The data are aligned with the same naming convention as the LTDS for improved interoperability.
To find half-hourly current and power flow data for a transformer, use the 'tx_id', which can be cross-referenced in the Grid Transformers Half Hourly Dataset.
If you want to download all this data, it is perhaps more convenient from our public SharePoint: Open Data Portal Library - Grid Transformers - All Documents (sharepoint.com)
This dataset is part of a larger endeavour to share more operational data on UK Power Networks assets. Please visit our Network Operational Data Dashboard for more operational datasets.
Methodological Approach
The dataset is not derived; it consists of measurements from our network stored in our historian.
The measurements are taken from current transformers attached to the cable at the circuit breaker, and power is derived by combining these with data from voltage transformers physically attached to the busbar. The historian stores data on a report-by-exception basis, such that a certain deviation from the present value must be reached before a point measurement is logged. We extract the data using a 30-minute time-weighted averaging method to obtain half-hourly values. Where no measurements are logged in a period, the data provided is blank; because of the report-by-exception process, it may be appropriate to forward-fill this data for shorter gaps.
We developed a data redaction process to protect the privacy of companies, in accordance with the Utilities Act 2000 section 105.1.b, which requires UK Power Networks not to disclose information relating to the affairs of a business. For this reason, where the demand of a private customer is derivable from our data and that data is not already public information (e.g., data provided via Elexon on the Balancing Mechanism), we redact the half-hourly time series and provide only the monthly averages. This redaction process considers the correlation of all the data, of only the corresponding periods where the customer is active, of the first-order difference of all the data, and of the first-order difference of only the corresponding periods where the customer is active. Should any of these four tests show a high linear correlation, the data is redacted.
This process is not applied only to the customer's own circuit, but also to the surrounding circuits that would reveal the signal of that customer.
The directionality of the data is not consistent within this dataset. Where directionality was ascertainable, we arrange the power data in the direction of the LTDS "from node" to the LTDS "to node". Measurements of current do not indicate directionality and are positive regardless of direction. In some circumstances the polarity can be negative, depending on the data commissioner's decision on what the control room operators might find most helpful in ensuring reliable and secure network operation.
Quality Control Statement
The data is provided "as is". In the design and delivery process adopted by the DSO, customer feedback and guidance is considered at each phase of the project. One of the earliest steers was that raw data was preferable. This means that we do not perform prior quality-control screening on our raw network data. As a result, network rearrangements and other periods of non-intact running of the network are present throughout the dataset, which has the potential to misrepresent the true utilisation of the network, which is determined for regulatory purposes by considering only intact running arrangements. Therefore, taking the maximum or minimum across these transformers is not a reliable method of ascertaining the true utilisation. This does have the intended benefit of giving a realistic view of how the network was operated. The critical feedback was that our customers want to understand what the impact to them would have been under real operational conditions; as such, this dataset offers unique insight into that.
Assurance Statement
Creating this dataset involved a lot of manual data input. At UK Power Networks, we have different software to run the network operationally (ADMS) and to plan and study the network (PowerFactory). The measurement devices primarily inform the network operators of the real-time condition of the network, and, importantly, the network drawings visible in the LTDS follow a planning approach, which differs from the operational one. To compile this dataset, we made the union between the two modes of operating manually. A team of data scientists, data engineers, and power system engineers manually identified the LTDS transformer from the single line diagram, identified the line name from LTDS Table 2a/b, then identified the same transformer in ADMS to identify the measurement data tags. This was then manually entered into a spreadsheet. Any customers influential to that circuit were noted using ADMS and the single line diagrams. From there, a Python script performs the triage and compilation of the datasets. There is potential for human error during the manual data processing. These issues can include missing transformers, incorrectly labelled transformers, incorrectly identified measurement data tags, and incorrectly interpreted directionality. Whilst care has been taken to minimise the risk of these issues, they may persist in the provided dataset.
Any unexpected behaviour observed when using this data should be reported to allow us to correct it as quickly as possible.
Additional Information
Definitions of key terms related to this dataset can be found in the Open Data Portal Glossary. Download dataset information: Metadata (JSON). If you find this dataset useful, we would be grateful if you would submit a "reuse" case study to tell us what you did and how you used it; this helps us steer our direction and better understand how to improve our data offering in the future. Click here for more information: Open Data Portal Reuses — UK Power Networks. To view this data please register and login.
Attribution-NonCommercial 4.0 (CC BY-NC 4.0): https://creativecommons.org/licenses/by-nc/4.0/
License information was derived automatically
This is the line version of the Student Messy Handwritten Dataset (SMHD) (Nisa, Hiqmat; Thom, James; Ciesielski, Vic; Tennakoon, Ruwan (2023). Student Messy Handwritten Dataset (SMHD). RMIT University. Dataset. https://doi.org/10.25439/rmt.24312715.v1).
Within the central repository, there are subfolders for each document converted into lines. All images are in .png format. In the main folder there are three .txt files:
1) SMHD.txt contains all the line-level transcriptions in the form "image name, threshold value, label" (e.g. 0001-000,178 Bombay Phenotype :-).
2) SMHD-Cross-outsandInsertions.txt lists all the line images from the dataset that contain crossed-out or inserted text.
3) Class_Notes_SMHD.txt contains more complex cases with cross-outs, insertions and overwriting, and can be used as a test set. The images in this file are not included in SMHD.txt.
In the transcription files, any crossed-out content is denoted by the '#' symbol, making it easy to identify files with or without such modifications.
Dataset Description: We incorporated contributions from more than 500 students to construct the dataset. Handwritten examination papers are a primary means of assessing student learning in academic institutions. In our experience as academics, student examination papers tend to be messy, with all kinds of insertions and corrections, and would thus be a great source of documents for investigating HTR in the wild. Unfortunately, student examination papers are not available due to ethical considerations, so we created an exam-like situation to collect handwritten samples from students. The corpus of the collected data is academic-based. Because handwritten academic papers usually have ruled lines, we drew lines in light colors on white paper; each line is 1.5 pt thick and lines are spaced 40 pt apart. The filled handwritten documents were scanned at a resolution of 300 dpi with a grey-level depth of 8 bits.
Collection Process: The collection was carried out in four different exercises. In the first exercise, we asked participants to summarize a given text in their own words; we call this the summary-based dataset. For the summary-writing task, we included 60 undergraduate students studying the English language as a subject. After getting their consent, we distributed printed text articles and asked each student to choose one article, read it, and summarize it in a paragraph in 15 minutes. The corpus of printed articles given to the participants was collected from the Internet and covered different topics; the articles were related to current political situations, daily life activities, and the Covid-19 pandemic. In the second exercise, we asked participants to write an essay on a topic from a given list, or on any topic of their choice; we call this the essay-based dataset. It was collected from 250 high-school students, who were given 30 minutes to think about the topic and write. In the third exercise, we selected participants from different subjects and asked them to write on a topic from their current study; we call this the subject-based dataset. For this exercise, we used undergraduate students from different subjects, including 33 from Mathematics, 71 from Biological Sciences, 24 from Environmental Sciences, 17 from Physics, and more than 84 from English studies. Finally, for the class-notes dataset, we collected class notes from about 31 students on the same topic. We asked students to note down every possible sentence the speaker delivered during the lecture. After the roughly 10-minute lesson finished, we asked students to recheck their notes and compare them with their classmates'. We did not impose any time restrictions for rechecking. We observed more cross-outs and corrections in the class-notes collection than in the summary-based and academic-based collections. In all four exercises, we did not impose any rules on the writers regarding, for example, spacing or the type of pen used. We asked them to cross out text that seemed inappropriate. Although writers usually made corrections on a second read, we also gave an extra 5 minutes for corrections.
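For reference, a rough parsing sketch for SMHD.txt is shown below. It assumes each line has the form "image name,threshold label", with the label possibly containing spaces, as suggested by the example above; check the transcription-rules document before relying on this layout.

def parse_smhd_line(line: str):
    # Assumed layout: "<image name>,<threshold> <label ...>"
    name, rest = line.rstrip("\n").split(",", 1)
    threshold, label = rest.split(" ", 1) if " " in rest else (rest, "")
    return name, int(threshold) if threshold.isdigit() else threshold, label

with open("SMHD.txt", encoding="utf-8") as f:
    print(parse_smhd_line(next(f)))   # show only the first record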
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Google's AudioSet consistently reformatted
During my work with Google's AudioSet (https://research.google.com/audioset/index.html) I encountered some problems due to the fact that the Weak (https://research.google.com/audioset/download.html) and Strong (https://research.google.com/audioset/download_strong.html) versions of the dataset use different csv formatting, that the labels used in the two datasets are different (https://github.com/audioset/ontology/issues/9), and that these labels are also presented in files with different formatting. This reformatting aims to unify the formats of the two datasets so that they can be analysed in the same pipelines, and to make the dataset files compatible with the psds_eval, dcase_util and sed_eval Python packages used in audio processing. For better formatted documentation and the source code of the reformatting, refer to https://github.com/bakhtos/GoogleAudioSetReformatted
-Changes in dataset
All files are converted to tab-separated `*.tsv` files (i.e. `csv` files with `\t` as a separator). All files have a header as the first line.
-New fields and filenames
Fields are renamed according to the following table, to be compatible with psds_eval:
Old field -> New field
YTID -> filename
segment_id -> filename
start_seconds -> onset
start_time_seconds -> onset
end_seconds -> offset
end_time_seconds -> offset
positive_labels -> event_label
label -> event_label
present -> present
For class label files, `id` is now the name for the `mid` label (e.g. `/m/09xor`) and `label` for the human-readable label (e.g. `Speech`). The label index used for the Weak dataset labels (the `index` field in `class_labels_indices.csv`) is not used.
Files are renamed according to the following table to ensure consistent naming of the form `audioset_[weak|strong]_[train|eval]_[balanced|unbalanced|posneg]*.tsv`:
Old name -> New name
balanced_train_segments.csv -> audioset_weak_train_balanced.tsv
unbalanced_train_segments.csv -> audioset_weak_train_unbalanced.tsv
eval_segments.csv -> audioset_weak_eval.tsv
audioset_train_strong.tsv -> audioset_strong_train.tsv
audioset_eval_strong.tsv -> audioset_strong_eval.tsv
audioset_eval_strong_framed_posneg.tsv -> audioset_strong_eval_posneg.tsv
class_labels_indices.csv -> class_labels.tsv (merged with mid_to_display_name.tsv)
mid_to_display_name.tsv -> class_labels.tsv (merged with class_labels_indices.csv)
-Strong dataset changes
The only changes to the Strong dataset are the renaming of fields and the reordering of columns, so that both the Weak and Strong versions have `filename` and `event_label` as the first two columns.
-Weak dataset changes
-- Labels are given one per line, instead of a comma-separated and quoted list.
-- To make sure that the `filename` format is the same as in the Strong version, the following format change is made: the value of the `start_seconds` field is converted to milliseconds and appended to the `filename` with an underscore. Since all files in the dataset are assumed to be 10 seconds long, this unifies the `filename` format with the Strong version and also makes `end_seconds` redundant.
-Class labels changes
Class labels from both datasets are merged into one file and given in alphabetical order of `id`s. Since the same `id`s are present in both datasets, but sometimes with different human-readable labels, labels from the Strong dataset overwrite those from the Weak one. It is possible to regenerate `class_labels.tsv` while giving priority to the Weak version of the labels by calling `convert_labels(False)` from convert.py in the GitHub repository.
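As an illustration of the filename unification described for the Weak dataset, the sketch below converts start_seconds to milliseconds and appends it to the YTID with an underscore; the example values are made up.

def weak_filename(ytid: str, start_seconds: float) -> str:
    # start_seconds (e.g. 30.0) -> milliseconds appended to the YTID with an underscore
    return f"{ytid}_{int(round(start_seconds * 1000))}"

print(weak_filename("exampleYTID", 30.0))   # -> "exampleYTID_30000"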
-License
Google's AudioSet was published in two stages - first the weakly labelled data (Gemmeke, Jort F., et al. "Audio Set: An ontology and human-labeled dataset for audio events." 2017 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). IEEE, 2017), then the strongly labelled data (Hershey, Shawn, et al. "The benefit of temporally-strong labels in audio event classification." ICASSP 2021-2021 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). IEEE, 2021). Both the original dataset and this reworked version are licensed under [CC BY 4.0](https://creativecommons.org/licenses/by/4.0/).
Class labels come from the AudioSet Ontology, which is licensed under CC BY-SA 4.0.
CC0 1.0 Universal (Public Domain Dedication): https://creativecommons.org/publicdomain/zero/1.0/
By data.world's Admin [source]
This dataset contains vital information about insurance producers in Illinois. It includes detailed data on the last name or business name, first name, mailing address lines 1 and 2, city, state, zip code and Line of Authority (LOA). The dataset is a useful source of information for researchers interested in understanding the insurance producer industry in Illinois. With up-to-date data points covering the insurance producer market in Illinois, it can be used to make informed decisions about insurance risk coverage options within the state. The dataset is updated regularly, so you can be confident you are getting an accurate picture of today's market landscape.
This dataset contains information about insurance producers in Illinois, including their last name or business name, first name, mailing address, city, state, zip code and Line of Authority. You can use it to gain a better understanding of the insurance industry in Illinois and learn more about the population of producers.
Guidelines for using this dataset
- Inspect the data: Before starting your analysis, take some time to go over all columns included in the dataset and make sure they are understandable and relevant to your objectives.
- Clean the data: Depending on your needs, you may find it necessary to clean up and/or transform some of the data so that you can analyze it more easily. Take caution when cleaning or transforming, as any changes may affect your outcome later in the analysis; make sure that what you do translates accurately into meaningful insights rather than incorrect conclusions due to mistaken manipulation of the data set.
- Analyze: Start by looking at descriptive statistics such as aggregate values (mean/median) or frequencies (counts/percentages) for each field or combination of fields from which valid insights can be drawn (see the sketch after this list). You might then tackle deeper analytical questions based on a few hypotheses, such as correlations between two variables. Always verify assumptions with evidence from the provided data.
- Report: Prepare a summary report including any additional analysis recommendations based upon findings drawn from both the descriptive statistics and the deeper analytic work on potential correlations between variables.
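As a minimal example of the descriptive-statistics step, the sketch below counts producers per mailing city; the column and file names are taken from the file listing further down this page.

import pandas as pd

df = pd.read_csv("doi-insurance-producers-1.csv")
# Top mailing cities by number of producer records.
print(df["MAILING_CITY"].value_counts().head(10))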
- Using the insurance producer dataset and geographic data, create an interactive map of Illinois to visualize where the most insurance producers are located, with detailed county, state and city data.
- Generate a report providing insights on which insurance producers have concentrated Lines of Authority in a certain area or across multiple states, in order to identify emerging trends in insurance markets or areas in need of additional coverage options.
- Leverage AI algorithms and machine learning techniques to create a predictive model that predicts which Lines of Authority will be more successful for producers operating in certain geographical areas, based on their past performance, demographic information, etc.
If you use this dataset in your research, please credit the original authors. Data Source
License: CC0 1.0 Universal (CC0 1.0) - Public Domain Dedication No Copyright - You can copy, modify, distribute and perform the work, even for commercial purposes, all without asking permission. See Other Information.
File: doi-insurance-producers-1.csv
| Column name | Description |
|:---------------------------|:----------------------------------------------------------------|
| LAST_NAME_OR_BUSINESS_NAME | Last name or business name of the insurance producer. (String) |
| FIRST_NAME | First name of the insurance producer. (String) |
| MLG_ADDRESS1 | Mailing address line 1 of the insurance producer. (String) |
| MLG_ADDRESS2 | Mailing address line 2 of the insurance producer. (String) |
| MAILING_CITY | City of the insurance produc... |
Etalab Open License 2.0 (etalab-2.0): https://spdx.org/licenses/etalab-2.0.html
These datasets concern unstructured data (articles) from news items detected by an event-based surveillance system, PADI-Web, between 2022 and 2023. Collected articles were manually annotated for relevance for epidemic intelligence purposes with the help of two epidemiologists. Extracted data include relevant articles (with two possible labels: epidemiological events or general information) and irrelevant information regarding three different diseases: Avian Influenza (AI), African Swine Fever (ASF) and West Nile Virus disease (WNV). This database is extensive as it covers different types of diseases (zoonotic, cross-border and vector-borne) and can be used to train or evaluate classification approaches that automatically identify written text on these disease events and classify it by relevance. The structure of the dataset is as follows:
Alert_id: Article identifier. Each article has a unique ID; if an article reports multiple events, it is duplicated and each line represents one event.
Title: Article title given by the news outlet.
hsource: URL of the news outlet reporting the article.
Source: Name of the news outlet reporting the article.
url: URL of the article reporting the considered event. Note that multiple articles can report the same event.
Issue_date: Date of the article's publication.
Country: Name of the country where the event happened.
Place_name: Name of the administrative area, city or district where the event happened; if none of these is mentioned in the text, the country's name is reported.
Administrative_division: The administrative level at which the information is reported (country, department, city, ...).
Disease_name: Name of the disease reported in the article.
Species_name: Name of the affected host reported in the article.
Manualclass: Manual classification (Relevant or Irrelevant).
Lat: Place_name latitude coordinates.
Lon: Place_name longitude coordinates.
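As one illustration of how the Manualclass labels could be used (this is a generic baseline, not the approach from the PADI-Web paper), the sketch below trains a TF-IDF + logistic-regression classifier on article titles; the CSV file name is a placeholder.

import pandas as pd
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline

df = pd.read_csv("padiweb_articles.csv")   # placeholder file name
X_train, X_test, y_train, y_test = train_test_split(
    df["Title"].fillna(""), df["Manualclass"], test_size=0.2, random_state=0)

clf = make_pipeline(TfidfVectorizer(), LogisticRegression(max_iter=1000))
clf.fit(X_train, y_train)
print("held-out accuracy:", clf.score(X_test, y_test))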
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Introduction
UK Power Networks maintains the network at voltage levels of 132 kV and below. An important part of the distribution network is distributing this electricity across our regions through circuits. Electricity enters our network through Super Grid Transformers at substations shared with National Grid, which we call Grid Supply Points. It is then sent across our 132 kV circuits towards our grid substations and primary substations. From there, electricity is distributed along the 33 kV circuits to bring it closer to the home. These circuits can be viewed on the single line diagrams in our Long-Term Development Statements (LTDS), and the underlying data is then found in the LTDS tables.
This dataset provides half-hourly current and power flow data across these named circuits from 2021 through to the previous month across our Eastern Power Networks (EPN) license area. The data is aligned with the same naming convention as the LTDS for improved interoperability.
Care is taken to protect the private affairs of companies connected to the 33 kV network, resulting in the redaction of certain circuits. Where redacted, we provide monthly statistics to continue to add value where possible. Where monthly statistics exist but half-hourly is absent, this data has been redacted.
To find the circuit you are looking for, use the 'ltds_line_name', which can be cross-referenced in the 33kV Circuits Monthly Data; that dataset describes, by month, which circuits were triaged, whether they could be made public, and the monthly statistics for that site.
If you want to download all this data, it is perhaps more convenient from our public sharepoint: Sharepoint
This dataset is part of a larger endeavour to share more operational data on UK Power Networks assets. Please visit our Network Operational Data Dashboard for more operational datasets.
Methodological Approach
The dataset is not derived; it consists of measurements from our network stored in our historian.
The measurements are taken from current transformers attached to the cable at the circuit breaker, and power is derived by combining these with data from voltage transformers physically attached to the busbar. The historian stores data on a report-by-exception basis, such that a certain deviation from the present value must be reached before a point measurement is logged. We extract the data using a 30-minute time-weighted averaging method to obtain half-hourly values. Where no measurements are logged in a period, the data provided is blank; because of the report-by-exception process, it may be appropriate to forward-fill this data for shorter gaps.
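A minimal sketch of the suggested forward-filling, using pandas and an invented half-hourly series; the two-step gap limit is a user choice, not something specified by the dataset.

import pandas as pd

# Hypothetical half-hourly series with some unlogged (blank) periods.
idx = pd.date_range("2021-01-01 00:00", periods=8, freq="30min")
ts = pd.Series([105.0, None, None, 102.0, None, None, None, 98.0], index=idx)

# Forward-fill only short gaps (here at most 2 consecutive missing half-hours).
print(ts.ffill(limit=2))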
We developed a data redaction process to protect the privacy of companies, in accordance with the Utilities Act 2000 section 105.1.b, which requires UK Power Networks not to disclose information relating to the affairs of a business. For this reason, where the demand of a private customer is derivable from our data and that data is not already public information (e.g., data provided via Elexon on the Balancing Mechanism), we redact the half-hourly time series and provide only the monthly averages. This redaction process considers the correlation of all the data, of only the corresponding periods where the customer is active, of the first-order difference of all the data, and of the first-order difference of only the corresponding periods where the customer is active. Should any of these four tests show a high linear correlation, the data is redacted. This process is not applied only to the customer's own circuit, but also to the surrounding circuits that would reveal the signal of that customer.
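An illustrative sketch of the four tests (not UK Power Networks' actual code): the circuit series is compared with a customer series on all periods, on active periods only, and on the first differences of both; the 0.95 threshold is an assumption for illustration.

import pandas as pd

def needs_redaction(circuit: pd.Series, customer: pd.Series, threshold: float = 0.95) -> bool:
    active = customer > 0                                      # periods where the customer is active
    tests = [
        circuit.corr(customer),                                # all data
        circuit[active].corr(customer[active]),                # active periods only
        circuit.diff().corr(customer.diff()),                  # first differences
        circuit.diff()[active].corr(customer.diff()[active]),  # first differences, active only
    ]
    return any(abs(c) >= threshold for c in tests if pd.notna(c))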
The directionality of the data is not consistent within this dataset. Where directionality was ascertainable, we arrange the power data in the direction of the LTDS "from node" to the LTDS "to node". Measurements of current do not indicate directionality and are positive regardless of direction. In some circumstances the polarity can be negative, depending on the data commissioner's decision on what the control room operators might find most helpful in ensuring reliable and secure network operation.
Quality Control Statement
The data is provided "as is".
In the design and delivery process adopted by the DSO, customer feedback and guidance is considered at each phase of the project. One of the earliest steers was that raw data was preferable. This means that we do not perform prior quality-control screening on our raw network data. As a result, network rearrangements and other periods of non-intact running of the network are present throughout the dataset, which has the potential to misrepresent the true utilisation of the network, which is determined for regulatory purposes by considering only intact running arrangements. Therefore, taking the maximum or minimum of these measurements is not a reliable method of ascertaining the true utilisation. This does have the intended benefit of giving a realistic view of how the network was operated. The critical feedback was that our customers want to understand what the impact to them would have been under real operational conditions; as such, this dataset offers unique insight into that.
Assurance Statement
Creating this dataset involved a lot of manual data input. At UK Power Networks, we have different software to run the network operationally (ADMS) and to plan and study the network (PowerFactory). The measurement devices primarily inform the network operators of the real-time condition of the network, and, importantly, the network drawings visible in the LTDS follow a planning approach, which differs from the operational one. To compile this dataset, we made the union between the two modes of operating manually. A team of data scientists, data engineers, and power system engineers manually identified the LTDS circuit from the single line diagram, identified the line name from LTDS Table 2a/b, then identified the same circuit in ADMS to identify the measurement data tags. This was then manually entered into a spreadsheet. Any customers influential to that circuit were noted using ADMS and the single line diagrams. From there, a Python script performs the triage and compilation of the datasets. There is potential for human error during the manual data processing. These issues can include missing circuits, incorrectly labelled circuits, incorrectly identified measurement data tags, and incorrectly interpreted directionality. Whilst care has been taken to minimise the risk of these issues, they may persist in the provided dataset. Any unexpected behaviour observed when using this data should be reported to allow us to correct it as quickly as possible.
Additional Information
Definitions of key terms related to this dataset can be found in the Open Data Portal Glossary. Download dataset information: Metadata (JSON). If you find this dataset useful, we would be grateful if you would submit a reuse case study to tell us what you did and how you used it; this helps us steer our direction and better understand how to improve our data offering in the future. Click here for more information: Open Data Portal Reuses — UK Power Networks. To view this data please register and login.
ILSVRC 2012, commonly known as 'ImageNet', is an image dataset organized according to the WordNet hierarchy. Each meaningful concept in WordNet, possibly described by multiple words or word phrases, is called a "synonym set" or "synset". There are more than 100,000 synsets in WordNet, the majority of them nouns (80,000+). In ImageNet, we aim to provide on average 1,000 images to illustrate each synset. Images of each concept are quality-controlled and human-annotated. Upon completion, we hope ImageNet will offer tens of millions of cleanly sorted images for most of the concepts in the WordNet hierarchy.
The test split contains 100K images but no labels because no labels have been publicly released. We provide support for the test split from 2012 with the minor patch released on October 10, 2019. In order to manually download this data, a user must perform the following operations:
The resulting tar-ball may then be processed by TFDS.
To assess the accuracy of a model on the ImageNet test split, one must run inference on all images in the split and export the results to a text file to be uploaded to the ImageNet evaluation server. The maintainers of the evaluation server permit a single user to submit up to 2 submissions per week in order to prevent overfitting.
To evaluate accuracy on the test split, one must first create an account at image-net.org. This account must be approved by the site administrator. After the account is created, one can submit results to the test server at https://image-net.org/challenges/LSVRC/eval_server.php. The submission consists of several ASCII text files corresponding to multiple tasks. The task of interest is "Classification submission (top-5 cls error)". A sample of an exported text file looks like the following:
771 778 794 387 650
363 691 764 923 427
737 369 430 531 124
755 930 755 59 168
The export format is described in full in "readme.txt" within the 2013 development kit available here: https://image-net.org/data/ILSVRC/2013/ILSVRC2013_devkit.tgz. Please see the section entitled "3.3 CLS-LOC submission format". Briefly, the text file contains 100,000 lines, one for each image in the test split. Each line of integers corresponds to the rank-ordered, top-5 predictions for that test image. The integers are 1-indexed, corresponding to the line number in the corresponding labels file; see labels.txt.
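A minimal sketch of producing a file in this format from top-5 predictions; the scores here are random and purely illustrative, and only a handful of lines are written (the real split needs 100,000).

import numpy as np

rng = np.random.default_rng(0)
n_images = 10                                  # the real test split has 100,000 images
scores = rng.random((n_images, 1000))          # fake per-class scores
top5 = np.argsort(-scores, axis=1)[:, :5] + 1  # rank-ordered, 1-indexed class labels

with open("submission.txt", "w") as f:
    for row in top5:
        f.write(" ".join(map(str, row)) + "\n")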
To use this dataset:
import tensorflow_datasets as tfds

# Load the training split (requires the manually downloaded ImageNet archives).
ds = tfds.load('imagenet2012', split='train')

# Inspect the first few examples; each is a dict with 'image' and 'label'.
for ex in ds.take(4):
  print(ex)
See the guide for more information on tensorflow_datasets.
Visualization: https://storage.googleapis.com/tfds-data/visualization/fig/imagenet2012-5.1.0.png
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Summary
This dataset contains bi-temporal image pairs with associated labels of change or no-change, describing whether there was a change in surface features between two images acquired over the same location at two different times. It contains images from four different instruments orbiting three different planetary bodies, each of which includes four representations of the bi-temporal image pair: composite grayscale, absolute difference, signed difference, and autoencoder bottleneck (described in detail in [1]). We also include the grayscale image tiles from which these representational datasets were created. All datasets contain 100x100 image tiles that were cropped from larger images. We describe each below.
Contents
hirise_rsl.zip : all subdirectories contain change and no-change examples represented as composite grayscale, absolute difference, signed difference, and autoencoder bottleneck from a before and after HiRISE image of recurring slope lineae on Mars. Subdirectory names are garni_XXXXXX_YYYYYY, where Garni is the name of the crater on Mars shown in the images, XXXXXX is the HiRISE image ID of the before image, and YYYYYY is the HiRISE image ID of the after image. The "*_lcn" ending on some directories indicates that local contrast normalization was applied. The "*_gs_illum" and "*_gs_slope" directories contain composite grayscale representations with a third band containing the difference in illumination (illum) or slope values, respectively, at the same locations. Images with filename suffixes _vflip, _hflip, _rot90, _rot180, and _rot270 are the result of vertical flips, horizontal flips, and 90-, 180-, and 270-degree rotations of the image with the corresponding prefix.
ctx_impacts.zip : all subdirectories contain change and no-change examples represented as composite grayscale, absolute difference, signed difference, and autoencoder bottleneck from a before and after CTX image of meteorite impacts on Mars. The prefix in each image name corresponds to the image pair described in the Appendix in [1].
lroc_impacts.zip : all subdirectories contain change and no-change examples represented as composite grayscale, absolute difference, signed difference, and autoencoder bottleneck from a before and after LROC image of meteorite and spacecraft landing impacts on the Moon. Filenames correspond to pair names provided in the Appendix in [1].
planet_misc.zip : all subdirectories contain change and no-change examples represented as composite grayscale, absolute difference, signed difference, and autoencoder bottleneck from a before and after PlanetScope image of miscellaneous processes on Earth. Filenames correspond to pair names provided in the Appendix in [1].
*_before_after_grayscale.zip : before and after grayscale tiles used to create the image representations in the above directories (indicated with _before and _after suffixes in filenames). Images with "_0_" in the filename have the label no-change and images with "_1_" in the filename have the label change.
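A minimal sketch of recovering labels from these filenames; the extracted directory name and the .png extension are assumptions.

from pathlib import Path

def label_from_name(name: str) -> int:
    return 1 if "_1_" in name else 0           # 1 = change, 0 = no-change

tiles = [(p, label_from_name(p.name))
         for p in Path("hirise_before_after_grayscale").glob("*.png")]
print(f"{sum(lbl for _, lbl in tiles)} change tiles out of {len(tiles)}")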
[1] Kerner et al. (2019) Deep Learning Methods Toward Generalized Change Detection on Planetary Surfaces. In review at Journal of Selected Topics in Earth Observations and Remote Sensing.
Attribution
If you use this dataset in your own work, please cite this DOI: 10.5281/zenodo.2373798 as well as the paper below:
Kerner, H. R., Wagstaff, K. L., Bue, B. D., Gray, P. C., Bell, J. F., & Amor, H. B. (2019). Toward generalized change detection on planetary surfaces with convolutional autoencoders and transfer learning. IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing, 12(10), 3900-3918.
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
This dataset consists of multiple files which contain bug prediction training data.
The entries in the dataset are JavaScript functions that are either buggy or non-buggy. Bug-related information was obtained from the EsLint project contained in BugsJS (https://github.com/BugsJS/eslint). The buggy instances were collected throughout the lifetime of the project; however, we added non-buggy entries from the latest version that is tagged as a fix (entries which were previously included as buggy were not later included as non-buggy).
The dataset is based on hybrid call graphs constructed with https://github.com/sed-szeged/hcg-js-framework. The result of this tool is a call graph in which each edge is associated with a confidence level indicating how likely it is that the edge is a valid call edge.
We used different threshold values above which edges were considered valid. The following threshold values were used:
The prefixes of the dataset file names come from the threshold used. The datasets include the coupling metrics NII (Number of Incoming Invocations) and NOI (Number of Outgoing Invocations), which were calculated by a static source code analyzer called SourceMeter. The hybrid counterparts of these metrics (HNII and HNOI) are based on the given threshold values.
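An illustrative sketch (not the hcg-js-framework implementation) of how thresholding confidence-weighted edges yields hybrid invocation counts per function:

from collections import Counter

edges = [                                  # (caller, callee, confidence) -- made-up example
    ("a", "b", 0.9),
    ("a", "c", 0.3),
    ("b", "c", 0.6),
]
threshold = 0.5
kept = [(u, v) for u, v, conf in edges if conf >= threshold]

hnoi = Counter(u for u, _ in kept)          # outgoing invocations per function (HNOI-like)
hnii = Counter(v for _, v in kept)          # incoming invocations per function (HNII-like)
print(dict(hnoi), dict(hnii))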
There are four variants for all of these datasets:
The static source code metrics contained in all datasets are the following:
A Certificate of Convenience and Necessity (CCN) is issued by the Public Utility Commission of Texas (PUCT) and authorizes a utility to provide water and/or sewer service to a specific service area. The CCN obligates the water or sewer retail public utility to provide continuous and adequate service to every customer who requests service in that area. The maps and digital data provided in the Water and Sewer CCN Viewer delineate the official CCN service areas and CCN facility lines issued by the PUCT and its predecessor agencies. This dataset is a Texas statewide polygon layer of sewer CCN service areas. The CCNs were digitized from Texas Department of Transportation (TxDOT) county mylar maps, the base maps on which the CCNs were originally drawn and maintained. CCNs are currently created and maintained using digitizing methods or coordinate geometry, or are imported from digital files submitted by the applicant. TxDOT digital county urban road files are used as the base maps on which the CCNs are geo-referenced. It is best to view the sewer CCN service area data in conjunction with the sewer CCN facility line data, since these two layers together represent all of the retail public sewer utilities in Texas.
Important Notes: The CCN spatial dataset and metadata were last updated on January 29, 2024. The official state-wide CCN spatial dataset includes all types of CCN service areas: water and sewer CCN service areas, and water and sewer CCN facility lines. This CCN spatial dataset is updated on a quarterly or as-needed basis using Geographic Information System (GIS) software, ArcGIS 10.8.2. The complete state-wide CCN spatial dataset is available for download from the following website: http://www.puc.texas.gov/industry/water/utilities/gis.aspx. The Water and Sewer CCN Viewer may be accessed from the following web site: http://www.puc.texas.gov/industry/water/utilities/map.html. If you have questions about this CCN spatial dataset or about CCN mapping requirements, please e-mail CCN Mapping Staff: water@puc.texas.gov
TYPE - Indicates whether a CCN is considered a water or a sewer system. If the CCN number begins with a "1", the CCN is considered a water system (utility). If it begins with a "2", the CCN is considered a sewer system (utility).
CCN_NO - A unique five-digit number assigned to each CCN when it is created and approved by the Commission. A CCN number starting with an 'N' indicates an exempt utility.
UTILITY - The name of the utility which owns the CCN.
COUNTY - The name(s) of the county(ies) in which the CCN exists.
CCN_TYPE - One of three types:
- Bounded Service Area: A certificated service area with closed boundaries that often follow identifiable physical and cultural features such as roads, rivers, streams and political boundaries.
- Facilities +200 Feet: A certificated service area represented by lines. They include a buffer of a specified number of feet (usually 200 feet). The lines normally follow along roads and may or may not correspond to distribution lines or facilities in the ground.
- Facilities Only: A certificated service area represented by lines. They are granted for a "point of use" that covers only the customer connections at the time the CCN is granted. Facility-only service lines normally follow along roads and may or may not correspond to distribution lines or facilities in the ground.
STATUS - For pending dockets, check the PUC Interchange Filing Search.
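A minimal sketch of the numbering rule above (water vs. sewer vs. exempt), with made-up example values:

def ccn_kind(ccn_no: str) -> str:
    if ccn_no.startswith("N"):
        return "exempt utility"
    if ccn_no.startswith("1"):
        return "water"
    if ccn_no.startswith("2"):
        return "sewer"
    return "unknown"

print(ccn_kind("11234"), ccn_kind("20123"), ccn_kind("N0001"))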