7 datasets found
  1. Integrated Dynamic Transit Operations (IDTO) Impact Assessment (IA)

    • data.transportation.gov
    • data.virginia.gov
    • +1 more
    application/rdfxml +5
    Updated Sep 27, 2017
    Cite
    Recommended citation: U.S. Department of Transportation Intelligent Transportation Systems Joint Program Office. (2015). Integrated Dynamic Transit Operations (IDTO) Impact Assessment (IA) [Dataset]. Provided by ITS DataHub through Data.transportation.gov. Accessed YYYY-MM-DD from http://doi.org/10.21949/1504483. Also available at https://data.transportation.gov/Automobiles/Integrated-Dynamic-Transit-Operations-IDTO-Impact-/as63-iaxk
    Explore at:
    Available download formats: application/rssxml, application/rdfxml, tsv, csv, json, xml
    Dataset updated
    Sep 27, 2017
    Dataset authored and provided by
    U.S. Department of Transportation’s (USDOT) Intelligent Transportation Systems (ITS) Joint Program Office (JPO)
    License

    Attribution-NonCommercial-ShareAlike 3.0 (CC BY-NC-SA 3.0): https://creativecommons.org/licenses/by-nc-sa/3.0/
    License information was derived automatically

    Description

    The majority of this set of data files was acquired under USDOT FHWA cooperative agreement DTFH61-12-D-00040 by Battelle and transferred to Volpe, The National Transportation Systems Center. Battelle and the Volpe Center coordinated to determine and attain the data necessary for the IDTO Impact Assessment. The Volpe Center was also under a cooperative agreement with USDOT FHWA (DTFH61-13-V-00008). Other data files within the set were generated by the Volpe Center for the purposes of the assessment.

    This legacy dataset was created before data.transportation.gov and is only currently available via the attached file(s). Please contact the dataset owner if there is a need for users to work with this data using the data.transportation.gov analysis features (online viewing, API, graphing, etc.) and the USDOT will consider modifying the dataset to fully integrate in data.transportation.gov.

  2. Film Circulation dataset

    • zenodo.org
    • data.niaid.nih.gov
    bin, csv, png
    Updated Jul 12, 2024
    Cite
    Skadi Loist; Evgenia (Zhenya) Samoilova (2024). Film Circulation dataset [Dataset]. http://doi.org/10.5281/zenodo.7887672
    Explore at:
    Available download formats: csv, png, bin
    Dataset updated
    Jul 12, 2024
    Dataset provided by
    Zenodo (http://zenodo.org/)
    Authors
    Skadi Loist; Evgenia (Zhenya) Samoilova
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    Complete dataset of “Film Circulation on the International Film Festival Network and the Impact on Global Film Culture”

    A peer-reviewed data paper for this dataset is under review for publication in NECSUS_European Journal of Media Studies, an open-access journal aiming to enhance data transparency and reusability; it will be available from https://necsus-ejms.org/ and https://mediarep.org

    Please cite this when using the dataset.


    Detailed description of the dataset:

    1 Film Dataset: Festival Programs

    The Film Dataset consists of a data scheme image file, a codebook and two dataset tables in csv format.

    The codebook (csv file “1_codebook_film-dataset_festival-program”) offers a detailed description of all variables within the Film Dataset. Along with the definition of variables it lists explanations for the units of measurement, data sources, coding and information on missing data.

    The csv file “1_film-dataset_festival-program_long” comprises a dataset of all films and the festivals, festival sections, and the year of the festival edition that they were sampled from. The dataset is structured in the long format, i.e. the same film can appear in several rows when it appeared in more than one sample festival. However, films are identifiable via their unique ID.

    The csv file “1_film-dataset_festival-program_wide” consists of the dataset listing only unique films (n=9,348). The dataset is in the wide format, i.e. each row corresponds to a unique film, identifiable via its unique ID. For easy analysis, and since the overlap is only six percent, in this dataset the variable sample festival (fest) corresponds to the first sample festival where the film appeared. For instance, if a film was first shown at Berlinale (in February) and then at Frameline (in June of the same year), the sample festival will list “Berlinale”. This file includes information on unique and IMDb IDs, the film title, production year, length, categorization in length, production countries, regional attribution, director names, genre attribution, the festival, festival section and festival edition the film was sampled from, and information whether there is festival run information available through the IMDb data.
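    As an illustration of how the long and wide tables relate, the following minimal R sketch loads both files and collapses the long table to one row per film. The column name film_id is an assumption for illustration; consult the codebook for the actual variable names.

    # Minimal sketch (assumed column name film_id); not part of the dataset's own scripts
    library(dplyr)

    films_long <- read.csv("1_film-dataset_festival-program_long.csv", stringsAsFactors = FALSE)
    films_wide <- read.csv("1_film-dataset_festival-program_wide.csv", stringsAsFactors = FALSE)

    # Collapse the long table to one row per unique film; the real wide table keeps the
    # *first* sample festival, which this simplified distinct() call only approximates
    # (row order decides which festival is kept here).
    films_collapsed <- films_long %>%
      distinct(film_id, .keep_all = TRUE)

    nrow(films_wide)       # about 9,348 unique films per the description above
    nrow(films_collapsed)  # should match if film_id is the unique identifier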


    2 Survey Dataset

    The Survey Dataset consists of a data scheme image file, a codebook and two dataset tables in csv format.

    The codebook “2_codebook_survey-dataset” includes coding information for both survey datasets. It lists the definition of the variables or survey questions (corresponding to Samoilova/Loist 2019), units of measurement, data source, variable type, range and coding, and information on missing data.

    The csv file “2_survey-dataset_long-festivals_shared-consent” consists of a subset (n=161) of the original survey dataset (n=454), where respondents provided festival run data for films (n=206) and gave consent to share their data for research purposes. This dataset consists of the festival data in a long format, so that each row corresponds to the festival appearance of a film.

    The csv file “2_survey-dataset_wide-no-festivals_shared-consent” consists of a subset (n=372) of the original dataset (n=454) of survey responses corresponding to sample films. It includes data only for those films for which respondents provided consent to share their data for research purposes. This dataset is in the wide format, i.e. the information for each response corresponding to a film is listed in one row. This includes data on film IDs, film title, survey questions regarding completeness and availability of provided information, information on number of festival screenings, screening fees, budgets, marketing costs, market screenings, and distribution. As the file name suggests, no data on festival screenings is included in the wide format dataset.
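    A minimal R sketch of linking the wide survey subset to the wide film table, assuming a shared identifier column; the name film_id is hypothetical and should be replaced with the ID variable documented in the codebooks.

    # Hedged sketch: join survey responses to film metadata on an assumed film_id column
    library(dplyr)

    survey_wide <- read.csv("2_survey-dataset_wide-no-festivals_shared-consent.csv")
    films_wide  <- read.csv("1_film-dataset_festival-program_wide.csv")

    survey_with_film_info <- survey_wide %>%
      left_join(films_wide, by = "film_id")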


    3 IMDb & Scripts

    The IMDb dataset consists of a data scheme image file, one codebook and eight datasets, all in csv format. It also includes the R scripts that we used for scraping and matching.

    The codebook “3_codebook_imdb-dataset” includes information for all IMDb datasets. This includes ID information and their data source, coding and value ranges, and information on missing data.

    The csv file “3_imdb-dataset_aka-titles_long” contains film title data in different languages scraped from IMDb in a long format, i.e. each row corresponds to a title in a given language.

    The csv file “3_imdb-dataset_awards_long” contains film award data in a long format, i.e. each row corresponds to an award of a given film.

    The csv file “3_imdb-dataset_companies_long” contains data on production and distribution companies of films. The dataset is in a long format, so that each row corresponds to a particular company of a particular film.

    The csv file “3_imdb-dataset_crew_long” contains data on names and roles of crew members in a long format, i.e. each row corresponds to one crew member of a given film. The file also contains a binary gender variable assigned to directors based on their first names using the GenderizeR application.

    The csv file “3_imdb-dataset_festival-runs_long” contains festival run data scraped from IMDb in a long format, i.e. each row corresponds to the festival appearance of a given film. The dataset does not include each film screening, but the first screening of a film at a festival within a given year. The data includes festival runs up to 2019.

    The csv file “3_imdb-dataset_general-info_wide” contains general information about films such as genre as defined by IMDb, languages in which a film was shown, ratings, and budget. The dataset is in wide format, so that each row corresponds to a unique film.

    The csv file “3_imdb-dataset_release-info_long” contains data about non-festival releases (e.g., theatrical, digital, TV, DVD/Blu-ray). The dataset is in a long format, so that each row corresponds to a particular release of a particular film.

    The csv file “3_imdb-dataset_websites_long” contains data on available websites (official websites, miscellaneous, photos, video clips). The dataset is in a long format, so that each row corresponds to a website of a particular film.

    The dataset includes eight text files containing the scripts used for web scraping. They were written in R 3.6.3 for Windows.

    The R script “r_1_unite_data” demonstrates the structure of the dataset that we use in the following steps to identify, scrape, and match the film data.

    The R script “r_2_scrape_matches” reads in the dataset with the film characteristics described in “r_1_unite_data” and uses various R packages to create a search URL for each film from the core dataset on the IMDb website. The script attempts to match each film from the core dataset to IMDb records by first conducting an advanced search based on the movie title and year, and then, if no matches are found, falling back to an alternative title and a basic search. The script scrapes the title, release year, directors, running time, genre, and IMDb film URL from the first page of suggested records on the IMDb website. The script then defines a loop that matches each film in the core dataset with the suggested films on the IMDb search page and records matching scores. Matching uses data on directors, production year (+/- one year), and title, with a fuzzy-matching approach based on two methods, “cosine” and “osa”: cosine similarity is used to match titles with a high degree of similarity, and the OSA algorithm is used to match titles that may have typos or minor variations.
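    The matching idea can be sketched with the stringdist package, which implements both the cosine and OSA distances mentioned above. The thresholds and example titles below are illustrative assumptions, not the authors' actual settings.

    # Hedged sketch of title matching with two string-distance methods (stringdist package)
    library(stringdist)

    match_candidates <- function(core_title, imdb_titles,
                                 cosine_max = 0.15, osa_max = 3) {
      cos_d <- stringdist(tolower(core_title), tolower(imdb_titles),
                          method = "cosine", q = 2)   # cosine distance on 2-grams
      osa_d <- stringdist(tolower(core_title), tolower(imdb_titles),
                          method = "osa")             # optimal string alignment (typos)
      # A suggested record counts as a candidate match if either distance is small
      which(cos_d <= cosine_max | osa_d <= osa_max)
    }

    match_candidates("Portrait of a Lady on Fire",
                     c("Portrait of a Lady on Fire",
                       "Portrait de la jeune fille en feu"))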

    The script “r_3_matching” creates a dataset with the matches for a manual check. Each pair of films (the original film from the core dataset and the suggested match from the IMDb website) was categorized into one of five categories: a) 100% match (perfect match on title, year, and director); b) likely good match; c) maybe match; d) unlikely match; and e) no match. The script also checks for possible doubles in the dataset and flags them for a manual check.

    The script “r_4_scraping_functions” creates the functions for scraping the data from the identified matches (based on the scripts described above and the manual check). These functions are used for scraping the data in the following scripts.

    The script “r_5a_extracting_info_sample” uses the functions defined in “r_4_scraping_functions” to scrape the IMDb data for the identified matches. It does this for the first 100 films only, to check that everything works; scraping the entire dataset took a few hours, so a test run on a subsample of 100 films is advisable.

    The script “r_5b_extracting_info_all” extracts the data for the entire dataset of the identified matches.

    The script “r_5c_extracting_info_skipped” checks the films with missing data (where data was not scraped) and tries to extract the data one more time, to make sure that the errors were not caused by disruptions in the internet connection or other technical issues.

    The script “r_check_logs” is used for troubleshooting and tracking the progress of all of the R scripts. It reports the number of missing values and errors.


    4 Festival Library Dataset

    The Festival Library Dataset consists of a data scheme image file, one codebook and one dataset, all in csv format.

    The codebook (csv file “4_codebook_festival-library_dataset”) offers a detailed description of all variables within the Library Dataset. It lists the definition of variables, such as location and festival name, and festival categories,

  3. The Guardian Reading Dataset

    • figshare.com
    csv
    Updated Jan 30, 2025
    Cite
    Frans Van der Sluis; Egon L. van den Broek (2025). The Guardian Reading Dataset [Dataset]. http://doi.org/10.6084/m9.figshare.27057958.v1
    Explore at:
    Available download formats: csv
    Dataset updated
    Jan 30, 2025
    Dataset provided by
    figshare
    Authors
    Frans Van der Sluis; Egon L. van den Broek
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    This repository hosts The Guardian Reading Dataset, designed to explore readers’ experiences with textual complexity, comprehensibility, and interest. The dataset captures detailed subjective and objective measures of readers’ interactions with a selection of articles from The Guardian, providing granular insights into how textual features impact reading engagement.

    This dataset includes data from 30 readers who participated in 540 reading sessions. Each participant evaluated 18 articles sampled at three levels of textual complexity (low, medium, high), determined by a readability algorithm (Van der Sluis et al., 2014). The data captures subjective appraisals of complexity, comprehensibility, and interest, alongside eye-tracking metrics that provide an objective view of readers’ processing difficulty and engagement with the text.

    Data File and Structure

    Data is stored in .csv format at the reading-session level, with each row corresponding to a unique reading session by a participant. Participant identifiers, demographic scales, and trait measures were dropped as part of anonymisation. Key columns are:

    • stimulus: Identifier for each article.
    • appraised_complexity: Participant’s perceived complexity of the article.
    • appraised_comprehensibility: Participant’s perceived comprehensibility of the article.
    • processing_fluency: Combined score of appraised complexity and appraised comprehensibility.
    • interest: Participant’s interest rating for the article.
    • topical_familiarity: Participant’s familiarity with the article’s topic.
    • familiarity_group: Grouping of articles into balanced blocks (either low, median, or high familiarity).
    • pupil_diameter: Average pupil diameter across the reading session, reflecting cognitive load during reading.
    • pupil_corrected: Baseline-corrected pupil diameter per reading session (details on preprocessing and correction can be found in Van der Sluis & van den Broek, 2023).
    • novelty_comprehensibility_scale_*: Semantic differentials rating the content as complex – simple (57), not familiar to me – very familiar to me, easy to read – difficult to read (59), easy to understand – hard to understand (60), comprehensible – incomprehensible (61), coherent – incoherent (62), interesting – uninteresting (63), boring – exciting (64). Of these, items 59, 60, 61, 62, and 63 were reverse scored in the resulting scales.
    • fam_answer_*: Participants’ topical familiarity rating for five topics per article. Note that each column covers different topics depending on the article.

    Data is also stored in .db format at the stimulus (text) level, with each row corresponding to a unique text. In addition to averaged measurements aggregated per article, key columns are:

    • text: Article content excerpt (the first 50 and last 50 words of the excerpt presented to study participants).
    • url: Source URL for full-text access.

    Purpose

    The purpose of this dataset is to facilitate the analysis of human responses to differences in textual complexity, with a focus on understanding how readers’ interest varies with complexity level. The controlled conditions and validated data make the dataset well suited for assessing the accuracy and applicability of models of textual complexity, ensuring that findings are both reliable and relevant to actual readers’ perceptions and experiences.

    Licensing

    The dataset contains textual excerpts and metadata from The Guardian, shared under The Guardian’s open licence terms (https://www.theguardian.com/info/2022/nov/01/open-licence-terms). Full-text sharing is restricted, but excerpts of up to 100 words may be used with proper attribution.

    References

    Van der Sluis, F., & van den Broek, E. L. (2023). Feedback beyond accuracy: Using eye-tracking to detect comprehensibility and interest during reading. Journal of the Association for Information Science and Technology, 74(1), 3–16. https://doi.org/10.1002/asi.24657

    Van der Sluis, F., van den Broek, E. L., Glassey, R. J., van Dijk, E. M. A. G., & de Jong, F. M. G. (2014). When complexity becomes interesting. Journal of the American Society for Information Science and Technology, 65(7), 1478–1500. https://doi.org/10.1002/asi.23095
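    A minimal R sketch of the reverse scoring described above, assuming the session-level CSV has been downloaded and the semantic differentials use a 1–7 response scale; the file name and the scale range are assumptions, so check the repository files and documentation.

    # Hedged sketch: reverse-score items 59-63 of the semantic-differential columns
    sessions <- read.csv("guardian_reading_sessions.csv")  # hypothetical file name

    reverse_items <- c(59, 60, 61, 62, 63)
    for (i in reverse_items) {
      col <- paste0("novelty_comprehensibility_scale_", i)
      sessions[[col]] <- 8 - sessions[[col]]  # 8 - x flips an assumed 1-7 scale
    }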

  4. Evidence for Resilient Livestock (ERL): Livestock Feed Nutritional Meta-dataset from Africa

    • dataverse.harvard.edu
    Updated Feb 3, 2025
    Cite
    Peter Richard Steward; Namita Joshi; Todd Stuart Rosenstock (2025). Evidence for Resilient Livestock (ERL): Livestock Feed Nutritional Meta-dataset from Africa [Dataset]. http://doi.org/10.7910/DVN/JIU3OK
    Explore at:
    Croissant (a format for machine-learning datasets; learn more at mlcommons.org/croissant)
    Dataset updated
    Feb 3, 2025
    Dataset provided by
    Harvard Dataverse
    Authors
    Peter Richard Steward; Namita Joshi; Todd Stuart Rosenstock
    License

    https://dataverse.harvard.edu/api/datasets/:persistentId/versions/1.0/customlicense?persistentId=doi:10.7910/DVN/JIU3OK

    Time period covered
    1970 - 2018
    Description

    Purpose, Nature, and Scope: This data collection focuses on extracting and organizing data on animal feed nutritional composition and digestibility as part of the Evidence for Resilient Agriculture (ERA) initiative. The dataset serves as a foundational resource for understanding livestock feed practices and their implications for productivity, sustainability, and environmental impact. While no specific research questions have been addressed yet, the dataset provides a critical input for ongoing and future research.

    Special Characteristics: The dataset includes:

    • Nutritional composition data (e.g., protein, fiber, and energy content) for various feed types.
    • Digestibility data (e.g., dry matter and energy digestibility) relevant to livestock systems.
    • Contextual metadata, such as livestock species, feed types, and geographic locations.

    The data is curated within the ERA data model, using a controlled vocabulary for consistency and interoperability.

    Applications: This dataset can be shared with collaborators who have developed emissions calculators requiring detailed information about the nutritional composition and digestibility of livestock diets. These tools use such data to estimate emissions and develop mitigation strategies by analyzing or modifying animal diets. More information: https://eragriculture.github.io/ERL/ERL_feed_data.html

    Methodology: The dataset was created using the ERA data model, leveraging a structured Excel-based extraction template (the Skinny Cow template) to systematically extract and compile data from published studies. The focus was on capturing nutritional composition, digestibility metrics, and associated metadata (e.g., livestock types, feed practices, and geographic details). For more details on how the dataset was created, see the ERL Feed Data documentation and the Guide to Livestock Data Analysis in the ERA Dataset.
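    As a hedged illustration of how the compiled feed records might be summarised once downloaded, the R sketch below groups records by feed type. The file and column names (era_livestock_feed.csv, feed_type, crude_protein_pct) are hypothetical placeholders; the ERA data model’s controlled vocabulary defines the real field names.

    # Hedged sketch: average crude protein by feed type (all names are placeholders)
    library(dplyr)

    feeds <- read.csv("era_livestock_feed.csv")

    feeds %>%
      group_by(feed_type) %>%
      summarise(mean_crude_protein = mean(crude_protein_pct, na.rm = TRUE),
                n_records = n())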

  5. Data from: Low-Temperature Geothermal Geospatial Datasets: An Example from Alaska

    • gdr.openei.org
    • data.openei.org
    • +2 more
    Updated Feb 6, 2023
    + more versions
    Cite
    Estefanny Davalos Elizondo; Amanda Kolker; Ian Warren (2023). Low-Temperature Geothermal Geospatial Datasets: An Example from Alaska [Dataset]. http://doi.org/10.15121/1997233
    Explore at:
    Dataset updated
    Feb 6, 2023
    Dataset provided by
    Office of Energy Efficiency and Renewable Energy (http://energy.gov/eere)
    Geothermal Data Repository
    National Renewable Energy Laboratory
    Authors
    Estefanny Davalos Elizondo; Amanda Kolker; Ian Warren
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Area covered
    Alaska
    Description

    This project is a component of a broader effort focused on geothermal heating and cooling (GHC) with the aim of illustrating the numerous benefits of incorporating GHC and geothermal heat exchange (GHX) into community energy planning and national decarbonization strategies. To better support private-sector investment, it is currently necessary to define and assess the potential of low-temperature geothermal resources. For shallow GHC/GHX fields, there is no formal compilation of subsurface characteristics shared among industry practitioners that can improve system design and operations. Alaska is specifically noted in this work because it has not previously received the same focus in geothermal potential evaluations as the contiguous United States. The methodology consists of leveraging relevant data to generate a baseline geospatial dataset of low-temperature resources (less than 150 degrees C) to compare and analyze information accessible to anyone trying to understand the potential of GHC/GHX and small-scale low-temperature geothermal power in Alaska (e.g., energy modelers, communities, planners, and policymakers). Importantly, this project identifies data related to (1) the evaluation of GHC/GHX in the shallow subsurface, and (2) the evaluation of low-temperature geothermal resource availability. Additionally, data is being compiled to assess the repurposing of oil and gas wells to contribute co-produced fluids toward the geothermal direct-use and heating and cooling resource potential. In this work we identified new data from three different datasets of isolated geothermal systems in Alaska and bottom-hole temperature data from oil and gas wells that can be leveraged to evaluate low-temperature geothermal resource potential. The goal of this project is to facilitate future deployment of GHC/GHX analysis and community-led programs and to update the low-temperature geothermal resource assessment of Alaska. A better understanding of shallow GHX potential will improve the design and operation of highly efficient GHC systems. The deployment and impact that can be achieved for low-temperature geothermal resources will contribute to decarbonization goals and facilitate widespread electrification by shaving and shifting grid loads.

    Most of the data uses the WGS84 coordinate system. However, each dataset comes from a different source and has a metadata file documenting its original coordinate system.
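    A minimal R sketch of bringing a layer delivered in another coordinate system into WGS84 with the sf package; the file name is a placeholder, and each dataset’s metadata file states its original CRS.

    # Hedged sketch: reproject a hypothetical layer to WGS84 (EPSG:4326)
    library(sf)

    wells <- st_read("ak_bottom_hole_temperatures.shp")  # placeholder file name
    wells_wgs84 <- st_transform(wells, crs = 4326)
    st_crs(wells_wgs84)$epsg  # 4326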

  6. Data from: S1 Dataset -

    • plos.figshare.com
    • figshare.com
    xls
    Updated Oct 10, 2023
    + more versions
    Cite
    Zhilong Qin; Tao Liu; Xingjin Yu; Lin Yang (2023). S1 Dataset - [Dataset]. http://doi.org/10.1371/journal.pone.0292158.s001
    Explore at:
    Available download formats: xls
    Dataset updated
    Oct 10, 2023
    Dataset provided by
    PLOS ONE
    Authors
    Zhilong Qin; Tao Liu; Xingjin Yu; Lin Yang
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    Market liquidity can reflect whether financial market conditions are favorable and is a primary concern for investors when making investment decisions. Investors’ psychological perception of, and confidence in, the quality of products (assets) are therefore particularly important. Using data on 264 of China’s online loan platforms from August 2017 to November 2018, we investigate the impact of investors’ negative psychological perceptions on platform liquidity. The empirical results suggest that investors’ negative psychological perceptions reduce platform liquidity and increase platform liquidity risk. Using the Baidu Search Index to measure investor sentiment, we find that negative psychological perceptions affect platform liquidity through investor sentiment, which provides a plausible channel for explaining the main conclusions. Heterogeneity analysis shows that the impact of investors’ negative psychological perceptions on platform liquidity is smaller for high-quality platforms with higher market share and higher registered capital. We also find that the impact is greater for private platforms, after the rectification policy, for platforms with positive net inflow, and for platforms in first- and second-tier cities and coastal cities. Precautionary financial regulatory policies, rather than ex post punishment, are therefore necessary. The findings of this article can help investors, platform managers, and regulatory agencies identify the liquidity characteristics of platforms, which can contribute to strengthening market liquidity management and financial risk control and provide some reference and support for formulating sustainable development policies in the financial industry.

  7. Impacts.

    • figshare.com
    • plos.figshare.com
    xls
    Updated Jun 25, 2025
    + more versions
    Cite
    Tianxing Zhu; Jinyang Liu; Daixing Zeng; Xuan Miao (2025). Impacts. [Dataset]. http://doi.org/10.1371/journal.pone.0326605.t003
    Explore at:
    Available download formats: xls
    Dataset updated
    Jun 25, 2025
    Dataset provided by
    PLOS ONE
    Authors
    Tianxing Zhu; Jinyang Liu; Daixing Zeng; Xuan Miao
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    The impact of policy uncertainty on A-share industry returns shows significant time-varying characteristics, amplified by industry input-output relationships. Traditional TVP-VAR models overlook these network structures, leaving spillover effects unquantified and systemic risk identification imprecise. To address this problem, this study embeds industry input-output tables as spatial weight matrices in a Time-Varying Parameter Spatial Autoregressive model, and Bayesian methods are introduced to estimate its parameters. Policy uncertainty is categorized into five dimensions: economic, fiscal, monetary, exchange rate, and trade. The empirical results reveal the following key findings. On average, network spillover effects explain approximately 39% of the response of A-share industry returns to policy uncertainty. Group analysis reveals that economic and fiscal policy uncertainties exhibit positive network effects, indicating a synergistic effect that amplifies their impact across industries. In contrast, exchange rate and trade policy uncertainties generate negative network effects, reflecting competitive or substitution effects. Systemic risk is most pronounced in the fiscal and trade policy uncertainty groups, and it increases across all policy uncertainty groups except trade, which shows a declining trend. This study provides a novel framework for understanding the dual nature of spillover effects in production networks, offering valuable insights for policymakers and investors seeking to manage systemic risks and identify synergistic and competitive effects in interconnected industries.
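    Schematically, a time-varying parameter spatial autoregressive specification of the kind described above can be written as follows; this generic form and notation are an assumption drawn from the standard SAR literature, not the authors' exact equations, with W denoting the input-output spillover matrix:

    \[ r_t = \rho_t W r_t + X_t \beta_t + \varepsilon_t, \qquad \varepsilon_t \sim \mathcal{N}(0, \Sigma_t) \]

    Here r_t stacks industry returns in period t, X_t collects the policy-uncertainty measures, and the time-varying \rho_t (network spillover strength) and \beta_t are the parameters estimated with Bayesian methods.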

